Speech synthesis method, system, device and storage medium

Document No.: 9758 | Publication date: 2021-09-17

1. A method of speech synthesis, comprising:

acquiring acoustic features of a text to be synthesized on a plurality of channels, wherein different channels correspond to different acoustic frequency bands;

predicting the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; and

performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

2. The method of claim 1, wherein the acquiring acoustic features of the text to be synthesized on a plurality of channels comprises:

receiving acoustic features of the text to be synthesized on a plurality of channels sent by a front-end module in a speech synthesis system, wherein the acoustic features of the text to be synthesized on the plurality of channels are obtained by the front-end module performing feature extraction on the text to be synthesized.

3. The method of claim 1, wherein the acquiring acoustic features of the text to be synthesized on a plurality of channels comprises:

acquiring initial speech corresponding to the text to be synthesized;

performing sub-band analysis on the initial speech by using filters corresponding to the plurality of channels, to obtain speech signals on the plurality of channels; and

performing feature extraction on the speech signals on the plurality of channels respectively, to obtain the acoustic features on the plurality of channels.

4. The method according to any one of claims 1-3, wherein the predicting the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels, comprises:

inputting the acoustic features on the plurality of channels into a multi-channel linear prediction network vocoder comprising a neural network combined with linear predictive coding; and

predicting the acoustic features on the plurality of channels by using the multi-channel linear prediction network vocoder, to obtain the linear prediction parameters and the nonlinear residuals on the plurality of channels.

5. The method of claim 4, wherein the predicting the acoustic features on the plurality of channels by using the multi-channel linear prediction network vocoder, to obtain the linear prediction parameters and the nonlinear residuals on the plurality of channels, comprises:

performing, by using a frame rate network in the multi-channel linear prediction network vocoder, frame-by-frame feature conversion on the acoustic features on the plurality of channels to obtain a condition vector;

performing linear predictive coding on the acoustic features on the plurality of channels respectively, to obtain the linear prediction parameters on the plurality of channels; and

predicting the nonlinear residuals on the plurality of channels by using a sampling rate network in the multi-channel linear prediction network vocoder, based on the condition vector and the linear prediction parameters on the plurality of channels.

6. The method according to claim 5, wherein the performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized, comprises:

in the multi-channel linear prediction network vocoder, for each channel, performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the channel, to obtain synthesized speech on the channel; and

superimposing the synthesized speech on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized.

7. The method according to claim 6, wherein the superimposing the synthesized speech on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized, comprises:

up-sampling the synthesized speech on the plurality of channels, to obtain synthesized speech with a specified sampling rate on the plurality of channels; and

superimposing the synthesized speech with the specified sampling rate on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized.

8. The method of claim 6, wherein the acoustic features on each channel comprise a plurality of sampling features, and the performing linear predictive coding on the acoustic features on the plurality of channels respectively, to obtain the linear prediction parameters on the plurality of channels, comprises:

for each channel, performing linear prediction on a current sampling feature on the channel and the synthesized speech corresponding to a previous sampling feature on the channel, to obtain a linear prediction parameter corresponding to the current sampling feature on the channel.

9. The method of claim 8, wherein the predicting the nonlinear residuals on the plurality of channels by using a sampling rate network in the multi-channel linear prediction network vocoder, based on the condition vector and the linear prediction parameters on the plurality of channels, comprises:

inputting the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features on the plurality of channels, and the nonlinear residuals corresponding to the previous sampling features on the plurality of channels output by the sampling rate network, into the sampling rate network for nonlinear prediction, to obtain the nonlinear residuals corresponding to the current sampling features on the plurality of channels.

10. The method of claim 9, wherein the sampling rate network comprises: a main sampling rate network and a plurality of sub-sampling rate networks corresponding to the plurality of channels; and

the inputting the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features on the plurality of channels, and the nonlinear residuals corresponding to the previous sampling features on the plurality of channels output by the sampling rate network, into the sampling rate network for nonlinear prediction, to obtain the nonlinear residuals corresponding to the current sampling features on the plurality of channels, comprises:

inputting the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features on the plurality of channels, and the nonlinear residuals corresponding to the previous sampling features on the plurality of channels output by the sampling rate network, into the main sampling rate network for vectorization processing, to obtain a parameter vector; and

inputting the parameter vector into the plurality of sub-sampling rate networks respectively for residual classification, to obtain the nonlinear residuals corresponding to the current sampling features on the plurality of channels.

11. The method of claim 10, wherein the main sampling rate network comprises, in order: a connection layer, a gated recurrent unit GRU_A, and a gated recurrent unit GRU_B; and each sub-sampling rate network comprises, in order: a dual fully-connected layer, a classifier, and a sampling layer.

12. A multi-channel linear prediction network vocoder, comprising: the system comprises a frame rate network supporting multi-channel input, a plurality of linear predictive coders LPC, a sampling rate network supporting multi-channel input and a synthesis network;

the frame rate network is configured to receive acoustic features of a text to be synthesized on a plurality of channels, perform frame-by-frame feature conversion on the acoustic features on the plurality of channels to obtain a condition vector, and output the condition vector to the sampling rate network;

the plurality of LPCs are configured to perform linear predictive coding on the acoustic features on the plurality of channels respectively, to obtain linear prediction parameters on the plurality of channels, and output the linear prediction parameters to the sampling rate network and the synthesis network;

the sampling rate network is configured to predict nonlinear residuals on the plurality of channels based on the condition vector and the linear prediction parameters on the plurality of channels, and output the nonlinear residuals to the synthesis network; and

the synthesis network is configured to perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

13. The vocoder of claim 12, wherein the synthesis network comprises: a plurality of synthesizing sub-networks corresponding to the plurality of channels, and a superimposing sub-network;

each synthesizing sub-network is configured to perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on its corresponding channel, to obtain synthesized speech on the corresponding channel, and output the synthesized speech to the superimposing sub-network; and

the superimposing sub-network is configured to superimpose the synthesized speech on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized.

14. The vocoder of claim 13, wherein the synthesis network further comprises:

an up-sampling module, configured to up-sample the synthesized speech on the plurality of channels to obtain synthesized speech with a specified sampling rate on the plurality of channels, and to output it to the superimposing sub-network, so that the superimposing sub-network superimposes the synthesized speech with the specified sampling rate on the plurality of channels to obtain the synthesized speech corresponding to the text to be synthesized.

15. The vocoder of claim 14, wherein the acoustic features on each channel comprise a plurality of sampling features, and each LPC is specifically configured to: perform linear prediction on a current sampling feature on its corresponding channel and the synthesized speech corresponding to a previous sampling feature on that channel, to obtain a linear prediction parameter corresponding to the current sampling feature on that channel.

16. The vocoder of claim 15, wherein the sampling rate network is specifically configured to: predict the nonlinear residuals corresponding to the current sampling features on the plurality of channels according to the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features on the plurality of channels, and the nonlinear residuals corresponding to the previous sampling features on the plurality of channels output by the sampling rate network.

17. The vocoder of claim 16, wherein the sampling rate network comprises: a main sampling rate network and a plurality of sub-sampling rate networks corresponding to the plurality of channels;

the main sampling rate network is configured to perform vectorization processing on the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features on the plurality of channels, and the nonlinear residuals corresponding to the previous sampling features on the plurality of channels output by the sampling rate network, to obtain a parameter vector, and output the parameter vector to the plurality of sub-sampling rate networks; and

the plurality of sub-sampling rate networks are configured to perform residual classification on the parameter vector respectively, to obtain the nonlinear residuals corresponding to the current sampling features on the plurality of channels.

18. The vocoder of claim 17, wherein the main sampling rate network comprises, in order: a connection layer, a gated recurrent unit GRU_A, and a gated recurrent unit GRU_B; and each sub-sampling rate network comprises, in order: a dual fully-connected layer, a classifier, and a sampling layer.

19. A speech synthesis device, comprising: a memory and a processor; the memory is configured to store a computer program; and the processor, coupled with the memory, is configured to execute the computer program to:

acquire acoustic features of a text to be synthesized on a plurality of channels, wherein different channels correspond to different acoustic frequency bands;

predict the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; and

perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

20. The device of claim 19, wherein the processor is specifically configured to:

input the acoustic features on the plurality of channels into a multi-channel linear prediction network vocoder comprising a neural network combined with linear predictive coding; and

predict the acoustic features on the plurality of channels by using the multi-channel linear prediction network vocoder, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels.

21. The device of claim 20, wherein the processor is specifically configured to:

perform, by using a frame rate network in the multi-channel linear prediction network vocoder, frame-by-frame feature conversion on the acoustic features on the plurality of channels to obtain a condition vector;

perform linear predictive coding on the acoustic features on the plurality of channels respectively, to obtain the linear prediction parameters on the plurality of channels; and

predict the nonlinear residuals on the plurality of channels by using a sampling rate network in the multi-channel linear prediction network vocoder, based on the condition vector and the linear prediction parameters on the plurality of channels.

22. The device of claim 21, wherein the processor is specifically configured to:

in the multi-channel linear prediction network vocoder, for each channel, perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the channel, to obtain synthesized speech on the channel; and

superimpose the synthesized speech on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized.

23. A method of speech synthesis, comprising:

receiving a speech synthesis request sent by a terminal device, wherein the speech synthesis request comprises a text to be synthesized;

performing feature extraction on the text to be synthesized, to obtain acoustic features of the text to be synthesized on a plurality of channels;

predicting the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels;

performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized; and

returning the synthesized speech to the terminal device, so that the terminal device outputs the synthesized speech.

24. The method according to claim 23, wherein, before the performing feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on a plurality of channels, the method further comprises:

judging whether a multi-channel speech synthesis scheme needs to be used according to an attribute of the text to be synthesized and/or a user attribute; and

if so, performing feature extraction on the text to be synthesized to obtain the acoustic features of the text to be synthesized on the plurality of channels.

25. The method of claim 24, further comprising:

if not, performing feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on a single channel;

performing speech synthesis on the acoustic features on the single channel by using a single-channel linear prediction network vocoder, to obtain synthesized speech corresponding to the text to be synthesized; and

returning the synthesized speech to the terminal device, so that the terminal device outputs the synthesized speech.

26. The method of claim 25, further comprising: receiving information sent by the terminal device requesting use of a multi-channel speech synthesis scheme;

wherein the performing feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on a plurality of channels comprises: performing, according to the information requesting use of the multi-channel speech synthesis scheme, feature extraction on the text to be synthesized to obtain the acoustic features of the text to be synthesized on the plurality of channels.

27. A speech synthesis system, comprising: a terminal device and a server device for speech synthesis;

wherein the terminal device is configured to send a speech synthesis request to the server device, the speech synthesis request comprising a text to be synthesized, and to receive and output the synthesized speech corresponding to the text to be synthesized returned by the server device; and

the server device is configured to: receive the speech synthesis request; perform feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on a plurality of channels; predict the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized; and return the synthesized speech to the terminal device.

28. The system of claim 27, wherein the terminal device is further configured to: determine that a multi-channel speech synthesis scheme needs to be used according to a user attribute, an attribute of the text to be synthesized, or an instruction of a user, and send information requesting use of the multi-channel speech synthesis scheme to the server device;

alternatively,

the server device is specifically configured to: perform feature extraction on the text to be synthesized according to the information requesting use of the multi-channel speech synthesis scheme sent by the terminal device, to obtain the acoustic features of the text to be synthesized on the plurality of channels.

29. The system of claim 27, wherein the server device is further configured to: judge whether a multi-channel speech synthesis scheme needs to be used according to an attribute of the text to be synthesized and/or a user attribute; and if so, perform feature extraction on the text to be synthesized to obtain the acoustic features of the text to be synthesized on the plurality of channels.

30. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the method of any one of claims 1-11 and 23-26.

Background

Speech synthesis, also known as text-to-speech (TTS) technology, is a technology for generating artificial speech by mechanical or electronic means. During speech synthesis, the front end and middle end are responsible for predicting compressed features of the speech, such as Mel-frequency cepstral coefficients (MFCC), from the text; the synthesis of audible speech from these compressed features is done by a vocoder.

A linear prediction network (LPCNet) vocoder is a variant of WaveRNN that combines a recurrent neural network (RNN) with linear prediction. By uniting deep learning and digital signal processing techniques, it greatly improves speech synthesis quality and is widely used in speech synthesis systems. However, the conventional LPCNet carries a certain amount of computational redundancy, and its synthesis efficiency is low.

Disclosure of Invention

Aspects of the present application provide a speech synthesis method, system, device and storage medium, so as to improve speech synthesis efficiency while ensuring speech synthesis quality.

An embodiment of the present application provides a speech synthesis method, comprising: acquiring acoustic features of a text to be synthesized on a plurality of channels, wherein different channels correspond to different acoustic frequency bands; predicting the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; and performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

An embodiment of the present application further provides a multi-channel linear prediction network vocoder, comprising: a frame rate network supporting multi-channel input, a plurality of linear predictive coders (LPC), a sampling rate network supporting multi-channel input, and a synthesis network. The frame rate network is configured to receive acoustic features of a text to be synthesized on a plurality of channels, perform frame-by-frame feature conversion on the acoustic features on the plurality of channels to obtain a condition vector, and output the condition vector to the sampling rate network; the LPCs are configured to perform linear predictive coding on the acoustic features on the plurality of channels respectively, to obtain linear prediction parameters on the plurality of channels, and output them to the sampling rate network and the synthesis network; the sampling rate network is configured to predict nonlinear residuals on the plurality of channels based on the condition vector and the linear prediction parameters on the plurality of channels, and output them to the synthesis network; and the synthesis network is configured to perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

An embodiment of the present application further provides a speech synthesis device, comprising: a memory and a processor; the memory is configured to store a computer program; and the processor, coupled with the memory, is configured to execute the computer program to: acquire acoustic features of a text to be synthesized on a plurality of channels, wherein different channels correspond to different acoustic frequency bands; predict the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; and perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

An embodiment of the present application further provides a speech synthesis method, comprising: receiving a speech synthesis request sent by a terminal device, wherein the speech synthesis request comprises a text to be synthesized; performing feature extraction on the text to be synthesized, to obtain acoustic features of the text to be synthesized on a plurality of channels; predicting the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized; and returning the synthesized speech to the terminal device so that the terminal device outputs the synthesized speech.

An embodiment of the present application further provides a speech synthesis system, comprising: a terminal device and a server device for speech synthesis. The terminal device is configured to send a speech synthesis request to the server device, the speech synthesis request comprising a text to be synthesized, and to receive and output the synthesized speech corresponding to the text to be synthesized returned by the server device. The server device is configured to receive the speech synthesis request, perform feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on a plurality of channels, predict the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels, perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels to obtain the synthesized speech corresponding to the text to be synthesized, and return the synthesized speech to the terminal device.

Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which, when executed by one or more processors, causes the one or more processors to perform the steps of the method of embodiments of the present application.

In the embodiment of the present application, a multi-channel linear prediction network vocoder supporting multi-channel input is provided; by acquiring the acoustic features of a text to be synthesized on a plurality of channels, the speech signal corresponding to the text to be synthesized can be synthesized with the multi-channel linear prediction network vocoder. Speech synthesis based on linear prediction ensures speech synthesis quality, and at the same time the multiple channels improve speech synthesis efficiency.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

fig. 1a is a schematic structural diagram of a multi-channel linear prediction network vocoder according to an exemplary embodiment of the present application;

fig. 1b is a schematic structural diagram of another multi-channel linear prediction network vocoder according to an exemplary embodiment of the present application;

fig. 1c is a schematic structural diagram of another multi-channel linear prediction network vocoder according to an exemplary embodiment of the present application;

fig. 1d is a schematic structural diagram of another multi-channel linear prediction network vocoder according to an exemplary embodiment of the present application;

FIG. 2a is a flow chart of a speech synthesis method provided by an exemplary embodiment of the present application;

FIG. 2b is a flow chart of another speech synthesis method provided by an exemplary embodiment of the present application;

FIG. 2c is a flowchart of another speech synthesis method provided by an exemplary embodiment of the present application;

FIG. 2d is a flowchart of another speech synthesis method provided by an exemplary embodiment of the present application;

FIG. 3 is a schematic diagram of a speech synthesis system according to an exemplary embodiment of the present application;

fig. 4 is a schematic structural diagram of a speech synthesis apparatus according to an exemplary embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

To address the technical problem of low speech synthesis efficiency in the prior art, an embodiment of the present application provides a multi-channel linear prediction network (LPCNet) vocoder. The multi-channel LPCNet vocoder comprises a neural network combined with linear predictive coding and supports multi-channel input; by acquiring the acoustic features of a text to be synthesized on a plurality of channels, the speech signal corresponding to the text to be synthesized can be synthesized with the multi-channel LPCNet vocoder. Speech synthesis based on linear prediction ensures speech synthesis quality, and at the same time the multiple channels improve speech synthesis efficiency.

The implementation structure of the multi-channel linear prediction network vocoder and the speech synthesis process based on the same provided by the embodiments of the present application will be described in detail below with reference to the accompanying drawings and specific embodiments.

Fig. 1a is a schematic structural diagram of a multi-channel linear prediction network (LPCNet) vocoder according to an embodiment of the present application. As shown in fig. 1a, the multi-channel LPCNet vocoder 100 includes: a frame rate network 10, a sampling rate network 20, a synthesis network 30, and a plurality of linear predictive coders (LPC) 40.

The multi-channel LPCNet vocoder 100 of the present embodiment combines digital signal processing (DSP) and neural network (NN) techniques, and is mainly used for performing speech synthesis from acoustic features. The LPCNet vocoder 100 of this embodiment employs linear predictive coding in the speech synthesis process, and then synthesizes speech according to the linear prediction parameters obtained by linear predictive coding. The principle of linear predictive coding is as follows: the speech is approximated by a linear combination of a set of speech sample values at past time instants, and the linear prediction parameters at the current time instant are determined on the principle that the sum of the squared differences between the actual sample values and the linear prediction is minimal.

In the process of speech synthesis, linear predictive coding can be used to determine the linear prediction parameters because of the short-time invariance of the speech signal: the model of the vocal organs can be approximated as a linear time-invariant system, so generation of the speech signal is equivalent to exciting the vocal tract tube with a unit pulse sequence. The corresponding difference equation is x(n) = a1·x(n−1) + a2·x(n−2) + … + aj·x(n−j) + e(n), i.e., x(n) = Σᵢ aᵢ·x(n−i) + e(n) with i = 1, …, j; that is, the current speech sample value can be approximately represented as a linear combination of the sample values at several adjacent historical time instants. Here x(n) is the nth speech sample value (the current sample value), the aᵢ are the linear prediction parameters, e(n) is the nonlinear residual, and j is the maximum number of historical sample values used. Therefore, given the linear prediction parameters, the speech signal can be synthesized in reverse from the model expressed by this formula. Linear predictive coding provides very accurate prediction of speech parameters, which helps improve speech synthesis quality.
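
A minimal numerical sketch of this difference equation follows; the signal, coefficients and function names are illustrative toy values, not part of the patent:

```python
# Sketch of x(n) = sum_i a_i * x(n - i) + e(n) on a toy signal.
import numpy as np

def predict_sample(history: np.ndarray, a: np.ndarray) -> float:
    """Linear part of x(n): a_1*x(n-1) + ... + a_j*x(n-j)."""
    j = len(a)
    return float(np.dot(a, history[-j:][::-1]))

x = np.sin(0.1 * np.arange(200))    # toy "speech" signal
a = np.array([1.8, -0.9])           # example prediction coefficients, j = 2

# e(n) = x(n) - prediction: the part left for the sampling rate network to model.
e = np.array([x[n] - predict_sample(x[:n], a) for n in range(len(a), len(x))])
```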

Further, the multi-channel LPCNet vocoder 100 of the present embodiment supports a plurality of channels as input, performs speech synthesis processing on the acoustic features of each channel, and finally fuses the speech synthesis results of the plurality of channels to obtain full-band speech (i.e., the final speech synthesis result). Here, "a plurality" means two or more, and the number of channels supported by the multi-channel LPCNet vocoder 100 may be odd or even. Each channel corresponds to one acoustic frequency band, and different channels correspond to different acoustic frequency bands; different acoustic bands carry different acoustic features, which in turn carry different information required for speech synthesis. For example, the acoustic features in the low frequency band carry the speech content required for speech synthesis (i.e., what is being said can be known from the acoustic features in the low frequency band), while the acoustic features in the high frequency band relate to speech quality and help improve the sound quality of the synthesized speech.

In the present embodiment, since the multi-channel LPCNet vocoder 100 supports multiple channels, the acoustic features required for speech synthesis can be divided by frequency into acoustic features on multiple channels, and the acoustic features on different acoustic bands can be processed separately. Compared with the complete acoustic features required for speech synthesis, the acoustic features on each channel have a lower sampling rate and are far fewer in number; in the speech synthesis process only the acoustic features on each channel need to be processed, which reduces computational redundancy and improves speech synthesis efficiency.

The process of speech synthesis mainly refers to the conversion from text to speech. In view of this, the acoustic features of the text to be synthesized on the multiple channels can be acquired and input into the multi-channel LPCNet vocoder 100; the multi-channel LPCNet vocoder 100 performs speech synthesis on the acoustic features of each channel separately to obtain the synthesized speech on each channel, and finally the synthesized speech on the multiple channels is fused to obtain the synthesized speech corresponding to the text to be synthesized. The acoustic features of each channel comprise a plurality of sampling features, i.e., the acoustic features on each channel are composed of a series of sampling features.

The principle by which the multi-channel LPCNet vocoder 100 provided in this embodiment performs speech synthesis is as follows: the acoustic features on the plurality of channels are predicted by a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; speech synthesis is then performed according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain the synthesized speech corresponding to the text to be synthesized. In the multi-channel LPCNet vocoder 100, the neural network is divided into the frame rate network 10 and the sampling rate network 20, which cooperate to predict the nonlinear residuals on the plurality of channels. Further, the multi-channel LPCNet vocoder 100 also includes the plurality of LPCs 40 and the synthesis network 30: the plurality of LPCs 40 are responsible for predicting the linear prediction parameters on the plurality of channels, and the synthesis network 30 is responsible for performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels to obtain the synthesized speech corresponding to the text to be synthesized.

To support multi-channel input, the frame rate network 10, the sampling rate network 20 and the synthesis network 30 each accept multi-channel input, and the vocoder includes a plurality of LPCs 40, one per channel. Fig. 1a illustrates this with n channels, where n is a positive integer greater than 1. In the multi-channel LPCNet vocoder 100 shown in fig. 1a, the acoustic features f1-fn of the text to be synthesized on the n channels are input into the frame rate network 10 and the plurality of LPCs 40, with the acoustic features of one channel input into one LPC 40. The n linear predictive coders LPC 40 perform linear predictive coding on the acoustic features f1-fn on the n channels respectively, to obtain the linear prediction parameters p1-pn on the n channels, and output them to the sampling rate network 20 and the synthesis network 30. The frame rate network 10 receives the acoustic features f1-fn of the text to be synthesized on the n channels, performs frame-by-frame feature conversion on them to obtain a condition vector c, and outputs the condition vector c to the sampling rate network 20. The main role of the frame rate network 10 is to provide the condition vector c to the sampling rate network 20: the condition vector is computed once per frame (it is a frame-level feature) and remains unchanged for the duration of that frame. The frame length is not limited in this embodiment; it may be, for example, 10 ms or 20 ms, and can be set flexibly according to the actual situation. The number of sampling features contained in one frame varies with the sampling rate of the acoustic features.
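
As a worked example of this frame/sample relationship (the 16 kHz rate and 10 ms frame length are only the example values from the text):

```python
# One condition vector c per frame; every sample in that frame reuses it.
fs = 16_000                                # example channel sampling rate (Hz)
frame_ms = 10                              # example frame length from the text
samples_per_frame = fs * frame_ms // 1000  # -> 160 samples share one c
print(samples_per_frame)                   # 160
```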

In the present embodiment, the manner of acquiring the acoustic features of the text to be synthesized on the plurality of channels is not limited. For example, the multi-channel LPCNet vocoder 100 may be deployed separately and cooperate with a front-end module in a speech synthesis system to complete the speech synthesis process: the front-end module receives a text to be synthesized, performs feature extraction on it to obtain the acoustic features of the text on multiple channels, and then transmits those acoustic features to the multi-channel LPCNet vocoder 100 for speech synthesis, obtaining the synthesized speech corresponding to the text. As another example, the multi-channel LPCNet vocoder 100 may be integrated in a speech synthesis system: the speech synthesis system receives a speech synthesis request sent by a terminal device, performs feature extraction on the text to be synthesized carried in the request to obtain the acoustic features on multiple channels, and then performs speech synthesis on them with the multi-channel LPCNet vocoder 100 to obtain the synthesized speech corresponding to the text. As yet another example, the multi-channel LPCNet vocoder 100 may be deployed separately and cooperate with a speech preprocessing system: the speech preprocessing system receives a text to be synthesized and acquires the initial speech corresponding to it, performs sub-band analysis on the initial speech with filters corresponding to the channels to obtain speech signals on the multiple channels, and performs feature extraction on the speech signals on the multiple channels respectively to obtain the acoustic features on the multiple channels, which are then provided to the multi-channel LPCNet vocoder 100 for speech synthesis to obtain the synthesized speech corresponding to the text to be synthesized.
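
The sub-band analysis step in the third deployment can be sketched as follows; Butterworth filters are only a stand-in for whatever analysis filterbank an actual implementation uses, and the function and parameter names are illustrative:

```python
# Hedged sketch: split full-band speech s0 into n band-limited channel
# signals with one filter per channel, covering [0, fs/2) end to end.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def subband_analysis(s0: np.ndarray, fs: int, n_channels: int) -> list:
    edges = np.linspace(0.0, fs / 2, n_channels + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0.0:                      # lowest band: low-pass
            sos = butter(6, hi, btype="low", fs=fs, output="sos")
        elif hi >= fs / 2:                 # highest band: high-pass
            sos = butter(6, lo, btype="high", fs=fs, output="sos")
        else:                              # interior bands: band-pass
            sos = butter(6, [lo, hi], btype="band", fs=fs, output="sos")
        bands.append(sosfiltfilt(sos, s0))
    return bands
```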

In the embodiment of the present application, the sampling rate network 20 receives the condition vector c output by the frame rate network 10 and the linear prediction parameters p1-pn on the n channels output by the n LPCs 40, predicts the nonlinear residuals e1-en on the n channels based on them, and outputs the residuals to the synthesis network 30. The nonlinear residual on each channel is the difference between the actual speech signal corresponding to the acoustic features on that channel and the predicted speech signal. The synthesis network 30 performs speech synthesis according to the linear prediction parameters p1-pn and the nonlinear residuals e1-en on the n channels, to obtain the synthesized speech s corresponding to the text to be synthesized. In the synthesis network 30, speech synthesis may be performed per channel according to the linear prediction parameters and the nonlinear residuals on that channel to obtain the synthesized speech on that channel; the synthesized speech s1-sn on the n channels is then superimposed to obtain the synthesized speech s corresponding to the text to be synthesized.
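
Under the assumptions of the earlier sketch, the per-channel synthesis and the final superposition can be expressed as follows; the helper names are illustrative, and the residuals are assumed to have already been produced by the sampling rate network:

```python
# Sketch: on each channel, s(n) = sum_i a_i * s(n-i) + e(n), run
# autoregressively; the full-band output is the sample-wise sum of channels.
import numpy as np

def synthesize_channel(a: np.ndarray, e: np.ndarray) -> np.ndarray:
    j = len(a)
    s = np.zeros(len(e) + j)                 # zero history before the start
    for n in range(len(e)):
        s[n + j] = np.dot(a, s[n:n + j][::-1]) + e[n]
    return s[j:]

def superimpose(channels: list) -> np.ndarray:
    return np.sum(channels, axis=0)          # synthesized speech s
```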

The multi-channel LPCNet vocoder 100 provided in this embodiment can be applied in various scenarios requiring speech synthesis. For example, it can be applied in an online audiobook scenario, synthesizing the text content of an electronic book into speech signals in real time and outputting them to the reader. As another example, it can be applied in various instant messaging software, converting all or part of the text content in a chat window into speech signals according to the chat user's needs and outputting them to the user. In any application scenario, the multi-channel LPCNet vocoder 100 may be implemented on the terminal device side or deployed on a server device. The server device may be a conventional server, a cloud server or a server array, or a virtual machine or container deployed on a server. When the multi-channel LPCNet vocoder 100 is deployed on a server device, the terminal device may upload the text to be synthesized to the server device; the server device extracts the acoustic features on the multiple channels from the text, synthesizes the speech signal with the multi-channel LPCNet vocoder 100, and returns it to the terminal device, where it is played to the user. Taking an electronic book as an example, the e-book client submits the text content being read to the server device over the network; the server device synthesizes the text content into a speech signal with the multi-channel LPCNet vocoder 100 and returns it over the network; the e-book client then plays the speech signal to the reader through its audio module. In the multi-channel LPCNet vocoder 100, speech synthesis based on linear prediction ensures speech synthesis quality, while the multiple channels improve speech synthesis efficiency.

In the embodiment of the present application, the internal implementation structures of the frame rate network 10, the sampling rate network 20, and the synthesis network 30 are not limited, and any internal structures that can implement corresponding functions are applicable to the embodiment of the present application. In the following embodiments, internal implementation structures of the frame rate network 10, the sampling rate network 20, and the synthesis network 30 will be given exemplarily.

In the embodiment of the present application, the acoustic features on each channel comprise a plurality of sampling features. Within each LPC 40, calculation is performed in units of sampling features. When performing linear prediction, each LPC 40 may perform linear prediction on the current sampling feature of its corresponding channel and the synthesized speech corresponding to the previous sampling feature of that channel, to obtain the linear prediction parameter corresponding to the current sampling feature. As shown in fig. 1b, the input to the frame rate network 10 is the acoustic features f1-fn on the n channels, and the output is the condition vector c; the input of each LPC 40 is the acoustic features on its channel and the output is the corresponding linear prediction parameters, so the input of the n LPCs 40 is the acoustic features f1-fn on the n channels and the output is the linear prediction parameters p1-pn on the n channels. Accordingly, when predicting the nonlinear residuals on the n channels, the sampling rate network 20 is specifically configured to: predict the nonlinear residuals e1-en corresponding to the current sampling features on the n channels according to the condition vector c, the linear prediction parameters p1-pn corresponding to the current sampling features on the n channels, the synthesized speech s1₋₁-sn₋₁ corresponding to the previous sampling features on the n channels, and the nonlinear residuals e1₋₁-en₋₁ corresponding to the previous sampling features on the n channels output by the sampling rate network 20. As shown in fig. 1b, the inputs of the sampling rate network 20 are the condition vector c, the linear prediction parameters p1-pn corresponding to the current sampling features, the synthesized speech s1₋₁-sn₋₁ corresponding to the previous sampling features, and the nonlinear residuals e1₋₁-en₋₁ corresponding to the previous sampling features on the n channels output by the sampling rate network 20; the output is the nonlinear residuals e1-en corresponding to the current sampling features on the n channels.

Further, as shown in fig. 1b, the sampling rate network 20 includes a main sampling rate network (Main Sample Rate Network) 21 and a plurality of sub-sampling rate networks (Sub Sample Rate Networks) 22 corresponding to the plurality of channels. The main sampling rate network 21 performs vectorization processing on the condition vector c, the linear prediction parameters p1-pn corresponding to the current sampling features on the multiple channels, the synthesized speech s1₋₁-sn₋₁ corresponding to the previous sampling features on the multiple channels, and the nonlinear residuals e1₋₁-en₋₁ corresponding to the previous sampling features on the multiple channels output by the sampling rate network 20, to obtain a parameter vector q, and outputs the parameter vector q to the sub-sampling rate networks 22. Each sub-sampling rate network 22 receives the parameter vector q output by the main sampling rate network 21 and performs residual classification on it, to obtain the nonlinear residual (one of e1-en) corresponding to the current sampling feature on its corresponding channel.

Further, as shown in figs. 1c and 1d, the frame rate network 10 includes: two convolutional layers 11 and 12 with filter size 3, one residual connection layer 13 and two fully connected layers 14 and 15. The acoustic features f1-fn on the n channels first pass through the two convolutional layers (conv 3x1) 11 and 12, producing a 5-frame receptive field (two frames before and two after); the outputs of the two convolutional layers are added to the input at the residual connection layer 13, which is followed by the two fully connected layers 14 and 15, so that the frame rate network 10 outputs a condition vector c of a certain dimension (e.g., 128 dimensions) for use by the sampling rate network 20. The condition vector c remains unchanged for the duration of each frame.
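
A hedged PyTorch sketch of this frame rate network follows. Only the two conv-3 layers, the residual connection, the two fully connected layers and the 128-dim output come from the text; the layer widths, tanh nonlinearities and the projection on the residual path are assumptions:

```python
import torch
import torch.nn as nn

class FrameRateNetwork(nn.Module):
    """Per-frame features in, one condition vector c per frame out."""
    def __init__(self, feat_dim: int, cond_dim: int = 128):
        super().__init__()
        # two conv layers with filter size 3 -> a 5-frame receptive field
        self.conv1 = nn.Conv1d(feat_dim, cond_dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(cond_dim, cond_dim, kernel_size=3, padding=1)
        self.res_proj = nn.Linear(feat_dim, cond_dim)  # residual path (assumed)
        self.fc1 = nn.Linear(cond_dim, cond_dim)
        self.fc2 = nn.Linear(cond_dim, cond_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim), channel features concatenated
        x = feats.transpose(1, 2)
        h = torch.tanh(self.conv2(torch.tanh(self.conv1(x)))).transpose(1, 2)
        h = h + self.res_proj(feats)                   # residual connection
        return torch.tanh(self.fc2(torch.tanh(self.fc1(h))))  # condition vector c
```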

Further, as shown in figs. 1c and 1d, the main sampling rate network 21 comprises, in order: a connection layer 211, a gated recurrent unit GRU_A 212 and a gated recurrent unit GRU_B 213. The connection layer 211 connects the condition vector c, the linear prediction parameters p1-pn corresponding to the current sampling features on the n channels, the synthesized speech s1₋₁-sn₋₁ corresponding to the previous sampling features on the n channels, and the nonlinear residuals e1₋₁-en₋₁ corresponding to the previous sampling features on the n channels output by the sampling rate network 20, to form a feature vector. The gated recurrent unit GRU_A 212 and the gated recurrent unit GRU_B 213 perform recurrent calculation on the connected feature vector to finally obtain the parameter vector q. As shown in fig. 1c, each sub-sampling rate network 22 comprises, in order: a dual fully-connected layer (dualFC) 221, a classifier (softMax) 222 and a sampling layer (Sampling) 223. The dual fully-connected layer 221 predicts the speech synthesis value of its channel and determines whether that value is within a preset threshold range, and if so, takes the predicted value as the speech synthesis value of the channel; the classifier 222 then computes a probability distribution function over the speech synthesis values, and the sampling layer 223 samples according to the probability distribution function to obtain the nonlinear residual prediction value corresponding to the current sampling feature of the corresponding channel, which it outputs to the synthesis network 30.
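
The trunk-plus-heads structure just described can be sketched as below; the hidden sizes, the sequential stand-in for the dual fully-connected layer, and the 256-level residual quantization are assumptions not stated in the patent:

```python
import torch
import torch.nn as nn

class MainSampleRateNetwork(nn.Module):
    """Connection layer + GRU_A + GRU_B -> parameter vector q."""
    def __init__(self, in_dim: int, hidden: int = 384):
        super().__init__()
        self.gru_a = nn.GRU(in_dim, hidden, batch_first=True)
        self.gru_b = nn.GRU(hidden, hidden // 2, batch_first=True)

    def forward(self, c, p, s_prev, e_prev):
        x = torch.cat([c, p, s_prev, e_prev], dim=-1)  # connection layer
        h, _ = self.gru_a(x)
        q, _ = self.gru_b(h)
        return q

class SubSampleRateNetwork(nn.Module):
    """Per-channel head: dual FC -> classifier (softmax) -> sampling layer."""
    def __init__(self, q_dim: int, levels: int = 256):
        super().__init__()
        self.dual_fc = nn.Sequential(nn.Linear(q_dim, q_dim), nn.Tanh(),
                                     nn.Linear(q_dim, levels))

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        probs = torch.softmax(self.dual_fc(q), dim=-1)             # classifier
        return torch.distributions.Categorical(probs=probs).sample()  # sampling
```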

As shown in figs. 1b-1d, the synthesis network 30 includes n synthesizing sub-networks 31 corresponding to the n channels, and a superimposing sub-network 32. Each synthesizing sub-network 31 performs speech synthesis calculation according to the linear prediction parameters and the nonlinear residuals on its corresponding channel, obtains the synthesized speech on that channel, and outputs it to the superimposing sub-network 32. The superimposing sub-network 32 superimposes the synthesized speech s1-sn on the n channels to obtain the synthesized speech s corresponding to the text to be synthesized. In figs. 1b-1d, each synthesizing sub-network 31 is also illustrated, by way of example, as outputting the synthesized speech s1₋₁-sn₋₁ corresponding to the previous sampling features. In this embodiment, the implementation structures of the synthesizing sub-network 31 and the superimposing sub-network 32 are not limited; figs. 1b-1d are merely examples.

In an alternative embodiment, as shown in fig. 1d, the synthesis network 30 further comprises an up-sampling module 33 corresponding to the n channels, configured to up-sample the synthesized speech s1-sn on the n channels to obtain synthesized speech with a specified sampling rate on the n channels. Up-sampling the synthesized speech on each channel through the up-sampling module 33 helps obtain synthesized speech meeting the specified sampling rate. For example, if the synthesized speech on each channel is at 4 kHz, up-sampling by a factor of 4 finally yields 16 kHz synthesized speech.
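
The 4 kHz-to-16 kHz example above can be sketched with polyphase resampling; the factor-of-4 value is the example from the text, and the helper name is illustrative:

```python
import numpy as np
from scipy.signal import resample_poly

def upsample_and_superimpose(channels: list, factor: int = 4) -> np.ndarray:
    """Raise each channel to the specified rate, then sum into full-band speech."""
    upsampled = [resample_poly(ch, up=factor, down=1) for ch in channels]
    return np.sum(upsampled, axis=0)
```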

Further, in an alternative embodiment, the multi-channel linear prediction network vocoder can be deployed separately and cooperate with a speech preprocessing system to complete the speech synthesis process. Based on this, the architecture shown in fig. 1d further includes a speech preprocessing system 50. The speech preprocessing system 50 comprises: a filter analysis layer (Analysis Filterbank) 51, a down-sampling module 52 and a feature extractor (Feature Extractor) 53. The filter analysis layer 51 receives the full-band initial speech s0 and performs sub-band analysis on it, obtaining the speech signals s0_1-s0_n on the n channels; the feature extractor 53 then performs feature extraction on the speech signals s0_1-s0_n to obtain the acoustic features f1-fn on the n channels, which are fed on the one hand into the frame rate network 10 and on the other hand into the n LPCs 40. Further optionally, before feature extraction, the speech signals s0_1-s0_n on the n channels can be down-sampled by the down-sampling module 52, reducing the number of sample values on each channel and saving computation.
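
A hedged sketch of one channel of this preprocessing chain (down-sampling followed by feature extraction); the log-energy "feature" is only a placeholder for the real feature extractor, and the frame length is an assumption:

```python
import numpy as np
from scipy.signal import resample_poly

def preprocess_channel(band: np.ndarray, down: int, frame_len: int = 160) -> np.ndarray:
    """Down-sampling module 52, then a toy stand-in for feature extractor 53."""
    x = resample_poly(band, up=1, down=down)           # fewer samples per channel
    usable = len(x) // frame_len * frame_len
    frames = x[:usable].reshape(-1, frame_len)         # assumed frame length
    return np.log(np.sum(frames ** 2, axis=1) + 1e-9)  # placeholder frame feature
```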

In the embodiment of the present application, the multi-channel linear prediction network vocoder supports multi-channel input; by inputting the acoustic features of a text to be synthesized on a plurality of channels into the multi-channel linear prediction network vocoder, the speech signal corresponding to the text can be synthesized. Speech synthesis based on linear prediction ensures speech synthesis quality, while the multiple channels improve speech synthesis efficiency.

Finally, the multi-channel linear prediction network vocoder provided by the embodiment of the present application may be implemented by hardware or software. In the case of hardware implementation, each network in fig. 1a is a hardware module, which may be implemented by, for example, a CPLD or an FPGA, but is not limited thereto. In the case of software implementation, each network in fig. 1a is a software module, and the multi-channel linear prediction network vocoder may be implemented as a computer program, which may be an application program, program code, plug-in, SDK, or the like, for example.

The embodiment of the application also provides a speech synthesis method realized based on the neural network combined with the linear predictive coding. As shown in fig. 2a, the method comprises:

21a, acquiring acoustic features of a text to be synthesized on a plurality of channels, wherein different channels correspond to different acoustic frequency bands.

22a, predicting the acoustic features on the plurality of channels by using a neural network combined with linear predictive coding, to obtain linear prediction parameters and nonlinear residuals on the plurality of channels.

23a, performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, to obtain synthesized speech corresponding to the text to be synthesized.

In this embodiment, speech synthesis is performed in a multi-channel manner, which improves speech synthesis efficiency compared with single-channel speech synthesis. Furthermore, linear predictive coding is combined with a neural network, and performing multi-channel speech synthesis with the neural network combined with linear predictive coding helps ensure speech synthesis quality.

In an alternative embodiment, a multi-channel linear prediction network vocoder is provided, which may be implemented in hardware or software. In either case, the multi-channel linear prediction network vocoder comprises a neural network combined with linear predictive coding, so the speech synthesis method of the embodiment of the present application can be implemented by the multi-channel linear prediction network vocoder: the acoustic features of the text to be synthesized on the plurality of channels are fed into the multi-channel linear prediction network vocoder, which predicts the acoustic features on the plurality of channels to obtain the linear prediction parameters and the nonlinear residuals on the plurality of channels, and then performs speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels, obtaining the synthesized speech corresponding to the text to be synthesized.

The deployment of the speech synthesis method, or of the multi-channel linear prediction network vocoder, provided in this embodiment is not limited. Different deployments lead to different implementations of step 21a, i.e., of obtaining the acoustic features of the text to be synthesized on the plurality of channels.

In an alternative embodiment, the speech synthesis method or the multi-channel linear prediction network vocoder provided in this embodiment may be deployed and implemented separately, and cooperate with a front-end module in a speech synthesis system to implement a speech synthesis process. Based on this, as shown in fig. 2b, another speech synthesis method includes:

21b, receiving acoustic features of a text to be synthesized on a plurality of channels, sent by a front-end module in the speech synthesis system; the acoustic features are obtained by the front-end module performing feature extraction on the text to be synthesized.

22b, predicting the acoustic features on the plurality of channels by using the neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels.

23b, performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels to obtain synthesized speech corresponding to the text to be synthesized.

In another alternative embodiment, the speech synthesis method or the multi-channel linear prediction network vocoder provided by the present embodiment may be integrated into a speech synthesis system. Based on this, as shown in fig. 2c, another speech synthesis method includes:

21c, receiving a speech synthesis request sent by a terminal device, wherein the speech synthesis request comprises a text to be synthesized.

22c, performing feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on a plurality of channels.

23c, predicting the acoustic features on the plurality of channels by using the neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels.

24c, performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels to obtain synthesized speech corresponding to the text to be synthesized.

25c, returning the synthesized speech to the terminal device so that the terminal device can output it.
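
As an illustration of this request/response flow, the following minimal Python sketch shows a server-side handler covering steps 21c through 25c; the endpoint, the JSON payload format, and the synthesize stand-in are hypothetical assumptions for illustration, not part of the original disclosure:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def synthesize(text):
        # Stand-in for steps 22c-24c (feature extraction, prediction with the
        # LPC-combined neural network, per-channel synthesis and merging).
        return b"\x00" * 160  # placeholder PCM bytes

    class SynthesisHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length))
            audio = synthesize(body["text"])        # 21c: text to be synthesized
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(audio)                 # 25c: return synthesized speech

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), SynthesisHandler).serve_forever()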

In yet another alternative embodiment, the speech synthesis method or the multi-channel linear prediction network vocoder provided by this embodiment may be deployed separately and cooperate with a speech preprocessing system to implement the speech synthesis process. The speech synthesis method can be applied to scenarios such as speech recognition, speech coding, and speaker recognition. Based on this, as shown in fig. 2d, another speech synthesis method includes:

21d, acquiring initial speech corresponding to the text to be synthesized.

22d, performing sub-band analysis on the initial speech by using filters corresponding to the plurality of channels to obtain speech signals on the plurality of channels.

23d, respectively performing feature extraction on the speech signals on the plurality of channels to obtain the acoustic features on the plurality of channels.

24d, predicting the acoustic features on the plurality of channels by using the neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels.

25d, performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels to obtain synthesized speech corresponding to the text to be synthesized.

In the embodiment shown in fig. 2d, the speech preprocessing system may be configured to receive the text to be synthesized and obtain the initial speech corresponding to it. The speech preprocessing system may comprise filters, a sampling module, and a feature extractor: sub-band analysis is performed on the initial speech through the filters to obtain speech signals on the plurality of channels; the speech signals on the channels are then sampled and their features extracted by the sampling module and the feature extractor, yielding the acoustic features on the plurality of channels.
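
As a rough illustration of this filter-plus-sampling stage, the following Python sketch splits a 16 kHz signal into four equal-width bands and critically downsamples each; the FIR filter design and band layout are simplified assumptions (a practical system would likely use, for example, a pseudo-QMF bank with near-perfect reconstruction):

    import numpy as np
    from scipy import signal

    def subband_analysis(speech, fs=16000, n_channels=4, numtaps=129):
        # Split the initial speech into n_channels equal-width frequency
        # bands with FIR filters, then downsample each band (the role of
        # the "sampling module" described above).
        edges = np.linspace(0.0, fs / 2, n_channels + 1)
        channels = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            if lo == 0.0:
                taps = signal.firwin(numtaps, hi, fs=fs)                        # low-pass
            elif hi >= fs / 2:
                taps = signal.firwin(numtaps, lo, fs=fs, pass_zero=False)       # high-pass
            else:
                taps = signal.firwin(numtaps, [lo, hi], fs=fs, pass_zero=False) # band-pass
            band = signal.lfilter(taps, 1.0, speech)
            channels.append(band[::n_channels])  # critical downsampling
        return np.stack(channels)

Calling subband_analysis on a 16 kHz waveform with four channels yields four 4 kHz band signals, one per channel, from which per-channel acoustic features can then be extracted.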

In any of the above embodiments, after the acoustic features of the text to be synthesized on the multiple channels are obtained, they may be fed into a multi-channel linear prediction network vocoder, which comprises a neural network combined with linear predictive coding. The vocoder predicts the acoustic features on the multiple channels to obtain linear prediction parameters and nonlinear residuals on the multiple channels, and then performs speech synthesis according to those parameters and residuals to obtain the synthesized speech corresponding to the text to be synthesized.

In the above alternative embodiments, whatever deployment the multi-channel linear prediction network vocoder adopts, the process of synthesizing speech from the acoustic features on the multiple channels is the same.

Further, the process of predicting the acoustic features on the plurality of channels with the multi-channel linear prediction network vocoder to obtain the linear prediction parameters and nonlinear residuals comprises the following. The vocoder's frame rate network performs feature conversion on the acoustic features on the plurality of channels frame by frame to obtain a condition vector; the duration of a frame is not limited and may be, for example, 10 ms or 20 ms, set flexibly according to the actual situation, and the number of sampling features contained in a frame varies with the sampling rate of the acoustic features. Separately, since a speech signal is short-time stationary and can be approximated as linear time-invariant, linear predictive coding is applied to the acoustic features on each channel: the current sample is approximated by a linear combination of a set of past samples, i.e., the current sampling value is linearly represented by the sampling values at several adjacent historical instants, and the linear prediction parameters on the plurality of channels are obtained by minimizing the sum of squared differences between the actual sampling values and the linear prediction. These parameters capture quantities required for speech synthesis, such as pitch, formants, the spectrum, and the vocal tract area function. Given the linear prediction parameters, the speech signal can be reconstructed, and linear predictive coding provides very accurate estimates of the speech parameters, which benefits synthesis quality. Finally, based on the condition vector and the linear prediction parameters on the plurality of channels, the nonlinear residuals on the plurality of channels are predicted by the sample rate network in the vocoder; the nonlinear residual on each channel is the difference between the actual speech signal corresponding to the acoustic features on that channel and the predicted speech signal.
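
To make the minimization concrete, here is a sketch of the standard autocorrelation/Levinson-Durbin computation of LPC coefficients for one frame; this is a textbook formulation shown for illustration, not code from the original disclosure:

    import numpy as np

    def lpc_coefficients(frame, order=16):
        # Choose coefficients a_1..a_p that minimize the sum of squared
        # differences between the actual samples and the linear prediction
        #   s_hat[n] = -(a_1*s[n-1] + ... + a_p*s[n-p]).
        n = len(frame)
        r = np.correlate(frame, frame, mode='full')[n - 1:n + order]  # lags 0..order
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0] + 1e-9        # prediction error energy (epsilon avoids /0)
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err       # reflection coefficient
            a_prev = a.copy()
            for j in range(1, i):
                a[j] = a_prev[j] + k * a_prev[i - j]
            a[i] = k
            err *= (1.0 - k * k)
        return a[1:], err        # coefficients a_1..a_p and residual energy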

After the linear prediction parameters and the nonlinear residuals on the channels are obtained, the multi-channel linear prediction network vocoder performs speech synthesis according to them, obtaining the synthesized speech corresponding to the text to be synthesized.

Optionally, performing speech synthesis according to the linear prediction parameters and the nonlinear residuals on the multiple channels to obtain the synthesized speech corresponding to the text to be synthesized comprises: in the multi-channel linear prediction network vocoder, for each channel, performing speech synthesis according to the linear prediction parameters and nonlinear residual on that channel to obtain the synthesized speech on that channel; and superimposing the synthesized speech on the multiple channels to obtain the synthesized speech corresponding to the text to be synthesized. Further optionally, the superimposing comprises: up-sampling the synthesized speech on each channel to obtain synthesized speech at a specified sampling rate on each channel; for example, if the synthesized speech on each channel is at 4 kHz, 4x up-sampling finally yields 16 kHz synthesized speech; and superimposing the up-sampled synthesized speech across the channels to obtain the synthesized speech corresponding to the text to be synthesized.
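
A minimal sketch of the up-sample-and-superimpose step, using scipy's polyphase resampler; the 4x factor mirrors the 4 kHz to 16 kHz example above, and the omission of per-band synthesis filters (which would shift each channel back to its own frequency band before summation) is a simplification:

    import numpy as np
    from scipy import signal

    def merge_channels(channel_waveforms, up_factor=4):
        # Up-sample each per-channel synthesized waveform (e.g., 4 kHz) to the
        # specified rate (e.g., 16 kHz after 4x up-sampling) and superimpose.
        # resample_poly performs zero insertion plus anti-imaging filtering.
        up = [signal.resample_poly(w, up_factor, 1) for w in channel_waveforms]
        n = min(len(w) for w in up)
        return np.sum([w[:n] for w in up], axis=0)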

In the embodiments of the present application, the acoustic features on each channel comprise a plurality of sampling features, so when the linear prediction parameters on the plurality of channels are computed by linear predictive coding, the computation is performed per sampling feature. Specifically, for each channel, linear prediction may be performed over the synthesized speech corresponding to the current sampling feature and the previous sampling features on that channel, yielding the linear prediction parameters corresponding to the current sampling feature on that channel. Accordingly, predicting the nonlinear residuals on the plurality of channels with the sample rate network, based on the condition vector and the linear prediction parameters, comprises: inputting the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features on the plurality of channels, and the nonlinear residuals corresponding to the previous sampling features (output earlier by the sample rate network) into the sample rate network for nonlinear prediction, obtaining the nonlinear residuals corresponding to the current sampling features on the plurality of channels.
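
The sample-by-sample interplay of linear prediction and nonlinear residual for one channel can be sketched as follows; residual_model is a hypothetical stand-in callable for the sample rate network, and the LPC sign convention matches the Levinson-Durbin sketch above:

    import numpy as np

    def synthesize_channel(lpc_a, residual_model, cond, n_samples):
        # Each output sample is the linear prediction from past synthesized
        # samples plus the nonlinear residual supplied by the sample rate
        # network (here, the hypothetical residual_model callable).
        order = len(lpc_a)
        s = np.zeros(n_samples + order)  # leading zeros serve as initial history
        e_prev = 0.0
        for n in range(order, n_samples + order):
            pred = -np.dot(lpc_a, s[n - order:n][::-1])      # linear prediction
            e = residual_model(cond, pred, s[n - 1], e_prev) # nonlinear residual
            s[n] = pred + e                                  # synthesized sample
            e_prev = e
        return s[order:]

With a zero residual model, for example residual_model = lambda c, p, s_prev, e_prev: 0.0, this degenerates to pure LPC synthesis, which makes the role of the predicted nonlinear residual explicit.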

In the embodiments of the present application, computing the nonlinear residuals corresponding to the current sampling features on the plurality of channels is implemented by the sample rate network, which comprises a main sample rate network and a plurality of sub sample rate networks corresponding to the plurality of channels. The condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features, and the nonlinear residuals corresponding to the previous sampling features output by the sample rate network are input into the main sample rate network for vectorization, producing a parameter vector; this parameter vector is then input into the sub sample rate networks corresponding to the respective channels for residual classification, yielding the nonlinear residuals corresponding to the current sampling features on the plurality of channels.

In the embodiments of the present application, the main sample rate network sequentially comprises a connection layer, a gated recurrent unit GRU_A, and a gated recurrent unit GRU_B; each sub sample rate network sequentially comprises a dual fully-connected layer, a classifier, and a sampling layer. In the main sample rate network, the connection layer concatenates the condition vector, the linear prediction parameters corresponding to the current sampling features on the plurality of channels, the synthesized speech corresponding to the previous sampling features, and the nonlinear residuals corresponding to the previous sampling features output by the sample rate network into a feature vector. The gated recurrent units GRU_A and GRU_B recurrently process the concatenated feature vector to finally obtain the parameter vector. In each sub sample rate network, the dual fully-connected layer estimates a speech synthesis value for its channel and judges whether that value lies within a preset threshold range; if so, the estimate is taken as the speech synthesis value of the channel. The classifier then computes a probability distribution function from this value, and the sampling layer samples from that distribution to obtain the predicted nonlinear residual corresponding to the current sampling feature of the channel.
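
A hedged PyTorch sketch of this architecture, with the connection layer realized as concatenation, GRU_A and GRU_B as gated recurrent units, and one dual-FC-plus-classifier sub-network per channel; all dimensions, the activation choices, and the sampling strategy are illustrative assumptions, not values fixed by the disclosure:

    import torch
    import torch.nn as nn

    class SampleRateNetwork(nn.Module):
        # Main network (connection layer -> GRU_A -> GRU_B) shared across
        # channels, then one sub-network per channel (dual fully-connected
        # layer -> classifier -> sampling layer).
        def __init__(self, cond_dim, n_channels, hidden=384, levels=256):
            super().__init__()
            # cond vector + per-channel (lpc prediction, prev sample, prev residual)
            in_dim = cond_dim + 3 * n_channels
            self.gru_a = nn.GRU(in_dim, hidden, batch_first=True)
            self.gru_b = nn.GRU(hidden, hidden // 2, batch_first=True)
            self.subnets = nn.ModuleList(
                nn.Sequential(nn.Linear(hidden // 2, hidden // 2), nn.Tanh(),
                              nn.Linear(hidden // 2, levels))
                for _ in range(n_channels))

        def forward(self, cond, lpc_pred, prev_sample, prev_residual):
            # Connection layer: concatenate all inputs into one feature vector.
            x = torch.cat([cond, lpc_pred, prev_sample, prev_residual], dim=-1)
            h, _ = self.gru_a(x)
            h, _ = self.gru_b(h)                 # parameter vector
            residuals = []
            for net in self.subnets:             # per-channel residual classification
                probs = torch.softmax(net(h), dim=-1)
                idx = torch.distributions.Categorical(probs).sample()
                residuals.append(idx)            # quantized residual index
            return torch.stack(residuals, dim=-1)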

In the embodiments of the present application, with the speech synthesis method implemented based on the multi-channel linear prediction network vocoder, the acoustic features of the text to be synthesized on the plurality of channels are input into the vocoder, which synthesizes the speech signal corresponding to the text. Speech synthesis based on linear prediction helps ensure synthesis quality, while the multi-channel structure improves synthesis efficiency.

In some optional embodiments of the present application, a multi-channel speech synthesis scheme and a single-channel speech synthesis scheme may both be provided, and a suitable scheme may be selected according to specific requirements to synthesize the text to be synthesized. The multi-channel scheme performs speech synthesis with a multi-channel linear prediction network vocoder; the single-channel scheme performs speech synthesis with a single-channel linear prediction network vocoder.

Specifically, in the embodiment shown in fig. 2c, before step 22c is executed, that is, before feature extraction is performed on the text to be synthesized to obtain its acoustic features on multiple channels, it may be determined whether the multi-channel speech synthesis scheme needs to be used according to the attributes of the text to be synthesized and/or the user attributes. If so, multi-channel feature extraction is performed on the text to obtain its acoustic features on the multiple channels, speech synthesis is performed with the multi-channel linear prediction network vocoder, and the synthesized speech is returned to the terminal device. If not, single-channel feature extraction is performed on the text to obtain its acoustic features on a single channel; speech synthesis is performed on those features with a single-channel linear prediction network vocoder to obtain the synthesized speech corresponding to the text; and the synthesized speech is returned to the terminal device so that the terminal device can output it.

The attributes of the text to be synthesized may be the text size and/or the text type; text types include, but are not limited to, Word documents, txt text, pdf text, pictures containing text, and the like. The user attributes may be the user's level, the user's location, the group type to which the user belongs, and so on. Optionally, whether the multi-channel speech synthesis scheme is needed may be determined solely from the attributes of the text to be synthesized. For example, a text size threshold corresponding to the multi-channel scheme may be preset and the size of the text compared against it; when the size of the text (for example, the number of words or bytes) exceeds the threshold, the multi-channel scheme is used, which improves synthesis efficiency. Optionally, whether the multi-channel scheme is needed may be determined solely from the user attributes corresponding to the text. For example, a user level threshold corresponding to the multi-channel scheme may be preset and the user level corresponding to the text compared against it; when that user level exceeds the threshold, the multi-channel scheme is used, which improves synthesis efficiency and ensures the experience of higher-level users. Of course, the attributes of the text and the user attributes may also be combined to make the determination, as sketched below.
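
A minimal sketch of such a combined decision; the threshold values and the OR combination rule are assumptions for illustration, not values fixed by the disclosure:

    def use_multichannel(text, user_level, size_threshold=2000, level_threshold=3):
        # Combine the text-attribute check and the user-attribute check.
        large_text = len(text) > size_threshold     # text size, e.g. in characters
        privileged = user_level > level_threshold   # user level
        return large_text or privileged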

In addition to selecting a suitable speech synthesis scheme according to the attributes of the text to be synthesized and/or the user attributes as described above, the user may be informed that both a multi-channel and a single-channel speech synthesis scheme are available, and the user decides which scheme to use. On this basis, the terminal device may determine which scheme is needed, according to the attributes of the text to be synthesized and/or the user attributes, or according to the user's instruction, and report the choice to the server device. For example, the terminal device may determine from information such as the size and type of the text and the user level that the multi-channel scheme is needed, and send the server device a request to use it. The server device then receives this request, performs feature extraction on the text according to it to obtain the acoustic features of the text on multiple channels, performs speech synthesis with the multi-channel linear prediction network vocoder, and returns the synthesized speech to the terminal device for output. The way the terminal device determines which scheme is needed from the text attributes and/or user attributes may be the same as or similar to the way the server device makes that determination, which can be found in the foregoing implementation and is not repeated here.

Further, an embodiment of the present application also provides a speech synthesis system. As shown in fig. 3, the speech synthesis system 1000 includes a terminal device 1200 and a server device 1100 for speech synthesis. The terminal device 1200 is configured to send a speech synthesis request containing a text to be synthesized to the server device 1100, receive the synthesized speech corresponding to that text returned by the server device 1100, and output it. The server device 1100 receives the speech synthesis request and performs feature extraction on the text to be synthesized to obtain its acoustic features on a plurality of channels; it then predicts those features with a neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; and it performs speech synthesis according to those parameters and residuals to obtain the synthesized speech corresponding to the text, which is returned to the terminal device 1200 so that the terminal device 1200 can play it.

For a detailed process of performing speech synthesis on acoustic features of multiple channels by using a multi-channel linear prediction network vocoder, reference may be made to the above-mentioned embodiments, which are not described herein again.

In the embodiment of the present application, implementation forms of the terminal device 1200 and the server device 1100 are not limited, where the terminal device 1200 may be a web server, a mobile phone, a tablet computer, a notebook computer, and other terminal devices; the server device 1100 may be a service system with front and back ends, may be a single server device, or may be a server array or a cloud server, and the like, which is not limited herein.

For example, while a user chats with an instant messaging application (e.g., DingTalk) on a terminal device (e.g., a computer or a smartphone), if some characters in the chat window need to be converted into a speech signal, the user can select the text content in the chat window to be converted; invoke the text-to-speech function through a trigger supported by the application (such as clicking a text-to-speech control); the application then sends the selected text content to its server; the server synthesizes the text content into a speech signal using the multi-channel linear prediction network vocoder and returns it to the application; and the application outputs the speech signal through a speaker.

For another example, a reading APP installed on the user's terminal device (e.g., a computer or a smartphone) supports automatic speech playback; the user can enable this function, turning reading into listening, and listen to favorite articles or novels at will. After the playback function is enabled, the user can add articles or novels to a reading list and click to start reading. The APP receives the start-reading instruction and, according to it, uploads the contents of the reading list to the server. The server runs a speech synthesis system in which the multi-channel linear prediction network vocoder provided by the embodiments of the present application is embedded; the server converts the articles or novels into speech signals with this system and returns them to the reading APP on the terminal device. The reading APP plays the speech signals through the terminal device's loudspeaker, so that the user's way of reading changes from watching to listening, which suits scenarios where watching is inconvenient.

In some optional embodiments of the present application, the terminal device may determine that the multi-channel speech synthesis scheme is needed, according to the user attributes, the attributes of the text to be synthesized, or the user's instruction, and send the server device a request to use the multi-channel scheme; accordingly, the server device, according to that request, performs feature extraction on the text to obtain its acoustic features on multiple channels, performs speech synthesis with the multi-channel linear prediction network vocoder, and returns the synthesized speech to the terminal device for output. Alternatively, the terminal device determines that the single-channel scheme is needed and sends the server device a request to use it; accordingly, the server device, according to that request, performs feature extraction on the text to obtain its acoustic features on a single channel, performs speech synthesis on those features with a single-channel linear prediction network vocoder to obtain the synthesized speech corresponding to the text, and returns the synthesized speech to the terminal device for output.

In some optional embodiments, the server device may itself determine whether the multi-channel speech synthesis scheme is needed according to the attributes of the text to be synthesized and/or the user attributes. If so, it performs multi-channel feature extraction on the text to obtain its acoustic features on multiple channels, performs speech synthesis with the multi-channel linear prediction network vocoder, and returns the synthesized speech to the terminal device for output. If not, it performs single-channel feature extraction on the text to obtain its acoustic features on a single channel, performs speech synthesis on those features with a single-channel linear prediction network vocoder to obtain the synthesized speech corresponding to the text, and returns the synthesized speech to the terminal device for output.

In the embodiments of the present application, the terminal device cooperates with the server device: the acoustic features of the text to be synthesized on multiple channels are obtained, and a multi-channel linear prediction network vocoder supporting multi-channel input is used to synthesize the speech signal corresponding to the text. Speech synthesis based on linear prediction helps ensure synthesis quality, while the multi-channel structure improves synthesis efficiency.

It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may execute the methods. For example, the execution subject of steps 21d to 25d may be device A; alternatively, the execution subject of steps 21d and 22d may be device A and the execution subject of steps 23d to 25d may be device B; and so on.

In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 21a, 22a, etc., are merely used for distinguishing various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.

Fig. 4 is a schematic structural diagram of a speech synthesis apparatus according to an exemplary embodiment of the present application. The speech synthesis apparatus includes a multi-channel linear prediction network vocoder. As shown in fig. 4, the speech synthesis apparatus includes: a memory 402 and a processor 401.

The memory 402 is used for storing a computer program and may be configured to store various other data to support operations on the speech synthesis device. Examples of such data include instructions for any application or method operating on the speech synthesis device, contact data, phonebook data, messages, pictures, videos, and the like.

The processor 401 is coupled to the memory 402 and executes the computer program stored in the memory 402 to: acquire acoustic features of a text to be synthesized on a plurality of channels, wherein different channels correspond to different acoustic frequency bands; predict the acoustic features on the plurality of channels using a neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; and perform speech synthesis according to the linear prediction parameters and the nonlinear residuals on the plurality of channels to obtain synthesized speech corresponding to the text to be synthesized.

In an optional embodiment, when acquiring the acoustic features of the text to be synthesized on the multiple channels, the processor 401 is specifically configured to: receiving acoustic characteristics of a text to be synthesized on a plurality of channels, which are sent by a front-end module in a speech synthesis system; the acoustic features of the text to be synthesized on the channels are obtained by extracting the features of the text to be synthesized by the front-end module.

In an optional embodiment, when acquiring the acoustic features of the text to be synthesized on the multiple channels, the processor 401 is specifically configured to: acquiring initial voice corresponding to a text to be synthesized; performing sub-band analysis on the initial voice by using filters corresponding to a plurality of channels to obtain voice signals on multiple channels; and respectively extracting the characteristics of the voice signals on the multiple channels to obtain the acoustic characteristics on the multiple channels.

In an optional embodiment, when predicting the acoustic features on the multiple channels by using the neural network combined with the linear prediction coding to obtain the linear prediction parameters and the non-linear residuals on the multiple channels, the processor 401 is specifically configured to: inputting the acoustic features on a plurality of channels into a multi-channel linear prediction network vocoder, wherein the multi-channel linear prediction network vocoder comprises a neural network combined with linear prediction coding; and predicting the acoustic characteristics on the channels by using a multi-channel linear prediction network vocoder to obtain linear prediction parameters and nonlinear residuals on the channels.

In an alternative embodiment, when the processor 401 predicts the acoustic features of the multiple channels by using a multi-channel linear prediction network vocoder to obtain linear prediction parameters and non-linear residuals of the multiple channels, the processor is specifically configured to: performing feature conversion on acoustic features on a plurality of channels by taking a frame as a unit to obtain a condition vector by utilizing a frame rate network in a multi-channel linear prediction network vocoder; respectively carrying out linear prediction coding on the acoustic features on the channels to obtain linear prediction parameters on the channels; and predicting the nonlinear residuals on the multiple channels by using a sampling rate network in the multi-channel linear prediction network vocoder based on the condition vectors and the linear prediction parameters on the multiple channels.

In an optional embodiment, when performing speech synthesis according to the linear prediction parameters and the non-linear residuals on the multiple channels to obtain a synthesized speech corresponding to the text to be synthesized, the processor 401 is specifically configured to: in a multi-channel linear prediction network vocoder, for each channel, performing voice synthesis according to linear prediction parameters and nonlinear residual errors on the channel to obtain synthesized voice on the channel; and overlapping the synthesized voices on the channels to obtain the synthesized voice corresponding to the text to be synthesized.

In an optional embodiment, when the processor 401 superimposes the synthesized voices on the multiple channels to obtain the synthesized voice corresponding to the text to be synthesized, the processor is specifically configured to: up-sampling the synthesized voices on the multiple channels to obtain the synthesized voices with specified sampling rates on the multiple channels; and overlapping the synthesized voices with the specified sampling rate on the channels to obtain the synthesized voice corresponding to the text to be synthesized.

In an embodiment of the present application, the acoustic features on each channel comprise a plurality of sampling features; when the processor 401 performs linear prediction coding on the acoustic features on the multiple channels respectively to obtain linear prediction parameters on the multiple channels, the processor is specifically configured to: and for each channel, performing linear prediction on the current sampling feature on the channel and the synthesized voice corresponding to the previous sampling feature on the channel to obtain a linear prediction parameter corresponding to the current sampling feature on the channel.

In an alternative embodiment, the processor 401, when predicting non-linear residuals over multiple channels using a sample rate network in a multi-channel linear prediction network vocoder based on the condition vector and linear prediction parameters over the multiple channels, is specifically configured to: and inputting the condition vector, the linear prediction parameters corresponding to the current sampling features on the channels, the synthesized voice corresponding to the previous sampling features on the channels and the nonlinear residual errors corresponding to the previous sampling features on the channels output by the sampling rate network into the sampling rate network for nonlinear prediction to obtain the nonlinear residual errors corresponding to the current sampling features on the channels.

In an embodiment of the present application, the sample rate network includes a main sample rate network and a plurality of sub-sample rate networks corresponding to the plurality of channels. Based on this, when the processor 401 inputs the condition vector, the linear prediction parameters corresponding to the current sampling features on the multiple channels, the synthesized speech corresponding to the previous sampling features on the multiple channels, and the non-linear residuals corresponding to the previous sampling features on the multiple channels output by the sampling rate network to the sampling rate network for performing non-linear prediction, so as to obtain the non-linear residuals corresponding to the current sampling features on the multiple channels, the processor is specifically configured to: inputting the condition vector, linear prediction parameters corresponding to current sampling features on a plurality of channels, synthetic speech corresponding to previous sampling features on the plurality of channels and nonlinear residual errors corresponding to previous sampling features on the plurality of channels output by a sampling rate network into a main sampling rate network for vectorization processing to obtain a parameter vector; and respectively inputting the parameter vectors into a plurality of sub-sampling rate networks to carry out residual error classification, thereby obtaining nonlinear residual errors corresponding to the current sampling characteristics on a plurality of channels.

In the embodiments of the present application, the main sample rate network sequentially comprises a connection layer, a gated recurrent unit GRU_A, and a gated recurrent unit GRU_B; each sub sample rate network sequentially comprises a dual fully-connected layer, a classifier, and a sampling layer.

In an optional embodiment, before performing feature extraction on the text to be synthesized to obtain the acoustic features of the text to be synthesized on the multiple channels, the processor 401 is further configured to: judging whether a multi-channel speech synthesis scheme needs to be used or not according to the attribute of the text to be synthesized and/or the user attribute; and if so, performing feature extraction on the text to be synthesized to obtain the acoustic features of the text to be synthesized on the plurality of channels.

Further, the processor 401 is further configured to: if not, perform feature extraction on the text to be synthesized to obtain its acoustic features on a single channel; perform speech synthesis on the acoustic features on the single channel using a single-channel linear prediction network vocoder to obtain synthesized speech corresponding to the text to be synthesized; and return the synthesized speech to the terminal device so that the terminal device can output it.

In an alternative embodiment, the processor 401 is further configured to: and receiving information which is sent by the terminal equipment and requests to use the multi-channel speech synthesis scheme. Further, when the processor 401 performs feature extraction on the text to be synthesized to obtain acoustic features of the text to be synthesized on multiple channels, the method specifically includes: and according to the information of the multi-channel speech synthesis scheme, performing feature extraction on the text to be synthesized to obtain the acoustic features of the text to be synthesized on a plurality of channels.

Further, as shown in fig. 4, the speech synthesis device further includes: a communication component 403, a display 407, a power component 408, an audio component 409, and other components. Only some components are schematically shown in fig. 4, which does not mean that the device includes only those components. The components within the dashed box in fig. 4 are optional rather than required, depending on the product form of the device. The device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smartphone, or an IoT device, or as a server device such as a conventional server, a cloud server, or a server array. If implemented as a terminal device, it may include the components within the dashed box in fig. 4; if implemented as a server device, it may omit them.

Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps in the method embodiments shown in fig. 2a to fig. 2d when executed.

The embodiments of the present application further provide another speech synthesis device, which has the same or similar structure as the speech synthesis device shown in fig. 4; for its internal structure, refer to the embodiment shown in fig. 4. It differs from the device of fig. 4 in the functions the processor 401 performs when executing the computer program stored in the memory. In this embodiment, the processor 401 executes the computer program in the memory 402 to: receive a speech synthesis request sent by a terminal device, wherein the request comprises a text to be synthesized; perform feature extraction on the text to obtain its acoustic features on a plurality of channels; predict the acoustic features on the plurality of channels using a neural network combined with linear predictive coding to obtain linear prediction parameters and nonlinear residuals on the plurality of channels; perform speech synthesis according to those parameters and residuals to obtain synthesized speech corresponding to the text; and return the synthesized speech to the terminal device so that the terminal device can output it. For the detailed process of performing speech synthesis on the acoustic features of the multiple channels with the multi-channel linear prediction network vocoder, refer to the foregoing embodiments, which are not repeated here.

Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps in the method embodiment shown in fig. 2c when executed.

The communication component of fig. 4 is configured to facilitate wired or wireless communication between the device in which it is located and other devices. That device can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE, or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component may further support Near Field Communication (NFC), Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), Bluetooth (BT), and the like.

The display in fig. 4 described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.

The power supply assembly of fig. 4 described above provides power to the various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.

The audio component of fig. 4 described above may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.

It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
