Method and apparatus for training an intention recognition model and for intention recognition


1. A method of training an intent recognition model, comprising:

acquiring training data containing a plurality of training texts and first labeling intentions of the plurality of training texts;

constructing a neural network model comprising a feature extraction layer and a first recognition layer, wherein the first recognition layer is used for outputting a first intention result of a training text and a score between each participle in the training text and a candidate intention according to a semantic vector of the candidate intention and a first semantic vector of each participle in the training text output by the feature extraction layer;

and training the neural network model according to the word segmentation results of the training texts and the first labeling intents of the training texts to obtain an intention recognition model.

2. The method of claim 1, wherein the feature extraction layer outputting a first semantic vector for each participle in a training text comprises:

for each training text, obtaining a word vector of each participle in the training text;

respectively obtaining a coding result and an attention calculation result of each participle according to the word vector of each participle;

and decoding a splicing result between the coding result of each participle and the attention calculation result, and taking the decoding result as a first semantic vector of each participle.

3. The method of claim 1, wherein the first recognition layer outputting the first intention result of the training text and the score between each participle in the training text and the candidate intention according to the semantic vector of the candidate intention and the first semantic vector of each participle in the training text output by the feature extraction layer comprises:

for each training text, obtaining a second semantic vector of each participle and a score between each participle and a candidate intention according to the first semantic vector of each participle in the training text and the semantic vector of the candidate intention;

and classifying according to the second semantic vector of each participle, and taking the classification result as a first intention result of the training text.

4. The method of claim 1, wherein the training the neural network model according to the word segmentation results of the plurality of training texts and the first labeled intents of the plurality of training texts to obtain an intention recognition model comprises:

respectively inputting word segmentation results of a plurality of training texts into the neural network model to obtain a first intention result output by the neural network model aiming at each training text;

calculating a loss function value according to the first intention results of the training texts and the first labeling intents of the training texts;

and adjusting parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated loss function values until the neural network model converges, to obtain the intention recognition model.

5. The method of claim 1, wherein the acquiring training data containing a plurality of training texts and first labeling intentions of the plurality of training texts comprises:

acquiring training data comprising a plurality of training texts, first labeling intentions of the plurality of training texts and second labeling intentions of the plurality of training texts.

6. The method of claim 5, wherein the constructing a neural network model including a feature extraction layer and a first recognition layer comprises:

and constructing a neural network model comprising a feature extraction layer, a first recognition layer and a second recognition layer, wherein the second recognition layer is used for outputting a second intention result of the training text according to the first semantic vector of each participle in the training text output by the feature extraction layer.

7. The method of claim 6, wherein the training the neural network model according to the word segmentation results of the plurality of training texts and the first labeled intents of the plurality of training texts to obtain an intention recognition model comprises:

respectively inputting word segmentation results of a plurality of training texts into the neural network model to obtain a first intention result and a second intention result output by the neural network model aiming at each training text;

calculating a first loss function value according to the first intention results of the training texts and the first labeling intents of the training texts, and calculating a second loss function value according to the second intention results of the training texts and the second labeling intents of the training texts;

and adjusting parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated first loss function value and second loss function value until the neural network model converges, to obtain the intention recognition model.

8. A method of intent recognition, comprising:

acquiring a text to be recognized;

inputting the word segmentation result of the text to be recognized into an intention recognition model, and obtaining a first intention result and a second intention result of the text to be recognized according to the output result of the intention recognition model;

wherein the intention recognition model is pre-trained according to the method of any one of claims 1-7.

9. The method of claim 8, wherein the obtaining a second intention result of the text to be recognized according to the output result of the intention recognition model comprises:

and obtaining a second intention result of the text to be recognized according to the scores between the participles and the candidate intentions in the text to be recognized output by the intention recognition model.

10. A training apparatus of an intention recognition model, comprising:

the first acquisition unit is used for acquiring training data containing a plurality of training texts and first labeling intentions of the plurality of training texts;

the construction unit is used for constructing a neural network model comprising a feature extraction layer and a first recognition layer, wherein the first recognition layer is used for outputting a first intention result of a training text and a score between each participle in the training text and a candidate intention according to a semantic vector of the candidate intention and a first semantic vector of each participle in the training text output by the feature extraction layer;

and the training unit is used for training the neural network model according to the word segmentation results of the training texts and the first labeling intents of the training texts to obtain an intention recognition model.

11. The apparatus according to claim 10, wherein the feature extraction layer constructed by the construction unit specifically performs, when outputting the first semantic vector of each participle in the training text:

for each training text, obtaining a word vector of each participle in the training text;

respectively obtaining a coding result and an attention calculation result of each participle according to the word vector of each participle;

and decoding a splicing result between the coding result of each participle and the attention calculation result, and taking the decoding result as a first semantic vector of each participle.

12. The apparatus according to claim 10, wherein the first recognition layer constructed by the construction unit specifically performs, when outputting the first intention result of the training text and the score between each participle in the training text and the candidate intention according to the semantic vector of the candidate intention and the first semantic vector of each participle in the training text output by the feature extraction layer:

for each training text, obtaining a second semantic vector of each participle and a score between each participle and a candidate intention according to the first semantic vector of each participle in the training text and the semantic vector of the candidate intention;

and classifying according to the second semantic vector of each participle, and taking the classification result as a first intention result of the training text.

13. The apparatus according to claim 10, wherein the training unit, when training the neural network model according to the word segmentation results of the plurality of training texts and the first labeled intentions of the plurality of training texts to obtain the intention recognition model, specifically performs:

respectively inputting word segmentation results of a plurality of training texts into the neural network model to obtain a first intention result output by the neural network model aiming at each training text;

calculating a loss function value according to the first intention results of the training texts and the first labeling intents of the training texts;

and adjusting parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated loss function values until the neural network model converges, to obtain the intention recognition model.

14. The apparatus according to claim 10, wherein the first acquisition unit, when acquiring the training data containing a plurality of training texts and first labeling intentions of the plurality of training texts, specifically performs:

acquiring training data comprising a plurality of training texts, first labeling intentions of the plurality of training texts and second labeling intentions of the plurality of training texts.

15. The apparatus according to claim 14, wherein the construction unit, when constructing the neural network model including the feature extraction layer and the first recognition layer, specifically performs:

and constructing a neural network model comprising a feature extraction layer, a first recognition layer and a second recognition layer, wherein the second recognition layer is used for outputting a second intention result of the training text according to the first semantic vector of each participle in the training text output by the feature extraction layer.

16. The apparatus according to claim 15, wherein the training unit, when training the neural network model according to the word segmentation results of the plurality of training texts and the first labeled intentions of the plurality of training texts to obtain the intention recognition model, specifically performs:

respectively inputting word segmentation results of a plurality of training texts into the neural network model to obtain a first intention result and a second intention result output by the neural network model aiming at each training text;

calculating a first loss function value according to the first intention results of the training texts and the first labeling intents of the training texts, and calculating a second loss function value according to the second intention results of the training texts and the second labeling intents of the training texts;

and adjusting parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated first loss function value and second loss function value until the neural network model converges, to obtain the intention recognition model.

17. An apparatus for intent recognition, comprising:

the second acquisition unit is used for acquiring the text to be recognized;

the recognition unit is used for inputting the word segmentation result of the text to be recognized into an intention recognition model and obtaining a first intention result and a second intention result of the text to be recognized according to the output result of the intention recognition model;

wherein the intention recognition model is pre-trained according to the apparatus of any one of claims 10-16.

18. The apparatus according to claim 17, wherein the recognition unit, when obtaining a second intention result of the text to be recognized according to the output result of the intention recognition model, specifically performs:

and obtaining a second intention result of the text to be recognized according to the scores between the participles and the candidate intentions in the text to be recognized output by the intention recognition model.

19. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.

20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-9.

21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.

Background

During human-computer dialogue interaction, the machine needs to understand the intention of a dialogue sentence. However, in the prior art, when the intention of a dialogue sentence is recognized, only one of the sentence-level intention and the word-level intention of the dialogue sentence is recognized; the two cannot be recognized at the same time.

Disclosure of Invention

According to a first aspect of the present disclosure, there is provided a training method of an intention recognition model, including: acquiring training data containing a plurality of training texts and first labeling intentions of the plurality of training texts; constructing a neural network model comprising a feature extraction layer and a first recognition layer, wherein the first recognition layer is used for outputting a first intention result of a training text and a score between each participle in the training text and a candidate intention according to a semantic vector of the candidate intention and a first semantic vector of each participle in the training text output by the feature extraction layer; and training the neural network model according to the word segmentation results of the plurality of training texts and the first labeling intentions of the plurality of training texts to obtain an intention recognition model.

According to a second aspect of the present disclosure, there is provided a method of intent recognition, comprising: acquiring a text to be identified; and inputting the word segmentation result of the text to be recognized into an intention recognition model, and obtaining a first intention result and a second intention result of the text to be recognized according to the output result of the intention recognition model.

According to a third aspect of the present disclosure, there is provided a training apparatus of an intention recognition model, including: a first acquisition unit, used for acquiring training data containing a plurality of training texts and first labeling intentions of the plurality of training texts; a construction unit, used for constructing a neural network model comprising a feature extraction layer and a first recognition layer, wherein the first recognition layer is used for outputting a first intention result of a training text and a score between each participle in the training text and a candidate intention according to a semantic vector of the candidate intention and a first semantic vector of each participle in the training text output by the feature extraction layer; and a training unit, used for training the neural network model according to the word segmentation results of the plurality of training texts and the first labeling intentions of the plurality of training texts to obtain an intention recognition model.

According to a fourth aspect of the present disclosure, there is provided an apparatus for intent recognition, comprising: the second acquisition unit is used for acquiring the text to be recognized; and the recognition unit is used for inputting the word segmentation result of the text to be recognized into an intention recognition model and obtaining a first intention result and a second intention result of the text to be recognized according to the output result of the intention recognition model.

According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.

According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method as described above.

According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.

According to the technical solution of the present disclosure, by constructing a neural network model comprising a feature extraction layer and a first recognition layer and by setting semantic vectors for the candidate intentions, the trained intention recognition model can recognize both the sentence-level intention and the word-level intention of a text, which improves the recognition performance of the intention recognition model.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;

FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;

FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;

FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;

FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;

FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;

FIG. 7 is a block diagram of an electronic device for implementing methods of training of an intent recognition model and intent recognition of embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1, the method for training an intention recognition model in this embodiment may specifically include the following steps:

s101, acquiring training data of a first annotation intention, wherein the training data comprises a plurality of training texts and a plurality of training texts;

s102, constructing a neural network model comprising a feature extraction layer and a first recognition layer, wherein the first recognition layer is used for outputting a first intention result of a training text and a score between each participle in the training text and a candidate intention according to a semantic vector of the candidate intention and a first semantic vector of each participle in the training text output by the feature extraction layer;

s103, training the neural network model according to the word segmentation results of the training texts and the first labeling intentions of the training texts to obtain an intention recognition model.

According to the training method of the intention recognition model in this embodiment, a neural network model comprising a feature extraction layer and a first recognition layer is constructed, and semantic vectors of the candidate intentions are set, so that the first recognition layer in the neural network model can output, according to the semantic vectors of the candidate intentions and the output result of the feature extraction layer, the first intention result of a training text and the score between each participle in the training text and each candidate intention. The intention corresponding to each participle in the training text can then be obtained from the scores between the participles and the candidate intentions. The intention recognition model obtained through training can therefore recognize both the sentence-level intention and the word-level intention of a text, which improves the recognition performance of the intention recognition model.

In the training data obtained by executing S101, the first labeling intentions of the plurality of training texts are labeling results of the sentence-level intentions of the plurality of training texts; each training text may correspond to one first labeling intention or to a plurality of first labeling intentions.

For example, if the training text is "open navigation and go high speed" and its word segmentation result is "open", "navigation", "go" and "high speed", the first labeling intention of the training text may include "NAVI" and "HIGHWAY", and the second labeling intention of the training text may include "NAVI" corresponding to "open", "NAVI" corresponding to "navigation", "HIGHWAY" corresponding to "go", and "HIGHWAY" corresponding to "high speed".

In this embodiment, after the training data including a plurality of training texts and a first annotation intention of the plurality of training texts is obtained in S101, S102 is performed to construct a neural network model including a feature extraction layer and a first recognition layer.

In this embodiment, when the neural network model is constructed in S102, a plurality of candidate intentions and a semantic vector corresponding to each candidate intention may be preset, where the semantic vector of a candidate intention is used to represent the semantics of the candidate intention and may be continuously updated along with the training of the neural network model.

Specifically, in the neural network model constructed in S102 in this embodiment, when the feature extraction layer outputs the first semantic vector of each participle in the training text according to the word segmentation result of the input training text, an optional implementation manner that can be adopted is as follows: for each training text, obtaining a word vector of each participle in the training text, for example, performing embedding processing on the participle to obtain its word vector; respectively obtaining an encoding result and an attention calculation result of each participle according to the word vector of each participle, for example, inputting the word vector into a bidirectional long short-term memory network (Bi-LSTM) encoder to obtain the encoding result, and inputting the word vector into a multi-head attention layer to obtain the attention calculation result; and decoding a splicing result between the encoding result and the attention calculation result of each participle, for example, inputting the splicing result into a long short-term memory network (LSTM) decoder, and taking the decoding result as the first semantic vector of each participle.
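For concreteness, the following is a minimal, illustrative sketch (not part of the disclosure) of one way the feature extraction layer described above could be implemented in PyTorch; the class name, dimensions and hyper-parameters are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class FeatureExtractionLayer(nn.Module):
    """Illustrative sketch: embedding -> Bi-LSTM encoder + multi-head attention -> concat -> LSTM decoder."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=128, num_heads=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Bi-LSTM encoder producing 2 * hidden_dim features per participle
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Multi-head self-attention over the word vectors
        self.attention = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        # LSTM decoder over the concatenation of encoder output and attention output
        self.decoder = nn.LSTM(2 * hidden_dim + emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)                    # word vectors
        enc_out, _ = self.encoder(x)                     # encoding result per participle
        attn_out, _ = self.attention(x, x, x)            # attention calculation result per participle
        concat = torch.cat([enc_out, attn_out], dim=-1)  # splicing result
        h, _ = self.decoder(concat)                      # first semantic vector of each participle
        return h
```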

In this embodiment, when S102 is executed to input the word vector into the multi-head attention layer to obtain the attention calculation result, the word vector may be transformed by three different linear layers to obtain Q (query matrix), K (key matrix) and V (value matrix) respectively, and the attention calculation result of each participle is then obtained according to the obtained Q, K and V.

The present embodiment may obtain the attention calculation result of each participle by using the following scaled dot-product attention formula:

C = softmax(Q·K^T / √d_k)·V

where C represents the attention calculation result of the participle; Q represents the query matrix; K represents the key matrix; V represents the value matrix; and d_k represents the number of participles.
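As an illustrative aid (an assumption-labeled sketch, not the disclosed implementation), the formula above can be computed for one training text as follows; here d_k is taken to be the number of participles, as stated in the description.

```python
import math
import torch
import torch.nn.functional as F

def attention_result(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Compute C = softmax(Q K^T / sqrt(d_k)) V for one training text.

    Q, K, V: (num_participles, dim) matrices obtained from three linear layers.
    d_k is taken as the number of participles, following the description above.
    """
    d_k = Q.size(0)                                     # number of participles (per the description)
    scores = Q @ K.transpose(0, 1) / math.sqrt(d_k)     # (num_participles, num_participles)
    weights = F.softmax(scores, dim=-1)
    return weights @ V                                  # attention calculation result C
```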

Specifically, in the neural network model constructed in S102, when the first recognition layer outputs the first intention result of the training text and the score between each participle in the training text and the candidate intention according to the semantic vector of the candidate intention and the first semantic vector of each participle in the training text output by the feature extraction layer, an optional implementation manner that can be adopted is as follows: for each training text, obtaining a second semantic vector of each participle and a score between each participle and a candidate intention according to the first semantic vector of each participle in the training text and the semantic vector of the candidate intention, wherein the score between a participle and a candidate intention can be an attention score between the participle and the candidate intention; and classifying according to the second semantic vector of each participle and taking the classification result as the first intention result of the training text, for example, inputting the second semantic vector of each participle, after linear layer transformation, into a classifier, obtaining a score for each candidate intention from the classifier, and selecting the candidate intentions whose scores exceed a preset threshold as the first intention result of the training text.

When S102 is executed to obtain the second semantic vector of each participle, the embodiment may use a result obtained by subjecting the semantic vector of the candidate intention to linear layer transformation as Q, use results obtained by subjecting the first semantic vector of the participle to two different linear layer transformations as K and V, and further calculate the second semantic vector of each participle according to the obtained Q, K and V.
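The two paragraphs above can be read as a cross-attention between the candidate-intention semantic vectors (as Q) and the first semantic vectors of the participles (as K and V), followed by classification. The sketch below is one possible, non-authoritative reading in PyTorch: how the second semantic vector of each participle is assembled from Q, K and V, and how the second semantic vectors are pooled before the classifier, are assumptions, as are all names, dimensions and the threshold.

```python
import math
import torch
import torch.nn as nn

class FirstRecognitionLayer(nn.Module):
    """Illustrative sketch of the first recognition layer; the exact aggregation is an assumption."""
    def __init__(self, hidden_dim, num_intents, threshold=0.5):
        super().__init__()
        # Trainable semantic vectors of the candidate intentions
        self.intent_vectors = nn.Parameter(torch.randn(num_intents, hidden_dim))
        self.w_q = nn.Linear(hidden_dim, hidden_dim)    # Q from the intention semantic vectors
        self.w_k = nn.Linear(hidden_dim, hidden_dim)    # K from the first semantic vectors of the participles
        self.w_v = nn.Linear(hidden_dim, hidden_dim)    # V from the first semantic vectors of the participles
        self.classifier = nn.Linear(hidden_dim, num_intents)
        self.threshold = threshold

    def forward(self, h):                               # h: (seq_len, hidden_dim) first semantic vectors
        q = self.w_q(self.intent_vectors)               # (num_intents, hidden_dim)
        k, v = self.w_k(h), self.w_v(h)                 # (seq_len, hidden_dim)
        # Score between each participle and each candidate intention (attention scores)
        scores = q @ k.t() / math.sqrt(k.size(-1))      # (num_intents, seq_len)
        alpha = scores.softmax(dim=0)                   # per participle, a distribution over intents (one reading)
        second_h = alpha.t() @ q + v                    # (seq_len, hidden_dim) assumed second semantic vectors
        logits = self.classifier(second_h.mean(dim=0))  # classify the pooled second semantic vectors
        intent_probs = logits.sigmoid()                 # multi-label scores for candidate intentions
        first_intent_result = (intent_probs > self.threshold).nonzero(as_tuple=True)[0]
        return first_intent_result, scores, intent_probs
```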

In this embodiment, after the step S102 of constructing the neural network model including the feature extraction layer and the first recognition layer is performed, the step S103 of training the neural network model according to the word segmentation results of the plurality of training texts and the first labeling intents of the plurality of training texts is performed to obtain an intention recognition model.

The intention recognition model obtained by the training in S103 can output the sentence-level intention and the word-level intention of a text according to the word segmentation result of the input text.

Specifically, in this embodiment, when S103 is executed to train the neural network model according to the word segmentation results of the multiple training texts and the first labeling intentions of the multiple training texts to obtain the intention recognition model, an optional implementation manner that can be adopted is as follows: respectively inputting the word segmentation results of the training texts into the neural network model to obtain the first intention result output by the neural network model for each training text; calculating a loss function value according to the first intention results of the plurality of training texts and the first labeling intentions of the plurality of training texts; and adjusting the parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated loss function value, and finishing the training of the neural network model when the calculated loss function value is determined to converge, so as to obtain the intention recognition model.

That is to say, in the process of training the neural network model, the semantic vector of the candidate intention is also continuously adjusted in the embodiment, so that the semantic vector of the candidate intention can more accurately represent the semantics of the candidate intention, and the accuracy when the first intention result of the training text is obtained according to the semantic vector of the candidate intention and the first semantic vector of each participle in the training text is further improved.
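A minimal training-loop sketch of the procedure described above is given below, assuming a hypothetical `model` that combines the feature extraction layer and the first recognition layer and returns sentence-level intention logits, and a hypothetical `train_loader` that yields word segmentation results (token ids) with multi-hot first labeling intentions. Because the candidate-intention semantic vectors are registered as a trainable parameter of the model, the optimizer adjusts them together with the other model parameters, as described above.

```python
import torch
import torch.nn as nn

def train_intent_model(model, train_loader, epochs=10, lr=1e-3):
    criterion = nn.BCEWithLogitsLoss()                  # multi-label loss over candidate intentions
    # model.parameters() includes the candidate-intention semantic vectors (an nn.Parameter),
    # so both the network weights and the intention vectors are adjusted during training.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):                         # in practice, stop when the loss converges
        for token_ids, first_labels in train_loader:
            logits = model(token_ids)                   # first intention logits for the training text
            loss = criterion(logits, first_labels)      # loss vs. the first labeling intentions
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```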

Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. As shown in fig. 2, the method for training an intention recognition model in this embodiment may specifically include the following steps:

s201, acquiring training data comprising a plurality of training texts, a first labeling intention of the training texts and a second labeling intention of the training texts;

s202, constructing a neural network model comprising a feature extraction layer, a first recognition layer and a second recognition layer, wherein the second recognition layer is used for outputting a second intention result of a training text according to a first semantic vector of each word segmentation in the training text output by the feature extraction layer;

s203, training the neural network model according to the word segmentation results of the training texts, the first labeling intentions of the training texts and the second labeling intentions of the training texts to obtain an intention recognition model.

That is to say, the training data obtained in this embodiment further includes a second labeling intention of the training text, and a neural network model including a second recognition layer is correspondingly constructed, so that an intention recognition model is obtained through training the training text including the first labeling intention and the second labeling intention.

In the training data obtained by executing S201, the second labeling intentions of the plurality of training texts are labeling results of the word-level intentions of the plurality of training texts; each participle in each training text corresponds to one second labeling intention.

In the neural network model constructed in S202, when the second recognition layer outputs the second intention result of the training text according to the first semantic vector of each participle in the training text output by the feature extraction layer, an optional implementation manner that can be adopted is as follows: and for each training text, classifying according to the first semantic vector of each participle in the training text, taking the classification result of each participle as a second intention result of the training text, for example, inputting the first semantic vector of each participle into a classifier after linear layer transformation, obtaining the score of each candidate intention by the classifier, and further selecting the candidate intention with the score exceeding a preset threshold value as the second intention result corresponding to the participle.
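A minimal sketch of the second recognition layer described above, under the assumption that a single linear transformation precedes the classifier and that word-level intentions are selected by a score threshold as stated; the names and dimensions are illustrative only.

```python
import torch.nn as nn

class SecondRecognitionLayer(nn.Module):
    """Illustrative sketch: word-level intention classification from the first semantic vectors."""
    def __init__(self, hidden_dim, num_intents, threshold=0.5):
        super().__init__()
        self.transform = nn.Linear(hidden_dim, hidden_dim)   # linear layer transformation
        self.classifier = nn.Linear(hidden_dim, num_intents)
        self.threshold = threshold

    def forward(self, h):                                    # h: (seq_len, hidden_dim) first semantic vectors
        logits = self.classifier(self.transform(h))          # (seq_len, num_intents)
        probs = logits.sigmoid()
        # Second intention result: for each participle, the candidate intentions whose score exceeds the threshold
        second_intent_result = probs > self.threshold
        return logits, second_intent_result
```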

In this embodiment, when S203 is executed to train the neural network model according to the word segmentation results of the plurality of training texts, the first labeling intentions of the plurality of training texts, and the second labeling intentions of the plurality of training texts, to obtain the intention recognition model, an optional implementation manner that may be adopted is as follows: respectively inputting the word segmentation results of the training texts into the neural network model to obtain a first intention result and a second intention result output by the neural network model for each training text; calculating a first loss function value according to the first intention results of the training texts and the first labeling intentions of the training texts, and calculating a second loss function value according to the second intention results of the training texts and the second labeling intentions of the training texts; and adjusting the parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated first loss function value and second loss function value, and finishing the training of the neural network model when the calculated first loss function value and second loss function value are determined to converge, so as to obtain the intention recognition model.
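The two-loss training described above can be sketched as a single update step, under the assumptions that the hypothetical `model` returns sentence-level and word-level intention logits, that the second labeling intention is one intention index per participle, and that the two loss values are simply summed (how they are combined is not specified here).

```python
import torch.nn as nn

def joint_training_step(model, optimizer, token_ids, first_labels, second_labels):
    sentence_criterion = nn.BCEWithLogitsLoss()         # first loss: sentence-level intentions (multi-hot labels)
    token_criterion = nn.CrossEntropyLoss()             # second loss: one intention per participle
    first_logits, second_logits = model(token_ids)
    first_loss = sentence_criterion(first_logits, first_labels)
    second_loss = token_criterion(second_logits, second_labels)
    loss = first_loss + second_loss                     # assumed combination of the two loss values
    optimizer.zero_grad()
    loss.backward()                                     # updates model parameters and intention semantic vectors
    optimizer.step()
    return first_loss.item(), second_loss.item()
```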

Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure. As shown in fig. 3, the method for identifying an intention in this embodiment may specifically include the following steps:

s301, acquiring a text to be recognized;

s302, inputting the word segmentation result of the text to be recognized into an intention recognition model, and obtaining a first intention result and a second intention result of the text to be recognized according to the output result of the intention recognition model.

That is to say, in the embodiment, the intention recognition model obtained by pre-training is used for performing intention recognition on the text to be recognized, and since the intention recognition model can output the sentence-level intention and the word-level intention of the text to be recognized, the types of recognized intentions are enriched, and the accuracy of intention recognition is improved.

Since the intention recognition model used in this embodiment may be obtained in different training manners, if the intention recognition model is obtained by constructing a neural network model including a second recognition layer and training it with training data including a second labeling intention, then after the word segmentation result of the text to be recognized is input into the intention recognition model, the intention recognition model may output a first intention result through the first recognition layer and output a second intention result through the second recognition layer.

If the intention recognition model is not obtained by training a neural network model including the second recognition layer with training data including the second labeling intention, then after the word segmentation result of the text to be recognized is input into the intention recognition model, the intention recognition model outputs the first intention result and the scores between each participle in the text to be recognized and the candidate intentions through the first recognition layer. When S302 is executed to obtain the second intention result according to the output result of the intention recognition model, an optional implementation manner that can be adopted is: obtaining the second intention result of the text to be recognized according to the scores, output by the intention recognition model, between the participles in the text to be recognized and the candidate intentions; for example, a score matrix may be formed from the scores between the participles and the candidate intentions, and a Viterbi algorithm may then be used to search the matrix to obtain the second intention result corresponding to each participle.
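For illustration only, the following is one possible Viterbi search over the participle-intention score matrix mentioned above; the transition scores between intentions of adjacent participles are not specified in the description, so the `transitions` argument here is an assumption (it could, for example, be learned or set uniformly).

```python
import torch

def viterbi_decode(score_matrix: torch.Tensor, transitions: torch.Tensor) -> list:
    """Viterbi search over a (seq_len, num_intents) participle-intention score matrix.

    transitions[i, j] is an assumed score for moving from intention i to intention j
    between adjacent participles.
    """
    seq_len, num_intents = score_matrix.shape
    dp = score_matrix[0].clone()                        # best score ending in each intention at position 0
    backpointers = []
    for t in range(1, seq_len):
        # dp[i] + transitions[i, j], maximised over the previous intention i
        candidate = dp.unsqueeze(1) + transitions       # (num_intents, num_intents)
        best_prev_score, best_prev = candidate.max(dim=0)
        dp = best_prev_score + score_matrix[t]
        backpointers.append(best_prev)
    # Trace back the best intention sequence (one second intention result per participle)
    path = [int(dp.argmax())]
    for best_prev in reversed(backpointers):
        path.append(int(best_prev[path[-1]]))
    path.reverse()
    return path
```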

Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure. Fig. 4 shows the flow of this embodiment when performing intention recognition. Suppose the text to be recognized is "open navigation and go high speed", its word segmentation result is "open", "navigation", "go" and "high speed", the candidate intentions include "NAVI", "HIGHWAY" and "POI", and the semantic vectors of the candidate intentions are l1, l2 and l3 respectively. The word segmentation result of the text to be recognized is input into the intention recognition model, and the feature extraction layer in the intention recognition model passes the word vector of each participle through an encoder layer, an attention layer, a connection layer and a decoder layer to obtain a first semantic vector h1 corresponding to "open", a first semantic vector h2 corresponding to "navigation", a first semantic vector h3 corresponding to "go" and a first semantic vector h4 corresponding to "high speed". The first semantic vector of each participle is then input into the second recognition layer to obtain the second intention results output by the second recognition layer for the participles, namely "NAVI", "NAVI", "HIGHWAY" and "HIGHWAY". The first semantic vector of each participle and the semantic vector of each candidate intention are input into the first recognition layer to obtain the first intention result output by the first recognition layer for the text to be recognized, namely "NAVI" and "HIGHWAY". In addition, the first recognition layer also outputs the scores between each participle in the text to be recognized and the candidate intentions, for example, the score matrix on the left side of fig. 4.
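Tying the sketches above together, a hypothetical inference call for the example in Fig. 4 might look as follows; the component and intention names are the illustrative ones used earlier, and the dimension agreement between the layers is an assumption.

```python
INTENTS = ["NAVI", "HIGHWAY", "POI"]                    # candidate intentions from the example above

def recognize(feature_layer, first_layer, second_layer, token_ids):
    """token_ids: 1-D tensor of (hypothetical) participle ids for the text to be recognized."""
    h = feature_layer(token_ids.unsqueeze(0)).squeeze(0)    # first semantic vectors h1..h4
    first_result, score_matrix, _ = first_layer(h)           # sentence-level intentions and participle-intention scores
    _, second_result = second_layer(h)                       # word-level intentions per participle
    sentence_intents = [INTENTS[i] for i in first_result.tolist()]
    return sentence_intents, score_matrix, second_result
```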

Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure. As shown in fig. 5, the training apparatus 500 for the intention recognition model of the present embodiment includes:

a first acquisition unit 501, configured to acquire training data containing a plurality of training texts and first labeling intentions of the plurality of training texts;

the construction unit 502 is configured to construct a neural network model including a feature extraction layer and a first recognition layer, where the first recognition layer is configured to output a first intention result of a training text and a score between each participle in the training text and a candidate intention according to a semantic vector of the candidate intention and a first semantic vector of each participle in the training text output by the feature extraction layer;

the training unit 503 is configured to train the neural network model according to the word segmentation results of the multiple training texts and the first labeling intents of the multiple training texts, so as to obtain an intention recognition model.

In the training data acquired by the first acquisition unit 501, the first labeling intentions of the plurality of training texts are labeling results of the sentence-level intentions of the plurality of training texts; each training text may correspond to one first labeling intention or to a plurality of first labeling intentions.

When acquiring the training data, the first acquisition unit 501 may further acquire second labeling intentions of the plurality of training texts, that is, labeling results of the word-level intentions of the plurality of training texts; each participle in each training text corresponds to one second labeling intention.

After the first acquisition unit 501 acquires the training data, the construction unit 502 constructs a neural network model including a feature extraction layer and a first recognition layer.

When constructing the neural network model, the construction unit 502 may also preset a plurality of candidate intentions and a semantic vector corresponding to each candidate intention, where the semantic vectors of the candidate intentions are used to represent the semantics of the candidate intentions and are continuously updated along with the training of the neural network model.

Specifically, in the neural network model constructed by the construction unit 502, when the feature extraction layer outputs the first semantic vector of each participle in the training text according to the word segmentation result of the input training text, an optional implementation manner that can be adopted is as follows: for each training text, obtaining a word vector of each participle in the training text; respectively obtaining an encoding result and an attention calculation result of each participle according to the word vector of each participle; and decoding a splicing result between the encoding result and the attention calculation result of each participle, and taking the decoding result as the first semantic vector of each participle.

When the word vector is input into the multi-head attention layer to obtain the attention calculation result, the construction unit 502 may use three different linear layers to transform the word vector, so as to obtain Q (query matrix), K (key matrix) and V (value matrix) respectively, and further obtain the attention calculation result of each participle according to the obtained Q, K and V.

Specifically, in the neural network model constructed by the construction unit 502, when the first recognition layer outputs the first intention result of the training text and the score between each participle in the training text and the candidate intention according to the semantic vector of the candidate intention and the first semantic vector of each participle in the training text output by the feature extraction layer, an optional implementation manner that can be adopted is as follows: for each training text, obtaining a second semantic vector of each participle and a score between each participle and a candidate intention according to the first semantic vector of each participle in the training text and the semantic vector of the candidate intention, wherein the score between a participle and a candidate intention can be an attention score between the participle and the candidate intention; and classifying according to the second semantic vector of each participle, and taking the classification result as the first intention result of the training text.

When the construction unit 502 obtains the second semantic vector of each participle, a result obtained by subjecting the semantic vector of the candidate intention to linear layer transformation may be used as Q, and results obtained by subjecting the first semantic vector of the participle to two different linear layer transformations may be used as K and V, respectively, so as to obtain the second semantic vector of each participle by calculating according to the obtained Q, K and V.

The construction unit 502 may further construct a neural network model including a second recognition layer, where when the second recognition layer outputs a second intention result of the training text according to the first semantic vector of each participle in the training text output by the feature extraction layer, an optional implementation manner that may be adopted is: for each training text, classifying according to the first semantic vector of each participle in the training text, and taking the classification result of each participle as the second intention result of the training text.

In this embodiment, after the construction unit 502 constructs the neural network model including the feature extraction layer and the first recognition layer, the training unit 503 trains the neural network model according to the word segmentation results of the plurality of training texts and the first labeling intentions of the plurality of training texts to obtain the intention recognition model.

Specifically, when the training unit 503 trains the neural network model according to the word segmentation results of the plurality of training texts and the first labeling intentions of the plurality of training texts to obtain the intention recognition model, an optional implementation manner that can be adopted is as follows: respectively inputting the word segmentation results of the training texts into the neural network model to obtain the first intention result output by the neural network model for each training text; calculating a loss function value according to the first intention results of the plurality of training texts and the first labeling intentions of the plurality of training texts; and adjusting the parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated loss function value, and finishing the training of the neural network model when the calculated loss function value is determined to converge, so as to obtain the intention recognition model.

That is to say, in the process of training the neural network model, the semantic vector of the candidate intention is also continuously adjusted in the embodiment, so that the semantic vector of the candidate intention can more accurately represent the semantics of the candidate intention, and the accuracy when the first intention result of the training text is obtained according to the semantic vector of the candidate intention and the first semantic vector of each participle in the training text is further improved.

When the training unit 503 trains the neural network model according to the word segmentation results of the plurality of training texts, the first labeling intentions of the plurality of training texts, and the second labeling intentions of the plurality of training texts, to obtain the intention recognition model, an optional implementation manner that can be adopted is as follows: respectively inputting the word segmentation results of the training texts into the neural network model to obtain a first intention result and a second intention result output by the neural network model for each training text; calculating a first loss function value according to the first intention results of the training texts and the first labeling intentions of the training texts, and calculating a second loss function value according to the second intention results of the training texts and the second labeling intentions of the training texts; and adjusting the parameters of the neural network model and the semantic vectors of the candidate intentions according to the calculated first loss function value and second loss function value, and finishing the training of the neural network model when the calculated first loss function value and second loss function value are determined to converge, so as to obtain the intention recognition model.

Fig. 6 is a schematic diagram according to a sixth embodiment of the present disclosure. As shown in fig. 6, the apparatus 600 for intention recognition of the present embodiment includes:

a second acquisition unit 601, configured to acquire a text to be recognized;

the recognition unit 602 is configured to input a word segmentation result of the text to be recognized into an intention recognition model, and obtain a first intention result and a second intention result of the text to be recognized according to an output result of the intention recognition model.

Since the intention recognition model used in this embodiment may be obtained in different training manners, if the intention recognition model is obtained by constructing a neural network model including a second recognition layer and training it with training data including a second labeling intention, then after the word segmentation result of the text to be recognized is input into the intention recognition model, the recognition unit 602 may obtain a first intention result output through the first recognition layer and a second intention result output through the second recognition layer.

If the intention recognition model is not obtained by training a neural network model including the second recognition layer with training data including the second labeling intention, then after the word segmentation result of the text to be recognized is input into the intention recognition model, the intention recognition model outputs the first intention result and the scores between each participle in the text to be recognized and the candidate intentions through the first recognition layer. When the recognition unit 602 obtains the second intention result according to the output result of the intention recognition model, an optional implementation manner that can be adopted is: obtaining the second intention result of the text to be recognized according to the scores, output by the intention recognition model, between the participles in the text to be recognized and the candidate intentions.

In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations, and do not violate the good customs of the public order.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

Fig. 7 is a block diagram of an electronic device for the method of training an intention recognition model and the method of intention recognition according to embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM702, and the RAM703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.

Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.

Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 701 performs the respective methods and processes described above, such as training of an intention recognition model and an intention recognition method. For example, in some embodiments, the method of training of the intent recognition model and intent recognition may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708.

In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method of training an intention recognition model and the method of intention recognition described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the method of training an intention recognition model and the method of intention recognition.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
