Training method and apparatus for a semantic representation model, device, and storage medium


1. A method of training a semantic representation model, comprising:

obtaining an anchor sample based on a sentence, and obtaining a positive sample and a negative sample based on syntactic information of the sentence;

respectively processing the anchor sample, the positive sample and the negative sample by adopting a semantic representation model to obtain an anchor sample semantic representation, a positive sample semantic representation and a negative sample semantic representation;

constructing a contrast loss function based on the anchor sample semantic representation, the positive sample semantic representation, and the negative sample semantic representation;

training the semantic representation model based on the contrast loss function.

2. The method of claim 1, further comprising:

and performing dependency syntax analysis on the sentence to obtain syntax information of the sentence.

3. The method of claim 1, wherein the obtaining positive and negative examples based on the syntactic information of the sentence comprises:

constructing a syntax tree based on the syntax information of the sentence;

acquiring a first text corresponding to a subtree contained in the syntax tree, and taking the first text as a positive sample;

and acquiring second text based on the words in the subtree, wherein the second text contains the words and is different from the text corresponding to the subtree, and taking the second text as a negative sample.

4. The method of claim 3, wherein the obtaining second text based on terms in the subtree comprises:

based on the words in the subtree, selecting from the sentence, as the second text, a text in which the words are consecutive and the number of words is the same as the number of words included in the positive sample.

5. The method of any of claims 1-4, wherein the obtaining anchor samples based on sentences comprises:

taking the sentence as an anchor sample; or,

and taking words in a subtree contained in the syntax tree corresponding to the sentence as anchor samples.

6. A training apparatus for a semantic representation model, comprising:

an obtaining module, configured to obtain an anchor sample based on a sentence, and obtain a positive sample and a negative sample based on syntax information of the sentence;

the encoding module is used for respectively processing the anchor sample, the positive sample and the negative sample by adopting a semantic representation model so as to obtain an anchor sample semantic representation, a positive sample semantic representation and a negative sample semantic representation;

a construction module for constructing a contrast loss function based on the anchor sample semantic representation, the positive sample semantic representation, and the negative sample semantic representation;

and the training module is used for training the semantic representation model based on the contrast loss function.

7. The apparatus of claim 6, further comprising:

and the analysis module is used for carrying out dependency syntax analysis on the sentence so as to obtain the syntax information of the sentence.

8. The apparatus of claim 6, wherein the acquisition module is specifically configured to:

constructing a syntax tree based on the syntax information of the sentence;

acquiring a first text corresponding to a subtree contained in the syntax tree, and taking the first text as a positive sample;

and acquiring second text based on the words in the subtree, wherein the second text contains the words and is different from the text corresponding to the subtree, and taking the second text as a negative sample.

9. The apparatus of claim 8, wherein the obtaining module is further specifically configured to:

based on the words in the subtree, selecting from the sentence, as the second text, a text in which the words are consecutive and the number of words is the same as the number of words included in the positive sample.

10. The apparatus according to any one of claims 6 to 9, wherein the obtaining module is specifically configured to:

taking the sentence as an anchor sample; or,

and taking words in a subtree contained in the syntax tree corresponding to the sentence as anchor samples.

11. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.

12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.

13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.

Background

In natural language processing, a semantic representation model may be used to convert a sentence into a corresponding semantic representation for subsequent processing. Differences in the syntactic structure of sentences can lead to large differences in semantics.

In the related art, syntactic information can be introduced by modifying the structure of the semantic representation model, and the semantic representation model is then trained.

Disclosure of Invention

The disclosure provides a training method, a device, equipment and a storage medium of a semantic representation model.

According to an aspect of the present disclosure, there is provided a training method of a semantic representation model, including: obtaining an anchor sample based on a sentence, and obtaining a positive sample and a negative sample based on syntactic information of the sentence; respectively processing the anchor sample, the positive sample and the negative sample by adopting a semantic representation model to obtain an anchor sample semantic representation, a positive sample semantic representation and a negative sample semantic representation; constructing a contrast loss function based on the anchor sample semantic representation, the positive sample semantic representation, and the negative sample semantic representation; training the semantic representation model based on the contrast loss function.

According to another aspect of the present disclosure, there is provided a training apparatus for a semantic representation model, including: an obtaining module, configured to obtain an anchor sample based on a sentence, and obtain a positive sample and a negative sample based on syntax information of the sentence; the encoding module is used for respectively processing the anchor sample, the positive sample and the negative sample by adopting a semantic representation model so as to obtain an anchor sample semantic representation, a positive sample semantic representation and a negative sample semantic representation; a construction module for constructing a contrast loss function based on the anchor sample semantic representation, the positive sample semantic representation, and the negative sample semantic representation; and the training module is used for training the semantic representation model based on the contrast loss function.

According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.

According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of the above aspects.

According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the above aspects.

According to the technical solution of the present disclosure, the semantic representation of a sentence can contain syntactic information without modifying the model structure.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;

FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;

FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;

FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;

FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;

FIG. 6 is a schematic diagram of an electronic device for implementing any one of the training methods of the semantic representation model of the embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

Sentences with different syntactic information can correspond to quite different semantics. For example, the two questions "Q1: Does the grandmother have the right to inherit from the grandson?" and "Q2: Does the grandson have the right to inherit from the grandmother?" are literally very similar, yet the semantics they express are completely different.

In the related art, semantic representations can be made to contain syntactic information by modifying the structure of the semantic representation model, but modifying the model structure is inconvenient for downstream tasks and has accuracy problems.

Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure, which provides a training method of a semantic representation model, including:

101. An anchor sample is obtained based on a sentence, and a positive sample and a negative sample are obtained based on syntactic information of the sentence.

102. The anchor sample, the positive sample, and the negative sample are respectively processed with a semantic representation model to obtain an anchor sample semantic representation, a positive sample semantic representation, and a negative sample semantic representation.

103. A contrast loss function is constructed based on the anchor sample semantic representation, the positive sample semantic representation, and the negative sample semantic representation.

104. The semantic representation model is trained based on the contrast loss function.

Sentences may be collected in advance as samples, and the semantic representation model is then trained based on these sentences.

After a sentence is obtained, dependency syntax analysis may be performed on the sentence to obtain syntax information of the sentence.

Dependency parsing is one of the core technologies of natural language processing; it determines the syntactic information of a sentence by analyzing the dependencies between the words in the sentence.

Taking the sentence "Baidu is a high-tech company" as an example, the syntactic information obtained after performing dependency syntax analysis on the sentence is shown in fig. 2. The syntactic information may include the relationships between the words in the sentence, and different relationships may be labeled with different symbols. The relationships between words in fig. 2 are labeled as follows:

HED: head relation, which marks the core of the entire sentence;

SBV: subject-verb relation, the relation between a subject and its predicate;

VOB: verb-object relation, the relation between an object and its predicate;

ATT: attribute relation, the relation between a modifier and the head word it modifies.

By performing dependency syntax analysis on the sentence, the syntax information of the sentence can be conveniently and quickly acquired.
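
As a concrete illustration, the following is a minimal sketch of obtaining dependency information for a sentence programmatically. The spaCy library and its en_core_web_sm pipeline are assumptions made purely for the example; the disclosure does not prescribe a particular parser, and the HED/SBV/VOB/ATT labels above come from a Chinese-style dependency scheme, whereas spaCy emits Universal Dependencies labels.

```python
# Minimal sketch: dependency parsing to obtain syntactic information.
# spaCy and the "en_core_web_sm" pipeline are illustrative assumptions only.
import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_parse(sentence: str):
    """Return (word, head index, dependency relation) triples for each token."""
    doc = nlp(sentence)
    return [(token.text, token.head.i, token.dep_) for token in doc]

for word, head, rel in dependency_parse("Baidu is a high-tech company"):
    print(f"{word:>10}  head={head}  rel={rel}")
```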

After obtaining the syntactic information of the sentence, a syntactic tree may be constructed based on the syntactic information, and based on the example shown in fig. 2, the constructed syntactic tree may be as shown in fig. 3.

After the syntax tree is obtained, the text corresponding to a subtree contained in the syntax tree may be used as a positive sample. Another text is then obtained based on a word in the subtree; this text contains the word but differs from the text corresponding to the subtree, and it is used as a negative sample. For the sake of distinction, the text used as the positive sample may be referred to as the first text, and the text used as the negative sample may be referred to as the second text.

For example, as shown in fig. 3, the three words (tokens) "a", "high-tech", and "company" form a subtree of the syntax tree, so the text "a high-tech company" corresponding to that subtree may be used as a positive sample.

The accuracy of the positive and negative samples may be improved by obtaining the positive and negative samples based on the subtrees of the syntax tree.
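
A minimal sketch of how positive samples could be extracted from the subtrees of a dependency parse is shown below. The token list, the head indices, and the helper name subtree_spans are hypothetical and stand in for the output of whatever parser is actually used.

```python
from typing import List, Tuple

def subtree_spans(heads: List[int]) -> List[Tuple[int, int]]:
    """Return (start, end) token spans whose subtree covers a contiguous,
    multi-word stretch of the sentence (candidates for positive samples).
    heads[i] is the head index of token i; the root has heads[i] == i."""
    n = len(heads)
    members = [{i} for i in range(n)]            # descendants of each token, incl. itself
    for i in range(n):
        j = i
        while heads[j] != j:                     # climb to the root, registering i on the way
            j = heads[j]
            members[j].add(i)
    spans = []
    for m in members:
        lo, hi = min(m), max(m)
        if len(m) == hi - lo + 1 and 1 < len(m) < n:   # contiguous, multi-word, not the whole sentence
            spans.append((lo, hi + 1))
    return sorted(set(spans))

# Hypothetical parse of "Baidu is a high-tech company":
# 0=Baidu 1=is 2=a 3=high-tech 4=company; "is" is the root.
tokens = ["Baidu", "is", "a", "high-tech", "company"]
heads = [1, 1, 4, 4, 1]
for lo, hi in subtree_spans(heads):
    print(" ".join(tokens[lo:hi]))               # -> "a high-tech company" (positive sample)
```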

After the subtree corresponding to the positive sample is obtained, the negative sample can be obtained based on the words in the subtree.

To improve the training effect of the semantic representation model, the positive and negative samples generally contain the same number of words. That is, based on a word in the subtree, a text whose words are consecutive and whose number of words is the same as the number of words included in the positive sample may be selected from the sentence as the second text.

Taking the subtree composed of the three words "a", "high-tech", and "company" as an example, a negative sample can be obtained based on the word "high-tech" in the subtree. For instance, the text "is a high-tech" includes three consecutive words, one of which is "high-tech", so the text "is a high-tech" is used as a negative sample.

Selecting, as the negative sample, a text with the same number of words as the positive sample can improve the effect of the semantic representation model.

Multiple negative samples may be selected for one positive sample. For example, for the positive sample "a high-tech company", the text "Baidu is a" may also be selected as a negative sample based on the word "a".
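
The negative-sample selection described above could be sketched as follows; the positive span and the chosen subtree-word indices are the hypothetical values from the running example.

```python
from typing import List, Tuple

def negative_spans(n_tokens: int, pos_span: Tuple[int, int],
                   word_idx: int) -> List[Tuple[int, int]]:
    """Contiguous spans with the same length as the positive span that contain
    the given subtree word but differ from the positive span itself."""
    lo, hi = pos_span
    length = hi - lo
    spans = []
    for start in range(n_tokens - length + 1):
        span = (start, start + length)
        if span != pos_span and start <= word_idx < start + length:
            spans.append(span)
    return spans

tokens = ["Baidu", "is", "a", "high-tech", "company"]
pos = (2, 5)                                           # positive: "a high-tech company"
for lo, hi in negative_spans(len(tokens), pos, 3):     # based on the word "high-tech"
    print(" ".join(tokens[lo:hi]))                     # -> "is a high-tech"
for lo, hi in negative_spans(len(tokens), pos, 2):     # based on the word "a"
    print(" ".join(tokens[lo:hi]))                     # -> "Baidu is a", "is a high-tech"
```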

The anchor sample may be the whole sentence, or it may be a word of the subtree corresponding to the positive and negative samples. For example, the whole sentence "Baidu is a high-tech company" may be used as the anchor sample, or the word "high-tech" may be used as the anchor sample.

By allowing the anchor sample to be either the whole sentence or a word corresponding to the positive and negative samples, the anchor-sample data can be expanded, further improving the effect of the semantic representation model.

After the anchor sample, the positive sample, and the negative sample are obtained, they may be respectively input into the semantic representation model to obtain the corresponding semantic representations.

The semantic representation model is a model that converts sentences into corresponding vector representations, and various existing pre-trained model structures can be adopted, such as Bidirectional Encoder Representations from Transformers (BERT), a Robustly Optimized BERT Pretraining Approach (RoBERTa), Enhanced Representation through kNowledge IntEgration (ERNIE), and the like.

As shown in fig. 4, taking the whole sentence as the anchor sample for example: the sentence is input into the semantic representation model and the output is referred to as the anchor sample semantic representation; the positive sample is input into the semantic representation model and the output is referred to as the positive sample semantic representation; and the negative sample is input into the semantic representation model and the output is referred to as the negative sample semantic representation.
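
As a sketch of this encoding step, the snippet below produces one vector per text with a pretrained BERT-style encoder. The bert-base-uncased checkpoint, the HuggingFace transformers API, and mean pooling over token states are assumptions for illustration only, since the disclosure merely requires some semantic representation model.

```python
# Minimal sketch: encoding anchor, positive, and negative samples into vectors.
# "bert-base-uncased" and mean pooling are illustrative choices, not the disclosed method.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    """Encode a list of texts into one vector each by mean-pooling the last hidden states."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state               # (B, T, H); gradients flow during training
    mask = batch["attention_mask"].unsqueeze(-1).float()      # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # (B, H)

anchor_repr = encode(["Baidu is a high-tech company"])
positive_repr = encode(["a high-tech company"])
negative_repr = encode(["is a high-tech", "Baidu is a"])
```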

Thereafter, a contrast loss function may be constructed based on the three semantic representations.

The contrast loss function is the loss function used in contrastive learning. Contrastive learning is a form of self-supervised learning whose goal is to pull the positive sample closer to the anchor sample and push the negative samples farther away.

One calculation formula for the contrast loss function is:

L = -\log \frac{\exp\big(\mathrm{sim}(f(q;\theta),\, f(k^{+};\theta))/\tau\big)}{\exp\big(\mathrm{sim}(f(q;\theta),\, f(k^{+};\theta))/\tau\big) + \sum_{i=1}^{K}\exp\big(\mathrm{sim}(f(q;\theta),\, f(k_{i};\theta))/\tau\big)}

where L is the contrast loss function, q is the anchor sample, k+ is the positive sample, k_i is the i-th negative sample, K is the total number of negative samples, θ denotes the parameters of the semantic representation model, f(·; θ) is the semantic representation obtained from the semantic representation model, τ is a temperature hyper-parameter, and sim(·, ·) denotes a similarity between vectors.
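
The formula above could be implemented, for example, as follows in PyTorch; using cosine similarity for sim(·, ·) and the value of τ are assumptions, since the disclosure leaves both open.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, k_pos: torch.Tensor,
                     k_negs: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style contrastive loss for a single anchor.
    q: (H,) anchor, k_pos: (H,) positive, k_negs: (K, H) negatives."""
    sim_pos = F.cosine_similarity(q, k_pos, dim=0) / tau                 # scalar
    sim_negs = F.cosine_similarity(q.unsqueeze(0), k_negs, dim=1) / tau  # (K,)
    logits = torch.cat([sim_pos.unsqueeze(0), sim_negs])                 # (1 + K,)
    # -log( exp(sim_pos) / (exp(sim_pos) + sum_i exp(sim_neg_i)) )
    return -F.log_softmax(logits, dim=0)[0]

loss = contrastive_loss(torch.randn(768), torch.randn(768), torch.randn(4, 768))
```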

After the contrast loss function is obtained, the semantic representation model may be trained with it. That is, the parameters of the semantic representation model are adjusted based on the contrast loss function until a preset end condition is reached, for example convergence of the contrast loss function or a preset number of iterations. The model parameters at that point are taken as the final model parameters, and the corresponding semantic representation model is taken as the final semantic representation model. The final semantic representation model can then be applied to process a sentence and obtain a semantic representation of the sentence that contains syntactic information.
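
A minimal training-loop sketch, under the same assumptions as the encoding and loss sketches above, might look as follows; training_triples is a hypothetical iterator over (anchor, positive, negatives) tuples, and the stopping rule mirrors the end conditions described in the text.

```python
# Minimal sketch: adjusting the encoder parameters with the contrastive loss.
# `encode`, `contrastive_loss`, `encoder`, and `training_triples` are the hypothetical
# names from the sketches above; the learning rate and thresholds are illustrative.
import torch

optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)
max_steps, prev_loss = 10_000, float("inf")

for step, (anchor, positive, negatives) in enumerate(training_triples):
    q = encode([anchor])[0]
    k_pos = encode([positive])[0]
    k_negs = encode(negatives)
    loss = contrastive_loss(q, k_pos, k_negs)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # preset end condition: loss convergence or a maximum number of iterations
    if step >= max_steps or abs(prev_loss - loss.item()) < 1e-6:
        break
    prev_loss = loss.item()
```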

In this embodiment, the positive sample and the negative sample are obtained based on the syntactic information of the sentence, and the semantic representation model is trained based on the anchor sample, the positive sample and the negative sample, so that the semantic representation of the sentence includes the syntactic information without modifying the model structure.

Further, the method of this embodiment can be applied to the pre-training process. That is, during pre-training of the semantic representation model, the contrast loss function is used for training without changing the structure of the pre-trained model, so that when the pre-trained model is applied to a downstream task, the change is transparent to the downstream task. In addition, when the pre-trained model is fine-tuned on a downstream task, no syntactic information needs to be introduced, so the performance of the downstream task is not affected. This embodiment implicitly embeds syntactic information in the semantic representation; compared with explicitly using syntactic information, for example by adding a pre-training task that predicts the parent node of each word, this avoids accumulating syntactic errors and improves the accuracy of the semantic representation model.

Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure, which provides a training apparatus for semantic representation model. As shown in fig. 5, the apparatus 500 includes: an acquisition module 501, an encoding module 502, a construction module 503, and a training module 504.

The obtaining module 501 is configured to obtain an anchor sample based on a sentence, and obtain a positive sample and a negative sample based on syntax information of the sentence; the encoding module 502 is configured to respectively process the anchor sample, the positive sample, and the negative sample by using a semantic representation model to obtain an anchor sample semantic representation, a positive sample semantic representation, and a negative sample semantic representation; the construction module 503 is configured to construct a contrast loss function based on the anchor sample semantic representation, the positive sample semantic representation, and the negative sample semantic representation; the training module 504 is configured to train the semantic representation model based on the contrast loss function.

In some embodiments, the apparatus 500 further comprises: and the analysis module is used for carrying out dependency syntax analysis on the sentence so as to obtain the syntax information of the sentence.

In some embodiments, the obtaining module 501 is specifically configured to: constructing a syntax tree based on the syntax information of the sentence; acquiring a first text corresponding to a subtree contained in the syntax tree, and taking the first text as a positive sample; and acquiring second text based on the words in the subtree, wherein the second text contains the words and is different from the text corresponding to the subtree, and taking the second text as a negative sample.

In some embodiments, the obtaining module 501 is further specifically configured to: selecting, as the second text, text in which words are consecutive and the number of words is the same as the number of words included in the positive sample in the sentence based on the words in the subtree.

In some embodiments, the obtaining module 501 is specifically configured to: taking the sentence as an anchor sample; or, taking the words in the subtree contained in the syntax tree corresponding to the sentence as anchor samples.

In this embodiment, the positive sample and the negative sample are obtained based on the syntactic information of the sentence, and the semantic representation model is trained based on the anchor sample, the positive sample and the negative sample, so that the semantic representation of the sentence includes the syntactic information without modifying the model structure.

It is to be understood that in the disclosed embodiments, the same or similar elements in different embodiments may be referenced.

It is to be understood that "first", "second", and the like in the embodiments of the present disclosure are used for distinction only, and do not indicate the degree of importance, the order of timing, and the like.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the training method of a semantic representation model. For example, in some embodiments, the training method of the semantic representation model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by the computing unit 601, one or more steps of the training method of the semantic representation model described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the training method of the semantic representation model.

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and addresses the drawbacks of high management difficulty and weak service scalability of traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
