Method, apparatus, device and storage medium for training a summary generation model


1. A method for training a summary generation model comprises the following steps:

acquiring a document representation corresponding to a document sample;

constructing a summary representation corresponding to the document representation based on the document representation, wherein the summary representation comprises a positive summary representation and a negative summary representation;

and constructing a total contrast loss function based on the document representation, the positive summary representation and the negative summary representation, and training the summary generation model based on the total contrast loss function.

2. The method of claim 1, wherein the summary generation model comprises an encoder and a decoder, and wherein acquiring the document representation corresponding to the document sample comprises:

processing the document sample with the encoder to obtain an encoded representation;

processing the encoded representation with the decoder to obtain a decoded representation;

and using the encoded representation and/or the decoded representation as the document representation.

3. The method of claim 2, wherein the document representation includes the encoded representation and the decoded representation, and wherein constructing the total contrast loss function based on the document representation, the positive summary representation, and the negative summary representation comprises:

constructing a first contrast loss function based on the encoded representation, the positive summary representation, and the negative summary representation;

constructing a second contrast loss function based on the decoded representation, the positive summary representation, and the negative summary representation;

and constructing the total contrast loss function based on the first contrast loss function and the second contrast loss function.

4. The method of claim 2, wherein the document representation includes the decoded representation, and wherein constructing the summary representation corresponding to the document representation based on the document representation comprises:

acquiring a generated text corresponding to the decoded representation;

constructing a positive summary sample and a negative summary sample based on the generated text;

and acquiring the positive summary representation corresponding to the positive summary sample and the negative summary representation corresponding to the negative summary sample.

5. The method of claim 4, wherein constructing the positive summary sample based on the generated text comprises:

performing loop translation on the generated text to obtain a loop translation result, and taking the loop translation result as the positive summary sample.

6. The method of claim 4, wherein constructing the negative summary sample based on the generated text comprises at least one of:

performing entity replacement on the generated text to obtain an entity replacement result, and taking the entity replacement result as the negative summary sample;

performing pronoun replacement on the generated text to obtain a pronoun replacement result, and taking the pronoun replacement result as the negative summary sample;

performing emotion replacement on the generated text to obtain an emotion replacement result, and taking the emotion replacement result as the negative summary sample;

acquiring a similar text of the generated text, and taking the similar text as the negative summary sample;

and performing virtual adversarial training on the generated text to obtain a virtual adversarial result, and taking the virtual adversarial result as the negative summary sample.

7. A training apparatus for a summary generation model, comprising:

an acquisition module configured to acquire a document representation corresponding to a document sample;

a construction module configured to construct, based on the document representation, a summary representation corresponding to the document representation, the summary representation comprising a positive summary representation and a negative summary representation;

and a training module configured to construct a total contrast loss function based on the document representation, the positive summary representation and the negative summary representation, and to train the summary generation model based on the total contrast loss function.

8. The apparatus of claim 7, wherein the summary generation model comprises an encoder and a decoder, the acquisition module being specifically configured to:

process the document sample with the encoder to obtain an encoded representation;

process the encoded representation with the decoder to obtain a decoded representation;

and use the encoded representation and/or the decoded representation as the document representation.

9. The apparatus of claim 8, wherein the document representation comprises the encoded representation and the decoded representation, the training module being specifically configured to:

construct a first contrast loss function based on the encoded representation, the positive summary representation, and the negative summary representation;

construct a second contrast loss function based on the decoded representation, the positive summary representation, and the negative summary representation;

and construct the total contrast loss function based on the first contrast loss function and the second contrast loss function.

10. The apparatus of claim 8, wherein the document representation comprises the decoded representation, the construction module being specifically configured to:

acquire a generated text corresponding to the decoded representation;

construct a positive summary sample and a negative summary sample based on the generated text;

and acquire a positive summary representation corresponding to the positive summary sample and a negative summary representation corresponding to the negative summary sample.

11. The apparatus of claim 10, wherein the construction module is further specifically configured to:

perform loop translation on the generated text to obtain a loop translation result, and take the loop translation result as the positive summary sample.

12. The apparatus of claim 10, wherein the construction module is further specifically configured to perform at least one of:

performing entity replacement on the generated text to obtain an entity replacement result, and taking the entity replacement result as the negative summary sample;

performing pronoun replacement on the generated text to obtain a pronoun replacement result, and taking the pronoun replacement result as the negative summary sample;

performing emotion replacement on the generated text to obtain an emotion replacement result, and taking the emotion replacement result as the negative summary sample;

acquiring a similar text of the generated text, and taking the similar text as the negative summary sample;

and performing virtual adversarial training on the generated text to obtain a virtual adversarial result, and taking the virtual adversarial result as the negative summary sample.

13. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.

14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.

15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.

Background

Automatic summarization aims to automatically generate a concise summary for one or more documents, and requires the generated summary to be semantically coherent, linguistically fluent, and faithful to the content of the original document. Automatic summarization is divided into extractive summarization and abstractive (generative) summarization, where abstractive summarization aims to understand the input document, organize language, and generate a target summary by means of big data technology, imitating the way a human summarizes an article. The summary generation process may include processing an input document with a summary generation model to obtain the summary corresponding to the input document.

In the related art, the summary generation model is trained using a maximum likelihood probability function as the loss function.

Disclosure of Invention

The present disclosure provides a method, an apparatus, a device, and a storage medium for training a summary generation model.

According to an aspect of the present disclosure, there is provided a method for training a summary generation model, including: acquiring a document representation corresponding to a document sample; constructing a summary representation corresponding to the document representation based on the document representation, wherein the summary representation comprises a positive summary representation and a negative summary representation; and constructing a total contrast loss function based on the document representation, the positive summary representation and the negative summary representation, and training the summary generation model based on the total contrast loss function.

According to another aspect of the present disclosure, there is provided a training apparatus for a summary generation model, including: an acquisition module configured to acquire a document representation corresponding to a document sample; a construction module configured to construct a summary representation corresponding to the document representation based on the document representation, the summary representation comprising a positive summary representation and a negative summary representation; and a training module configured to construct a total contrast loss function based on the document representation, the positive summary representation and the negative summary representation, and to train the summary generation model based on the total contrast loss function.

According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.

According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to any one of the above aspects.

According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the above aspects.

According to the technical solutions of the present disclosure, the accuracy of the summary generation model can be improved.

It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;

FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;

FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;

FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;

FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;

FIG. 6 is a schematic diagram according to a fourth embodiment of the present disclosure;

FIG. 7 is a schematic diagram of an electronic device for implementing any one of the methods of training a summary generation model according to embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure, which provides a method for training a summary generation model, including:

101. Acquiring a document representation corresponding to a document sample.

102. Constructing a summary representation corresponding to the document representation based on the document representation, the summary representation including a positive summary representation and a negative summary representation.

103. Constructing a total contrast loss function based on the document representation, the positive summary representation and the negative summary representation, and training a summary generation model based on the total contrast loss function.

The execution body of this embodiment may be a training apparatus for the summary generation model, and the training apparatus may be located in a terminal, a server, or the like.

The document samples may be obtained from existing data sets. For example, a large amount of existing data may be obtained through historical collection or construction; this existing data may include existing documents, and the existing documents are used as document samples.

A representation is information that describes data; for example, for a pixel, the representation may be RGB data or HSV data. A representation generally describes the data in vector form.

A summary generation model (which may be referred to simply as a model) refers to a model that processes a document to obtain a summary corresponding to the document. For example, as shown in fig. 2, in the application stage, a document to be summarized is input into the summary generation model, which processes the document and outputs the summary corresponding to the document. The summary of a document refers to the key information in the document, and the summary generated by the summary generation model may contain new words, phrases, and the like that do not appear in the original document.

The positive summary representation refers to a representation of a positive summary sample corresponding to a document sample, and the negative summary representation refers to a representation of a negative summary sample corresponding to a document sample.

A positive summary sample refers to a summary sample whose semantics are consistent with the document sample, and a negative summary sample refers to a summary sample whose semantics are inconsistent with the document sample. For example, if a document sample states that a movie is good, a summary sample with the same or a similar meaning (such as "the movie is excellent" or "the movie is okay") is a positive summary sample, whereas a summary sample stating that the movie is bad is a negative summary sample.

In the related art, when a summary generation model is trained, a maximum likelihood probability function is generally adopted as the loss function. The maximum likelihood probability function is a pairwise function constructed from the predicted representation and the ground-truth representation of a sample. However, because the maximum likelihood probability function only reflects a statistical relationship, it cannot accurately reflect semantic relationships, which affects the accuracy of the summary generation model.

In the embodiments of the present disclosure, the loss function adopted when training the summary generation model is a contrast loss function, i.e., a loss function constructed based on triplets so as to better compare the relationships between positive and negative samples. The contrast loss function ultimately employed for model training may be referred to as the total contrast loss function, to distinguish it from the contrast loss functions that appear later.

Specifically, the triplet on which the total contrast loss function is based includes: a document representation, a positive summary representation, and a negative summary representation. The training objective of the total contrast loss function based on this triplet is that the representations of semantically strongly related samples are close to each other, while the representations of semantically weakly related samples are far apart. Thus, at prediction time (i.e., when the model is used to generate the summary of a document), even if the generated summary fluctuates somewhat due to noise, the model still generates a summary with good semantic relevance, because texts with irrelevant semantics lie far apart in the representation space.

As shown in fig. 3, assume the document is a movie review of movie A which indicates that A is a good movie. Ideally, the summary of this review may be represented by the white dot, i.e., the summary "Movie A is great". Even in the presence of noise, the generated summary may be represented by the diagonally filled dots, e.g., the summary "A is awesome" or "The movie is okay". However, because semantically unrelated samples are kept far apart when the model is trained, semantically unrelated summaries, such as the summary represented by the black dot, "The movie is awful", are not generated. It is to be understood that, for simplicity of illustration, the dots in fig. 3 are shown within a manifold space.

A total contrast loss function may be constructed based on the triplet, and the summary generation model may then be trained based on the total contrast loss function; that is, the model parameters are adjusted based on the total contrast loss function until it converges.

In this embodiment, by constructing a total contrast loss function based on the document representation and its corresponding positive and negative summary representations, and training the summary generation model based on the total contrast loss function, contrastive learning is introduced into model training, which improves the accuracy of the summary generation model.

As shown in fig. 2, the input of the summary generation model is one kind of text (a document) and the output is another kind of text (a summary); therefore, the summary generation model may be a sequence-to-sequence (seq2seq) model, which generally includes an encoder and a decoder.

As shown in fig. 4, taking a summary generation model that includes an encoder and a decoder as an example, in the application stage a document is input into the encoder, the encoder processes the document to obtain an encoded representation, the encoded representation is input into the decoder, the decoder processes the encoded representation to obtain a decoded representation, and the text corresponding to the decoded representation can then be obtained, for example by a table lookup, and used as the summary corresponding to the document. The sequence-to-sequence model consisting of the encoder and decoder may adopt a model structure from the related art, such as the Transformer model. In the training stage, the input document may be referred to as a document sample, and the encoded representation and/or the decoded representation corresponding to the document sample may be used as the document representation.
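As a minimal sketch only (assuming PyTorch and a toy Transformer-based seq2seq model; the actual architecture, sizes, and tokenization used by the disclosure may differ), the following shows how an encoded representation and a decoded representation could be obtained from a document sample:

    # Minimal sketch: obtaining encoded/decoded representations from a toy seq2seq model.
    # Assumptions: PyTorch; random token ids stand in for a real tokenizer.
    import torch
    import torch.nn as nn

    class ToySummarizer(nn.Module):
        def __init__(self, vocab_size=1000, d_model=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.seq2seq = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
            self.lm_head = nn.Linear(d_model, vocab_size)   # maps decoder states to the vocabulary

        def forward(self, doc_ids, summary_ids):
            src = self.embed(doc_ids)
            tgt = self.embed(summary_ids)
            encoded = self.seq2seq.encoder(src)              # encoded representation
            decoded = self.seq2seq.decoder(tgt, encoded)     # decoded representation
            return encoded, decoded, self.lm_head(decoded)

    model = ToySummarizer()
    doc_ids = torch.randint(0, 1000, (2, 50))     # a batch of 2 document samples, 50 tokens each
    sum_ids = torch.randint(0, 1000, (2, 12))     # summary tokens fed to the decoder
    encoded, decoded, logits = model(doc_ids, sum_ids)

The encoded representation and/or the decoded representation can then serve as the document representation, as described above.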

By obtaining the document representation from the encoder and the decoder, the approach can be adapted to sequence-to-sequence model scenarios.

Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure, which provides a method for training a summary generation model, and in conjunction with the structure shown in fig. 4, the method includes:

501. The document sample is processed using the encoder in the summary generation model to obtain an encoded representation.

502. The encoded representation is processed with the decoder in the summary generation model to obtain a decoded representation.

The encoder and the decoder may adopt structures used in sequence-to-sequence models in the related art, such as the Transformer model.

503. Acquiring the generated text corresponding to the decoded representation.

The decoded representation is generally a multidimensional vector. A table of correspondences between vectors and text can be configured in advance, and the text corresponding to the decoded representation can be obtained by querying this table and used as the generated text.
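For illustration only, one simple way to realize such a lookup is to project the decoded representation onto the vocabulary, take the argmax at each position, and map the indices back to tokens; the tiny id-to-token table below is an illustrative assumption:

    # Hedged sketch: decoded representation -> generated text via a vocabulary lookup table.
    import torch

    id_to_token = {0: "the", 1: "movie", 2: "is", 3: "great"}   # illustrative table
    logits = torch.randn(1, 4, len(id_to_token))                # e.g., lm_head(decoded)
    token_ids = logits.argmax(dim=-1)                           # most likely token per position
    generated_text = " ".join(id_to_token[int(i)] for i in token_ids[0])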

504. Constructing a positive summary sample and a negative summary sample based on the generated text.

A loop translation (round-trip translation) approach can be adopted to obtain the positive summary sample.

That is, the generated text may be subjected to loop translation to obtain a loop translation result, and the loop translation result is taken as the positive summary sample.

For example, if the generated text is Chinese text 1, a translator is used to obtain the English translation corresponding to Chinese text 1, namely English text 0, and the translator is then used to obtain the Chinese translation corresponding to English text 0, namely Chinese text 2. Chinese text 2 is thus the loop translation result of Chinese text 1, i.e., Chinese text 2 can be used as a positive summary sample for Chinese text 1.

Through loop translation, a syntactically different and semantically consistent positive abstract sample can be constructed.
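A hedged sketch of this loop translation step is given below; translate is a hypothetical helper standing in for whatever machine-translation system is actually used:

    # Sketch of building a positive summary sample by loop (round-trip) translation.
    def translate(text: str, src: str, tgt: str) -> str:
        """Hypothetical machine-translation call (any MT model or service could back it)."""
        raise NotImplementedError

    def build_positive_sample(generated_text: str) -> str:
        english_0 = translate(generated_text, src="zh", tgt="en")   # Chinese text 1 -> English text 0
        chinese_2 = translate(english_0, src="en", tgt="zh")        # English text 0 -> Chinese text 2
        return chinese_2   # syntactically different but semantically consistent positive sample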

The negative summary sample can be obtained in one or more of the following ways:

(1) Perform entity replacement on the generated text to obtain an entity replacement result, and take the entity replacement result as the negative summary sample.

For example, if the generated text includes the place-name entity "Beijing", it may be replaced with another place-name entity, such as "Tianjin", to construct an error in the entity relationship, and the replaced text containing "Tianjin" is used as the negative summary sample.

(2) Perform pronoun replacement on the generated text to obtain a pronoun replacement result, and take the pronoun replacement result as the negative summary sample.

For example, if the generated text includes the personal pronoun "he", it can be replaced with "she" to construct an error in the personal pronoun, and the replaced text containing "she" is used as the negative summary sample.

(3) Perform emotion replacement on the generated text to obtain an emotion replacement result, and take the emotion replacement result as the negative summary sample.

For example, an affirmative sentence is replaced with a negative sentence; specifically, "yes" in the text is replaced with "no" to construct an emotional (sentiment) error, and the replaced text containing "no" is used as the negative summary sample.

(4) Acquire a text similar to the generated text, and take the similar text as the negative summary sample.

The similar text refers to a text that is highly similar to the generated text. Specifically, the similarity between the generated text and existing candidate texts may be computed, and the most similar candidate (top-1) or the N most similar candidates (top-N, where N is configurable) may be used as negative summary samples.

(5) Perform virtual adversarial processing on the generated text to obtain a virtual adversarial result, and take the virtual adversarial result as the negative summary sample.

Virtual adversarial training is a data augmentation technique whose key step is to add a perturbation to the input so that the model's output differs from its output on the unperturbed input. In this embodiment, through virtual adversarial processing, a perturbation may be added to the representation corresponding to the generated text, and the perturbed representation is taken as a negative summary representation.

Through the above negative-sample construction techniques, illustrated in the sketch below, strong negative summary samples that contain factual errors yet are hard to distinguish on the surface can be constructed, which effectively improves model performance.
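The sketch below shows, under stated assumptions, how several of these constructions could be wired together; the replacement tables, the retrieve_similar helper, and the perturbation scale are all hypothetical stand-ins rather than the disclosure's actual implementation:

    # Hedged sketch of negative summary sample construction: entity / pronoun / emotion
    # replacement, similar-text retrieval, and a (simplified) virtual adversarial perturbation.
    import torch

    ENTITY_SWAPS = {"Beijing": "Tianjin"}            # illustrative entity replacements
    PRONOUN_SWAPS = {"he": "she"}                    # illustrative pronoun replacements
    EMOTION_SWAPS = {"yes": "no", "great": "awful"}  # illustrative sentiment replacements

    def swap_words(text: str, table: dict) -> str:
        return " ".join(table.get(w, w) for w in text.split())

    def retrieve_similar(text: str, candidates: list, top_n: int = 1) -> list:
        # Hypothetical retrieval: rank candidates by a crude token-overlap similarity (top-N).
        def sim(a, b):
            return len(set(a.split()) & set(b.split()))
        return sorted(candidates, key=lambda c: sim(text, c), reverse=True)[:top_n]

    def perturb_representation(representation: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
        # Simplification of virtual adversarial training: add a small random perturbation
        # (a full implementation would search for the worst-case perturbation direction).
        return representation + eps * torch.randn_like(representation)

    generated = "he says the movie in Beijing is great"
    negative_samples = [swap_words(generated, ENTITY_SWAPS),
                        swap_words(generated, PRONOUN_SWAPS),
                        swap_words(generated, EMOTION_SWAPS)]
    negative_samples += retrieve_similar(generated, ["the movie is long", "a movie in Beijing"])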

By constructing positive and negative summary samples from the generated text, the set of samples is enriched and the model performance is improved.

505. Acquiring a positive summary representation corresponding to the positive summary sample and a negative summary representation corresponding to the negative summary sample.

Taking the positive summary sample as an example, a word2vec model or another text-to-vector conversion model may be adopted to convert the positive summary sample into a corresponding vector form as the positive summary representation. The negative summary representation may be obtained in a similar manner.
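As one possible (assumed) realization of this conversion, the average of word2vec word vectors can serve as the summary representation; the tiny training corpus below is purely illustrative:

    # Hedged sketch: converting a summary sample into a vector representation by
    # averaging word2vec word vectors (gensim assumed available; corpus is illustrative).
    import numpy as np
    from gensim.models import Word2Vec

    corpus = [["the", "movie", "is", "great"], ["the", "movie", "is", "awful"]]
    w2v = Word2Vec(sentences=corpus, vector_size=32, min_count=1, epochs=10)

    def text_to_vector(tokens):
        return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

    positive_repr = text_to_vector(["the", "movie", "is", "great"])   # positive summary representation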

506. Constructing a first contrast loss function based on the encoded representation, the positive summary representation, and the negative summary representation; constructing a second contrast loss function based on the decoded representation, the positive summary representation, and the negative summary representation; and constructing a total contrast loss function based on the first contrast loss function and the second contrast loss function.

As shown in fig. 4, the positive summary representation is denoted by P and the negative summary representation by N. This embodiment includes two semantic contrasts: one may be referred to as the input-side semantic contrast and the other as the output-side semantic contrast. The contrast triplet of the input-side semantic contrast includes the encoded representation, the positive summary representation, and the negative summary representation; the contrast triplet of the output-side semantic contrast includes the decoded representation, the positive summary representation, and the negative summary representation.

The specific form of the contrast loss function can be set as required, and a calculation formula can be as follows:

L = l1 + l2

where L is the total contrast loss function, l1 is the first contrast loss function, l2 is the second contrast loss function, z0 is the encoded representation, z'0 is the decoded representation, z1 is the positive summary representation, zk is a negative summary representation, n is the total number of negative summary representations, and τ is a preset temperature hyperparameter.
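One possible InfoNCE-style instantiation of l1 and l2, consistent with the variables defined above and offered here only as an assumed example (with sim(·,·) denoting, e.g., cosine similarity), is, in LaTeX notation:

    l_1 = -\log \frac{\exp\left(\mathrm{sim}(z_0, z_1)/\tau\right)}
                     {\exp\left(\mathrm{sim}(z_0, z_1)/\tau\right) + \sum_{k=1}^{n} \exp\left(\mathrm{sim}(z_0, z_k)/\tau\right)}

    l_2 = -\log \frac{\exp\left(\mathrm{sim}(z'_0, z_1)/\tau\right)}
                     {\exp\left(\mathrm{sim}(z'_0, z_1)/\tau\right) + \sum_{k=1}^{n} \exp\left(\mathrm{sim}(z'_0, z_k)/\tau\right)}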

Through the input-side semantic contrast, the factual consistency between the decoded representation and the encoded representation can be learned; that is, given one encoded representation and a plurality of decoded representations, the model learns to assign a larger similarity to a correctly matched decoded representation and a smaller similarity to an incorrectly matched one. The output-side semantic contrast learns the similarities among output-side representations; that is, the positive summary representation, which is factually consistent with the decoded representation, receives a larger similarity, while the similarity between the positive summary representation and the negative summary representation is smaller.

507. Training the summary generation model based on the total contrast loss function.

For example, the total contrast loss function is used to adjust parameters of the summary generation model until the total contrast loss function converges, or a preset number of iterations is reached.
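For illustration, a minimal training step with an assumed InfoNCE-style contrast loss (matching the sketch of l1 and l2 above) could look as follows; the pooled vectors, dimensions, and temperature are placeholders, not the disclosure's actual settings:

    # Hedged sketch of one training step with the total contrast loss L = l1 + l2.
    import torch
    import torch.nn.functional as F

    def contrast_loss(anchor, positive, negatives, tau=0.1):
        # anchor, positive: (d,) vectors; negatives: (n, d); the positive sits at index 0.
        pos_sim = F.cosine_similarity(anchor, positive, dim=0) / tau
        neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / tau
        logits = torch.cat([pos_sim.unsqueeze(0), neg_sim]).unsqueeze(0)
        return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

    d, n = 128, 5
    encoded_vec = torch.randn(d, requires_grad=True)   # pooled encoded representation z0
    decoded_vec = torch.randn(d, requires_grad=True)   # pooled decoded representation z'0
    pos_vec, neg_vecs = torch.randn(d), torch.randn(n, d)

    l1 = contrast_loss(encoded_vec, pos_vec, neg_vecs)  # input-side semantic contrast
    l2 = contrast_loss(decoded_vec, pos_vec, neg_vecs)  # output-side semantic contrast
    total_loss = l1 + l2
    total_loss.backward()   # in real training, gradients would update the summary generation model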

In this embodiment, through the two semantic contrasts, the tendency of the summary generation model to generate summaries with factual errors can be alleviated; compared with a summary generated by an ordinary seq2seq model, the summary generated by this summary generation model is more faithful to the original text while maintaining generation quality. In addition, when the summary generation model of this embodiment is adopted, no preprocessing or post-processing of the document samples during training, or of the documents during prediction, is required, which improves training and prediction efficiency.

Fig. 6 is a schematic diagram of a fourth embodiment according to the present disclosure, which provides a training apparatus for a summary generation model. As shown in fig. 6, the training apparatus 600 for the abstract generation model includes: an acquisition module 601, a construction module 602, and a training module 603.

The acquisition module 601 is configured to acquire a document representation corresponding to a document sample; the construction module 602 is configured to construct, based on the document representation, a summary representation corresponding to the document representation, the summary representation including a positive summary representation and a negative summary representation; the training module 603 is configured to construct a total contrast loss function based on the document representation, the positive summary representation, and the negative summary representation, and to train the summary generation model based on the total contrast loss function.

In some embodiments, the summary generation model comprises an encoder and a decoder, and the acquisition module 601 is specifically configured to: process the document sample with the encoder to obtain an encoded representation; process the encoded representation with the decoder to obtain a decoded representation; and use the encoded representation and/or the decoded representation as the document representation.

In some embodiments, the document representation comprises the encoded representation and the decoded representation, and the training module 603 is specifically configured to: construct a first contrast loss function based on the encoded representation, the positive summary representation, and the negative summary representation; construct a second contrast loss function based on the decoded representation, the positive summary representation, and the negative summary representation; and construct the total contrast loss function based on the first contrast loss function and the second contrast loss function.

In some embodiments, the document representation comprises the decoded representation, and the construction module 602 is specifically configured to: acquire a generated text corresponding to the decoded representation; construct a positive summary sample and a negative summary sample based on the generated text; and acquire a positive summary representation corresponding to the positive summary sample and a negative summary representation corresponding to the negative summary sample.

In some embodiments, the construction module 602 is further specifically configured to: perform loop translation on the generated text to obtain a loop translation result, and take the loop translation result as the positive summary sample.

In some embodiments, the construction module 602 is further specifically configured to perform at least one of: performing entity replacement on the generated text to obtain an entity replacement result, and taking the entity replacement result as the negative summary sample; performing pronoun replacement on the generated text to obtain a pronoun replacement result, and taking the pronoun replacement result as the negative summary sample; performing emotion replacement on the generated text to obtain an emotion replacement result, and taking the emotion replacement result as the negative summary sample; acquiring a similar text of the generated text, and taking the similar text as the negative summary sample; and performing virtual adversarial training on the generated text to obtain a virtual adversarial result, and taking the virtual adversarial result as the negative summary sample.

In this embodiment, by constructing a total contrast loss function based on the document representation and its corresponding positive and negative summary representations, and training the summary generation model based on the total contrast loss function, contrastive learning is introduced into model training, which improves the accuracy of the summary generation model.

It is to be understood that in the disclosed embodiments, the same or similar elements in different embodiments may be referenced.

It is to be understood that "first", "second", and the like in the embodiments of the present disclosure are used for distinction only, and do not indicate the degree of importance, the order of timing, and the like.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 7, the electronic device 700 includes a computing unit 701, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.

A number of components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.

The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 701 performs the respective methods and processes described above, such as the method for training the summary generation model. For example, in some embodiments, the method for training the summary generation model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the above-described method for training the summary generation model may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for training the summary generation model.

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of high management difficulty and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
