Generative text summarization system and method


1. A method for a generative text summarization model, comprising:

receiving an input text data set;

expanding a search space for one or more candidate words to be selected for inclusion in a text summary, wherein the one or more candidate words within the search space are ranked using a best-first search algorithm; and

re-ranking the one or more candidate words to be included in the text summary using a soft-bound word reward (SBWR) algorithm, wherein the SBWR algorithm applies a decreasing reward value to the one or more candidate words when the text summary exceeds a predicted length threshold, and wherein the SBWR algorithm applies an increasing reward value to the one or more candidate words when the text summary is below the predicted length threshold.

2. The method of claim 1, wherein the SBWR algorithm selects the one or more candidate words when the text summary is equal to the predicted length threshold.

3. The method of claim 1, wherein the SBWR algorithm operates according to the following formula:

4. The method of claim 1, further comprising: smoothing the decreasing reward value and the increasing reward value using a sigmoid function.

5. The method of claim 1, further comprising: scaling the decreasing reward value and the increasing reward value using weighting values trained to select the one or more candidate words to be included in the text summary.

6. The method of claim 1, further comprising: re-ranking the one or more candidate words when the input text data set exceeds a predefined length threshold.

7. The method of claim 1, further comprising: calculating a brevity penalty (BP) normalization that applies a penalty to the one or more candidate words that do not meet the predicted length threshold.

8. The method of claim 7, wherein the BP normalization is calculated by adding a logarithm of a brevity penalty to a length-normalized scoring function.

9. The method of claim 8, wherein the brevity penalty is designed such that the generative text summarization model does not produce overly short summaries from the input text data set.

10. The method of claim 8, wherein the brevity penalty comprises a copy rate value that reduces the brevity penalty to zero.

11. The method of claim 1, further comprising: training the generative text summarization model using a transformer neural model.

12. The method of claim 11, wherein the transformer neural model comprises an encoder machine learning algorithm and a decoder machine learning algorithm.

13. The method of claim 12, further comprising: inputting the input text data set to the encoder machine learning algorithm; and inputting a target summary text data set to the decoder machine learning algorithm.

14. The method of claim 13, wherein the transformer neural model uses one or more source tokens to determine probability values for one or more target summary tokens.

15. The method of claim 14, wherein the transformer neural model determines the probability values for the one or more target summary tokens using the one or more source tokens based on the following equation:

16. A system operable to employ a generative text summarization model, comprising:

a memory operable to store an input text data set;

a processor operable to:

expand a search space for one or more candidate words to be selected for inclusion in a text summary, wherein the one or more candidate words within the search space are ranked using a best-first search algorithm; and

re-rank the one or more candidate words to be included in the text summary using a soft-bound word reward (SBWR) algorithm, wherein the SBWR algorithm applies a decreasing reward value to the one or more candidate words when the text summary exceeds a predicted length threshold, and wherein the SBWR algorithm applies an increasing reward value to the one or more candidate words when the text summary is below the predicted length threshold.

17. The system of claim 16, wherein the SBWR algorithm selects the one or more candidate words when the text summary equals the predicted length threshold.

18. The system of claim 16, wherein the processor is further operable to: smooth the decreasing reward value and the increasing reward value using a sigmoid function.

19. The system of claim 16, wherein the processor is further operable to: scale the decreasing reward value and the increasing reward value using weighting values trained to select the one or more candidate words to be included in the text summary.

20. A non-transitory computer readable medium operable to employ a generative text summarization model, the non-transitory computer readable medium having stored thereon computer readable instructions that, when executed, perform the functions of:

receiving an input text data set;

expanding a search space for one or more candidate words to be selected for inclusion in a text summary, wherein the one or more candidate words within the search space are ranked using a best-first search algorithm; and

re-ranking the one or more candidate words to be included in the text summary using a soft-bound word reward (SBWR) algorithm, wherein the SBWR algorithm applies a decreasing reward value to the one or more candidate words when the text summary exceeds a predicted length threshold, and wherein the SBWR algorithm applies an increasing reward value to the one or more candidate words when the text summary is below the predicted length threshold.

Background

Text summarization strategies tend to employ machine learning algorithms to generate concise summaries of larger bodies of text. For example, a short paragraph-length summary may be generated for a longer news article or for a document that may be tens to hundreds of pages long. The machine learning algorithms employed must screen out redundant or unimportant information and generate summaries that accurately convey the meaning of the larger text.

Disclosure of Invention

A system and method for a generative text summarization model are disclosed. The model may receive an input text data set and expand a search space of one or more candidate words to be selected for inclusion in a text summary. The model may rank the one or more candidate words within the search space using a best-first search algorithm. The model may also use a soft-bound word reward (SBWR) algorithm to re-rank the one or more candidate words to be included in the text summary. It is contemplated that the SBWR algorithm may apply a decreasing reward value to the one or more candidate words when the text summary exceeds a predicted length threshold. The SBWR algorithm may also apply an increasing reward value to the one or more candidate words when the text summary is below the predicted length threshold. The SBWR algorithm may further select the one or more candidate words when the text summary equals the predicted length threshold.

The model may further smooth the decreasing reward value and the increasing reward value using a sigmoid function. The decreasing reward value and the increasing reward value may be scaled using weighting values trained to select the one or more candidate words to be included in the text summary. Further, the one or more candidate words may be re-ranked when the input text data set exceeds a predefined length threshold.

A brevity penalty (BP) normalization can be computed to apply a penalty to one or more candidate words that do not meet the predicted length threshold. The BP normalization can be calculated by adding the logarithm of the brevity penalty to a length-normalized scoring function. Furthermore, the brevity penalty may be designed so that the generative text summarization model does not produce overly short summaries from the input text data set. The brevity penalty may also include a copy rate value that reduces the brevity penalty to zero.

The generative text summarization model may also be trained using a transformer neural model that includes an encoder machine learning algorithm and a decoder machine learning algorithm. During the training sequence, the input text data set may be fed to the encoder machine learning algorithm, and the target summary text data set may be fed to the decoder machine learning algorithm. The transformer neural model may also use one or more source tokens to determine probability values for one or more target summary tokens.

Drawings

FIG. 1 is an exemplary system employing a generative text summarization neural model.

FIG. 2 is an exemplary flow chart for using a generative text summarization neural model.

FIG. 3 is an exemplary source code portion for implementing a best-first search strategy.

FIG. 4 is an exemplary embodiment of a transformer neural model used to train a generative text summarization neural model.

Detailed Description

Embodiments of the present disclosure are described herein. However, it is to be understood that the disclosed embodiments are merely examples and that other embodiments may take various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the embodiments. As one of ordinary skill in the art will appreciate, various features illustrated and described with reference to any one of the figures may be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combination of features illustrated provides a representative embodiment for a typical application. However, various combinations and modifications of the features consistent with the teachings of the present disclosure may be desired for particular applications or implementations.

Text summarization is generally the process of condensing a larger text (e.g., a long news article) to generate a contextually accurate summary of the original input text data set. To generate an accurate summary, various algorithms may attempt to account for the length, writing style, and syntax of the original text. Two known methods for performing text summarization are extractive summarization and generative (i.e., abstractive) summarization. An extractive summary typically operates by selecting sentences from the original text and using them as part of the summary.

Alternatively, a generative summary may construct an internal semantic representation and create an abstractive target summary from the original text using natural language generation techniques. Thus, a generative summarization system may create a target summary that is more accurate than an extractive summary. In addition, a generative summary may be more abstract and may express a meaning that is closer to that of the original text.

It is envisaged that a transformer neural framework employing word embeddings and an encoder-decoder structure may be used to improve the output summary of the generative summarization system. During the decoding phase, multiple summary hypotheses may be generated as candidates from which the system selects the summary output. If the search strategy employs the known "beam search" algorithm, the possible output candidates may closely resemble one another, with only minor variations in a given word. Thus, it is also contemplated that a different strategy may be employed, wherein the search space of summary candidates is first expanded.

For example, a best-first search algorithm may be employed to expand the search space, thereby generating more diverse candidates. Once diversified, candidate summaries with different styles or different emphasis on the information can be selected. A re-ranking method may then be employed to select the best candidate as the output. The re-ranking method may employ a soft-bound word reward (SBWR) algorithm that selects the best candidate as the output summary.

FIG. 1 illustrates an exemplary system 100 that can be used to employ a generative text summary neural model. The system 100 may include at least one computing device 102. The computing system 102 may include at least one processor 104 operatively connected to a memory unit 108. The processor 104 may be one or more integrated circuits that implement the functionality of the Processing Unit (PU) 106. The PU 106 may be a commercially available Central Processing Unit (CPU) that implements instructions such as one of the x86, ARM, Power, or MIPS instruction set families. Alternatively, processing unit 106 may be a commercially available Graphics Processing Unit (GPU) that consists of hundreds of cores operable to process a large number of parallel tasks (i.e., parallel computations) simultaneously.

During operation, the PU 106 may execute stored program instructions retrieved from the memory unit 108. The stored program instructions may include software that controls the operation of the PU 106 to perform the operations described herein. In some examples, the processor 104 may be a system on a chip (SoC) that integrates the functionality of the PU 106, the memory unit 108, the network interface, and the input/output interface into a single integrated device. Computing system 102 may implement an operating system for managing various aspects of operations.

Memory unit 108 may include volatile and non-volatile memory for storing instructions and data. Non-volatile memory may include solid-state memory, such as NAND flash memory, magnetic and optical storage media, or any other suitable data storage device that retains data when the computing system 102 is deactivated or loses power. Volatile memory can include static and dynamic Random Access Memory (RAM), which stores program instructions and data. For example, the memory unit 108 may store a machine learning model 110 or algorithm, a training data set 112 of the machine learning model 110, and/or raw source data 115.

The computing system 102 may include a network interface device 122, the network interface device 122 configured to provide communication with external systems and devices. For example, the network interface device 122 may include a wired Ethernet interface and/or a wireless interface as defined by the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards. The network interface device 122 may include a cellular communication interface for communicating with a cellular network (e.g., 3G, 4G, 5G). The network interface device 122 may be further configured to provide a communication interface to an external network 124 or cloud.

The external network 124 may be referred to as the world wide web or the internet. External network 124 may establish standard communication protocols between computing devices. External network 124 may allow information and data to be readily exchanged between the computing device and the network. One or more servers 130 may communicate with the external network 124.

Computing system 102 may include input/output (I/O) interface 120, which may be configured to provide digital and/or analog input and output. The I/O interface 120 may include an additional serial interface (e.g., a Universal Serial Bus (USB) interface) for communicating with external devices.

Computing system 102 can include a human-machine interface (HMI) device 118, where HMI device 118 can include any device that enables system 100 to receive control inputs. Examples of input devices may include human interface inputs such as a keyboard, mouse, touch screen, voice input device, and other similar devices. Computing system 102 may include a display device 132. Computing system 102 may include hardware and software for outputting graphical and textual information to a display device 132. Display device 132 may include an electronic display screen, a projector, a printer, or other suitable device for displaying information to a user or operator. The computing system 102 may be further configured to allow interaction with remote HMIs and remote display devices via the network interface device 122.

System 100 may be implemented using one or more computing systems. While this example depicts a single computing system 102 implementing the described features, it is intended that the various features and functions may be separated and implemented by multiple computing units in communication with each other. The architecture selected may depend on a variety of factors.

The system 100 may implement a machine learning algorithm 110 configured to analyze raw source data 115 (or a data set). The raw source data 115 may include raw or unprocessed sensor data, which may represent an input data set for a machine learning system. Raw source data 115 may include video, video clips, images, and raw or partially processed sensor data (e.g., data from a digital camera or LiDAR sensor). In some examples, the machine learning algorithm 110 may be a neural network algorithm (e.g., transformer, CNN, RNN, or DNN), which may be designed to perform a predetermined function.

Fig. 2 illustrates an exemplary flow diagram 200 for employing a generative text summarization neural model. The flow diagram 200 may begin at block 202, where a plain text data set may be provided as input to the generative summarization system. The data set may be text provided from a keyboard, or the text may be provided from one or more documents stored in the memory unit 108. The text may also be a web page or document provided from the external network 124.

The flow diagram may then proceed to block 204, where a decoding stage may be employed to determine an optimal output summary based on the input text data set. It is contemplated that a "beam search" algorithm could be employed to determine a near-optimal solution during the sequence token decoding process. Preferably, however, a best-first search strategy (e.g., greedy best-first search or pure heuristic search) may be employed, in which the candidate (i.e., possible selection) with the best score is favored for expansion.

FIG. 3 illustrates an exemplary source code portion for implementing the best-first search strategy. As illustrated, the best-first search strategy may employ a priority heap that maintains partial summaries of the input text. The partial summaries may be scored according to a heuristic function. The best-first search algorithm may iteratively take the highest-scoring partial summary and expand that partial summary by one word. The newly extended summary sequence may then be placed (i.e., pushed) back into the priority heap. The best-first search strategy may also generate the top-k candidates for a new summary sequence. It is contemplated that the top-k candidates may be generated by selecting the word with the highest probability score P and iteratively appending the selected word to the partial summary y, using equation 1 below:

[Equation 1]
$$S\!\left(y_{<j} \oplus y_j,\; x\right) = \log P\!\left(y_{<j} \oplus y_j \mid x\right)$$

where the logarithm of the probability score $P$ is conditioned on the input text $x$, and the concatenation operator $\oplus$ appends the selected word $y_j$ to the current partial summary $y_{<j}$.
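Because the source code of FIG. 3 is not reproduced in this text, the following minimal Python sketch illustrates a best-first search decoder of the kind described above. The toy next-word distribution, function names, and parameters are illustrative assumptions only; a real system would query the trained transformer decoder for next-word probabilities.

import heapq
import math

def toy_next_word_probs(partial_summary, source_text):
    """Stand-in for a neural decoder: returns {word: probability} for the
    next position. A real system would query the transformer model here."""
    return {"the": 0.4, "robot": 0.3, "moves": 0.2, "</s>": 0.1}

def best_first_search(source_text, k=3, max_len=10):
    """Expand the search space with a priority heap of partial summaries.

    Each heap entry holds (-score, partial_summary); the score is the sum of
    log-probabilities (cf. equation 1), so popping the smallest negative
    score yields the highest-scoring partial summary to expand next."""
    heap = [(0.0, [])]           # start from an empty partial summary
    completed = []               # finished candidates (ending in </s>)

    while heap and len(completed) < k:
        neg_score, partial = heapq.heappop(heap)
        if partial and (partial[-1] == "</s>" or len(partial) >= max_len):
            completed.append((-neg_score, partial))
            continue
        # Expand the best partial summary by one word and push the
        # extended sequences back onto the priority heap.
        for word, prob in toy_next_word_probs(partial, source_text).items():
            new_score = -neg_score + math.log(prob)
            heapq.heappush(heap, (-new_score, partial + [word]))
    return completed

if __name__ == "__main__":
    for score, summary in best_first_search("example source text"):
        print(round(score, 3), " ".join(summary))

Unlike beam search, the heap always expands the globally highest-scoring partial summary, which tends to yield a more diverse top-k pool of candidates for the subsequent re-ranking step.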

The flow diagram may then proceed to block 206, where a re-ranking process is applied to the summary candidates and the candidate that produces the best result is selected. It is envisaged that, even though the best-first search process expands the search space to provide one or more diverse candidates, a re-ranking process may still be necessary to rank those candidates.

For example, one important aspect to consider during text summarization is the length of the input text data (e.g., the length of the input text sentence or character string). The best-first search strategy will typically provide enhanced results (i.e., higher output scores) for shorter candidates. However, short summaries may be too abstract and may lose key information from the original text. In fact, in some applications, a summary that is too short, comprising only a few words, may not be informative, even though the best-first search strategy may assign it a high log score under equation 1 above.

Length normalization, which adjusts term frequency or relevance scores, may be employed to normalize the effect of text length on document ranking. Length normalization may be employed in particular so that longer text strings or sentences are considered during re-ranking. It is generally understood that length normalization may provide better results than a beam search algorithm alone. A brevity penalty (BP) normalization value may then be calculated so that the scoring is suitable for the summarization task. The BP-norm value may also apply a penalty to summaries that do not meet a predefined expected length. The BP-norm score may be calculated by adding the logarithm of a brevity penalty to a length-normalized scoring function, as shown in equation 2 below:

[Equation 2]
$$S_{\mathrm{bp}}(x, y) = \log \mathrm{bp} + \frac{1}{|y|} \sum_{j=1}^{|y|} \log P\!\left(y_j \mid y_{<j}, x\right)$$

where $x = (x_1, \ldots, x_{|x|})$ is the input sequence and $y = (y_1, \ldots, y_{|y|})$ is the output hypothesis. It is envisaged that the brevity penalty (bp), which can be used to penalize overly short outputs, can be calculated using equation 3 below:

[Equation 3]
$$\mathrm{bp} = \min\!\left(e^{\,1 - 1/r},\; 1\right)$$

where $r$ is the copy rate, which may be defined as the percentage of summary tokens seen in the source text, scaled by a factor $c$. It is contemplated that when the copy rate $r$ is equal to 1, the logarithm of the penalty is reduced to a value close to or equal to 0, so that effectively no penalty is applied. The penalty term may be further modified to favor a summary with more content copied from the source text, as shown in equations 4a and 4b below:

[Equation 4a]

[Equation 4b]
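As a worked illustration of equations 2 and 3, the following Python sketch scores a candidate summary by adding the logarithm of the brevity penalty to the length-normalized log-likelihood. The copy-rate computation and the scaling constant c are illustrative assumptions that follow the description above, not code taken from the disclosure.

import math

def copy_rate(summary_tokens, source_tokens, c=1.0):
    """Fraction of summary tokens that also appear in the source, scaled by c."""
    source = set(source_tokens)
    seen = sum(1 for tok in summary_tokens if tok in source)
    return c * seen / max(len(summary_tokens), 1)

def bp_norm_score(token_logprobs, summary_tokens, source_tokens, c=1.0):
    """Equation 2 (sketch): log(bp) plus the length-normalized log-likelihood,
    with bp = min(exp(1 - 1/r), 1) from equation 3."""
    r = copy_rate(summary_tokens, source_tokens, c)
    bp = min(math.exp(1.0 - 1.0 / r), 1.0) if r > 0 else 1e-9
    length_norm = sum(token_logprobs) / len(token_logprobs)
    return math.log(bp) + length_norm

# Example: a summary whose tokens all appear in the source (r = 1) incurs
# no penalty, because log(bp) = 0.
src = "the robot arm moves the part to the left bin".split()
summ = "the robot moves the part".split()
logps = [-0.2, -0.4, -0.3, -0.1, -0.5]   # per-token log-probabilities
print(round(bp_norm_score(logps, summ, src), 3))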

The calculated penalty term can therefore be converted directly into a coefficient that multiplies the likelihood score. Next, a soft-bound word reward (SBWR) algorithm may be employed to re-rank the candidates, as shown in equation 5 below:

[Equation 5]
$$S_{\mathrm{SBWR}}(x, y) = \log P(y \mid x) + r \sum_{j=1}^{|y|} \sigma\!\left(\ell_{\mathrm{pred}} - j\right)$$

The SBWR algorithm may assign a reward to each word in the summary. If the decoded summary length is greater than the predicted length threshold (i.e., $|y| > \ell_{\mathrm{pred}}$), the SBWR algorithm applies a diminishing reward to each added word, where the reward for the $j$-th word may be defined as $\sigma(\ell_{\mathrm{pred}} - j)$. When the decoded summary length is shorter than the predicted threshold (i.e., $|y| < \ell_{\mathrm{pred}}$), the SBWR algorithm rewards each word nearly in full. It is expected that the SBWR algorithm may therefore prefer candidates whose length is closest to the predicted length $\ell_{\mathrm{pred}}$. Further, the sigmoid function $\sigma(\cdot)$ smooths the reward values, and the coefficient $r$ scales the total reward and is tuned on validation data. The flow diagram may then proceed to block 208, where an output text summary is generated from the candidate that receives the highest SBWR score.
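The soft-bound word reward of equation 5 may similarly be illustrated with a short sketch. The base score is taken here to be the candidate's total log-likelihood, and the reward coefficient and predicted length are assumed example values; in practice the coefficient would be tuned on validation data, as noted above.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sbwr_score(base_logprob, summary_length, predicted_length, reward_coeff=0.4):
    """Sketch of equation 5: add a sigmoid-smoothed per-word reward to the
    candidate's log-likelihood. Words up to the predicted length earn a
    reward near 1; words beyond the soft bound earn a reward near 0."""
    reward = sum(sigmoid(predicted_length - j)
                 for j in range(1, summary_length + 1))
    return base_logprob + reward_coeff * reward

# Three candidates with the same average per-token log-probability (-0.3)
# but different lengths; the predicted length threshold is 12 tokens.
# The 12-token candidate scores highest: shorter candidates forgo reward,
# longer ones pay a likelihood cost with almost no additional reward.
for length in (6, 12, 20):
    print(length, round(sbwr_score(length * -0.3, length, predicted_length=12), 3))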

FIG. 4 illustrates an exemplary embodiment of a transformer neural model 400 that may be used to train the generative text summarization neural model. The transformer neural model 400 may include an encoder structure 404 and a decoder structure 408. To train the system, input source text 402, which may include a series of tokens, may be input into the encoder structure 404. In addition, target summary text 406, which may also include a series of tokens or text strings, may be input into the decoder structure 408. It is contemplated that, given a sequence of source tokens, the transformer neural model 400 may determine the probability of the target summary tokens as shown in equation 6 below:

[Equation 6]
$$P(y \mid x) = \prod_{t=1}^{|y|} P\!\left(y_t \mid y_{<t}, x\right)$$

where $y = (y_1, \ldots, y_{|y|})$ is the target summary token sequence and $x = (x_1, \ldots, x_{|x|})$ is the source token sequence.

FIG. 4 also illustrates that, during the training phase, both the input (source) text 402 and the target summary text 406 may be presented as training instances so as to maximize the likelihood of observing the given set of training instances (equivalently, to minimize the training loss). In the decoding phase, given the parameters $\theta$ learned by the transformer neural model 400, the generative summarization system may determine the output $y$ using equation 7 below:

[Equation 7]
$$y^{*} = \operatorname*{arg\,max}_{y} \log P(y \mid x; \theta)$$

The processes, methods, or algorithms disclosed herein may be delivered to/implemented by a processing device, controller, or computer, which may include any existing programmable or dedicated electronic control unit. Similarly, the processes, methods or algorithms may be stored as data and instructions executable by a controller or computer in a variety of forms including, but not limited to, information permanently stored on non-writable storage media such as ROM devices and information replaceably stored on writable storage media such as floppy disks, magnetic tapes, CDs, RAM devices and other magnetic and optical media. A process, method, or algorithm may also be implemented in a software executable object. Alternatively, the processes, methods, or algorithms may be embodied in whole or in part using suitable hardware components such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software, and firmware components.

While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms encompassed by the claims. The words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure. As previously mentioned, the features of the various embodiments may be combined to form further embodiments of the invention that may not be explicitly described or illustrated. While various embodiments may have been described as providing advantages over or being preferred over other embodiments or prior art implementations in terms of one or more desired characteristics, those of ordinary skill in the art will realize that one or more features or characteristics may be compromised to achieve desired overall system attributes, which depend on the specific application and implementation. These attributes may include, but are not limited to, cost, strength, durability, life cycle cost, marketability, appearance, packaging, size, serviceability, weight, manufacturability, ease of assembly, and the like. As such, to the extent that any embodiment is described as less desirable than other embodiments or prior art implementations with respect to one or more characteristics, such embodiments are not outside the scope of the present disclosure and may be desirable for particular applications.
