Sample processing method, device, equipment and medium

Document No. 8303, published 2021-09-17

1. A method of sample processing, the method comprising:

acquiring an initial training sample of a preset text matching model, and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model;

and according to the clustering result and the time stamp of each initial training sample, carrying out duplicate removal and correction on the negative samples in the initial training samples to obtain the target model training samples.

2. The method of claim 1, wherein the clustering the query texts in the initial training sample comprises:

converting the query text into a text vector;

selecting a preset number of text vectors from the text vectors as clustering centers based on a genetic algorithm, and performing text vector clustering processing;

and finishing the clustering processing when the clustering effect meets the preset condition.

3. The method according to claim 2, wherein the selecting a preset number of text vectors from the text vectors as clustering centers based on a genetic algorithm and performing text vector clustering comprises:

randomly selecting a preset number of text vectors as initial population points of a first generation of a genetic algorithm and clustering center points in a first clustering, and performing genetic calculation and clustering analysis;

and taking the initial population points of each subsequent iteration in the genetic algorithm as the cluster center points of the corresponding round of cluster analysis.

4. The method according to claim 3, wherein when the clustering effect satisfies a preset condition, completing the clustering process comprises:

and when the cost function in the clustering algorithm and the cost function in the genetic algorithm simultaneously meet the convergence condition, ending the clustering operation and the iteration process of the genetic algorithm.

5. The method of claim 1, wherein the deduplicating and correcting the negative samples in the initial training samples according to the result of the clustering process and the time stamp of each initial training sample comprises:

for the text vectors belonging to the same class in the clustering result, grouping the initial training samples according to the time stamps of the initial training samples corresponding to the text vectors;

when the initial training samples in the same group comprise both positive samples and negative samples, correcting the negative samples in the group into positive samples, and deduplicating the corrected positive samples into a single positive sample;

and when the initial training samples in the same group are all negative samples, deduplicating the negative samples into a single negative sample.

6. The method of claim 5, wherein grouping the initial training samples according to the timestamps of the initial training samples corresponding to the text vectors comprises:

sorting the initial training samples in chronological order of the time stamps of the initial training samples corresponding to the text vectors;

and taking the sorted initial training samples that fall within the same time window of a preset length as one group of samples.

7. The method of claim 1, further comprising:

and performing model training on the preset text matching model through the target model training sample to obtain a target text matching model.

8. The method of claim 7, further comprising:

acquiring text matching keywords;

and inputting the text matching keywords into the target text matching model to obtain a target text matching result.

9. A sample processing device, the device comprising:

the text clustering module is used for acquiring an initial training sample of a preset text matching model and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model;

and the sample processing module is used for deduplicating and correcting the negative samples in the initial training samples according to the clustering result and the time stamps of the initial training samples to obtain the target model training samples.

10. A computer device, characterized in that the computer device comprises:

one or more processors;

a memory for storing one or more programs;

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the sample processing method of any one of claims 1-8.

11. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the sample processing method according to any one of claims 1-8.

Background

In a knowledge question-answering system, text content associated with a query text input by a user (for example, a number of articles related to the keywords) is provided to the user so that the user can click and read it. The ranking of the articles fed back to the user directly influences the user's experience of the knowledge question-answering system.

In such a system, after a user inputs a query text, whether or not the user clicks a relevant article fed back by the system determines whether that article is collected as a positive or a negative sample for training the system's model.

However, in the course of implementing the present invention, the inventors found at least the following technical problem in the prior art: during model training of the knowledge question-answering system, the selection of negative samples is too coarse. In some cases an article fed back for a query text is never clicked and yet is not a true negative sample. The quality of the model training samples therefore needs to be improved, and the model, whose learning result depends on the quality of the sample data, needs further optimization.

Disclosure of Invention

The embodiments of the invention provide a sample processing method, apparatus, device, and medium that improve the quality of model training samples, so that the text matching model learns better and the output of the trained text matching model is more accurate.

In a first aspect, an embodiment of the present invention provides a sample processing method, where the method includes:

acquiring an initial training sample of a preset text matching model, and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model;

and according to the clustering result and the time stamp of each initial training sample, carrying out duplicate removal and correction on the negative samples in the initial training samples to obtain the target model training samples.

In a second aspect, an embodiment of the present invention further provides a sample processing apparatus, including:

the text clustering module is used for acquiring an initial training sample of a preset text matching model and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model;

and the sample processing module is used for deduplicating and correcting the negative samples in the initial training samples according to the clustering result and the time stamps of the initial training samples to obtain the target model training samples.

In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:

one or more processors;

a memory for storing one or more programs;

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a sample processing method as provided by any embodiment of the present invention.

In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a sample processing method as provided in any embodiment of the present invention.

The embodiment of the invention has the following advantages or beneficial effects:

in the embodiments of the invention, the query texts in the initial training samples of a preset text matching model, namely the keywords input into the model, are clustered; the clustered samples are then deduplicated and their labels corrected according to cluster category and the corresponding sample timestamps, that is, multiple initial training samples generated within a certain time are deduplicated or have their labels corrected, finally yielding target model training samples of higher data quality. This solves the prior-art problem that the collected training samples of a preset text matching model contain mislabeled negative samples and a high repetition rate, resulting in low sample data quality: samples are deduplicated according to query-text similarity and sample timestamps in the initial training samples, improving the quality of the training samples of the preset text matching model.

Drawings

FIG. 1 is a flow chart of a sample processing method according to an embodiment of the present invention;

FIG. 2 is a data diagram of a text query record according to an embodiment of the present invention;

FIG. 3 is a diagram illustrating a text matching result of a text query according to an embodiment of the present invention;

FIG. 4 is a flowchart of a sample processing method according to a second embodiment of the present invention;

fig. 5 is a schematic diagram of a query text clustering analysis process according to a second embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a sample processing device according to a third embodiment of the present invention;

fig. 7 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.

Example one

Fig. 1 is a flowchart of a sample processing method according to an embodiment of the present invention, which is applicable to constructing high-quality training samples for a text matching model or question-answering model. The method may be performed by a sample processing apparatus, which may be implemented in software and/or hardware and integrated in a computer device having application development functionality.

As shown in fig. 1, the sample processing method includes the steps of:

s110, obtaining an initial training sample of a preset text matching model, and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model.

The preset text matching model may be a question-answer model of a knowledge question-answering system, used to match answers to queried questions, or an article matching model that matches related text content to input keywords. Correspondingly, the query text serves as the keywords input into the preset text matching model and may be a single word, a phrase of several words, or text of varying length such as a sentence. In general, the preset text matching model outputs ranked results that match the query text, and the accuracy of that ranking depends on CTR (click-through rate) prediction. For a text matching model whose output must be ranked, training samples are typically collected according to whether each fed-back text is clicked after the query. Suppose that after a user inputs a query text into a preset text matching model such as a knowledge question-answering system, the system feeds back 20 related texts and the user clicks one of them; 20 samples can then be collected, comprising 1 positive sample and 19 negative samples. That is, a fed-back text that is clicked and viewed in one query is a positive sample, and a text fed back but not clicked in the same query is a negative sample.

However, negative samples obtained by the above collection method carry considerable noise. For example, in the text query data table shown in fig. 2, each sample contains information such as time, sample number, query text, user identifier, and clicked-text identifier, where the letters A, B, C, D, and E stand for different query words. A user input 4 different query texts within a few seconds. In the first 3 queries the user clicked no text (recorded as nan, indicating null), and in the 4th query clicked the content identified as 21508, so the 4 query records are collected as 3 negative samples and 1 positive sample. However, the text matching results presented for these queries, shown in fig. 3, reveal that the article of interest to the user, identified as 21508, appeared in all 4 query records and was ranked first each time, yet for some reason the user did not click to view it after the earlier queries. In other words, although the four queries did not return exactly the same text list, the first-ranked article was the same and the same user left it unclicked. The query texts of the four queries are so similar that they could even be regarded as one query, so the negative samples derived from the table are not necessarily true negatives. The cause may be that, after inputting the query text, the user already obtained the desired information from the partial abstracts displayed in the result list without clicking any entry; or that, owing to network lag, the user repeatedly input different but similar query texts while searching for the same content.

In this embodiment, in order to reduce the noise in the negative samples, the query texts are first clustered, so that the negative samples in the initial training samples can be semantically corrected and deduplicated according to query-text similarity.

Specifically, the clustering of the query texts may use any algorithm suitable for text classification among common clustering approaches such as partition methods, hierarchical methods, density-based methods, grid-based methods, or model-based methods. Illustratively, the K-means method (a partition method) may be adopted: among N query texts, K groups are constructed, finally yielding K cluster categories, where N and K are both natural numbers and K is smaller than N. During cluster analysis, the K cluster centers may be selected randomly or according to a preset classification rule to obtain a clustering result.
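As an illustrative sketch only (the function name and the toy 2-D vectors below are not from the disclosure), a basic K-means partition of query-text vectors into K clusters could look like:

```python
import math
import random

def kmeans(vectors, k, iterations=20, seed=0):
    """Partition `vectors` into k clusters; returns (centers, labels)."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)  # random initial cluster centers
    labels = [0] * len(vectors)
    for _ in range(iterations):
        # assign each vector to the nearest center (Euclidean distance)
        labels = [min(range(k), key=lambda c: math.dist(v, centers[c]))
                  for v in vectors]
        # recompute each center as the mean of its assigned vectors
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:  # keep the old center if a cluster went empty
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centers, labels

# Two obvious 2-D groups: points near (0, 0) and points near (10, 10).
vecs = [(0.0, 0.1), (0.2, 0.0), (10.0, 10.1), (9.9, 10.0)]
centers, labels = kmeans(vecs, k=2)
print(labels)  # the two near-origin vectors share one label, the others the other
```

Real text vectors would of course be high-dimensional; the algorithm is unchanged.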

And S120, deduplicating and correcting the negative samples in the initial training samples according to the clustering result and the time stamp of each initial training sample to obtain the target model training samples.

Specifically, when deduplicating and correcting the negative samples, the acquisition time of each initial training sample is considered in addition to the semantic similarity of the query texts: only queries made close together in time can be regarded as one query. For query texts belonging to the same cluster, the initial training samples can be grouped according to the timestamps of the corresponding samples. That is, a time window is preset, and text queries and matches within the same time window can be regarded as one query. In implementation, the initial training samples may be sorted by the timestamps of the samples corresponding to the query texts; assuming a time window of 20 seconds, starting from the first-ranked sample, all samples whose timestamps fall within 20 seconds of it form one group, and the same sample is never placed in two groups.
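The time-window grouping described above might be sketched as follows (the dict-based sample records and field names are illustrative assumptions, not the disclosure's data format):

```python
def group_by_time_window(samples, window_seconds=20):
    """Group samples so that every sample in a group falls within
    `window_seconds` of the group's first (earliest) sample.
    Each sample is a dict with at least a numeric 'timestamp' key."""
    ordered = sorted(samples, key=lambda s: s["timestamp"])
    groups, current = [], []
    for s in ordered:
        # start a new group once the window of the first member is exceeded
        if current and s["timestamp"] - current[0]["timestamp"] > window_seconds:
            groups.append(current)
            current = []
        current.append(s)
    if current:
        groups.append(current)
    return groups

samples = [
    {"timestamp": 0, "label": 0},
    {"timestamp": 5, "label": 0},
    {"timestamp": 18, "label": 1},
    {"timestamp": 60, "label": 0},  # far outside the first window
]
groups = group_by_time_window(samples)
print(len(groups))  # prints 2: the first three samples share a window
```

Each sample lands in exactly one group, matching the requirement that the same sample never appear in two sample groups.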

Further, when the initial training samples in the same group comprise both positive and negative samples, the negative samples in the group are corrected into positive samples, and the corrected positive samples are deduplicated into a single positive sample; when the initial training samples in the same group are all negative samples, they are deduplicated into a single negative sample. Taking the 4 query record samples in fig. 2 as an example again, the 4 samples can be placed in one group and deduplicated into 1 positive sample. This step improves the accuracy of the model training data set, reduces its redundancy, and thus optimizes the quality of the model training samples.
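A hedged sketch of the correction and deduplication rules for one group (the sample records and the label encoding, 1 for positive and 0 for negative, are illustrative assumptions):

```python
def dedup_and_correct(group):
    """Apply the two rules to one time-window group of samples:
    - mixed positives (label 1) and negatives (label 0): correct the
      negatives to positive, then keep a single positive sample;
    - all negatives: keep a single negative sample."""
    labels = {s["label"] for s in group}
    if 1 in labels:
        return [{**group[0], "label": 1}]  # one corrected positive sample
    return [group[0]]                      # one deduplicated negative sample

mixed = [{"id": 1, "label": 0}, {"id": 2, "label": 0}, {"id": 3, "label": 1}]
print(dedup_and_correct(mixed))    # a single positive sample remains

all_neg = [{"id": 4, "label": 0}, {"id": 5, "label": 0}]
print(dedup_and_correct(all_neg))  # a single negative sample remains
```

Applied to the four records of fig. 2 (three negatives and one positive in one window), this yields exactly one positive sample, as the text describes.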

According to the technical scheme of this embodiment, the query texts in the initial training samples of a preset text matching model, namely the keywords input into the model, are clustered; the clustered samples are then deduplicated and their labels corrected according to cluster category and the corresponding sample timestamps, that is, multiple initial training samples generated within a certain time are deduplicated or have their labels corrected, finally yielding target model training samples of higher data quality. This solves the prior-art problem that the collected training samples of a preset text matching model contain mislabeled negative samples and a high repetition rate, resulting in low sample data quality: samples are deduplicated according to query-text similarity and sample timestamps in the initial training samples, improving the quality of the training samples of the preset text matching model.

Example two

Fig. 4 is a flowchart of a sample processing method provided in the second embodiment of the present invention. This embodiment belongs to the same inventive concept as the sample processing method of the foregoing embodiment and further describes the text clustering process of sample processing. The method may be performed by a sample processing apparatus, which may be implemented in software and/or hardware and integrated in a computer device having application development functionality.

As shown in fig. 4, the sample processing method includes the steps of:

s210, obtaining an initial training sample of a preset text matching model, and converting a query text in the initial training sample into a text vector.

In this step, data preprocessing is mainly performed on the initial training samples, converting the query texts into text vectors so that a computer can directly and effectively process the meaning of the text. Specifically, the conversion can be implemented with the word2vec tool, corresponding to the step in fig. 5 of vectorizing the original text into a text vector.
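As a rough illustration of the vectorization step (a toy embedding table stands in for real word2vec output, and the function name is hypothetical), a query text can be mapped to the average of its word vectors:

```python
def text_to_vector(text, embeddings, dim=3):
    """Average the word vectors of the words in `text` (a simple
    word2vec-style bag-of-words representation); unknown words are skipped."""
    vectors = [embeddings[w] for w in text.split() if w in embeddings]
    if not vectors:
        return [0.0] * dim
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# Toy 3-dimensional embedding table standing in for trained word2vec output.
toy_embeddings = {
    "reset": [1.0, 0.0, 0.0],
    "password": [0.0, 1.0, 0.0],
    "change": [1.0, 0.0, 0.2],
}
print(text_to_vector("reset password", toy_embeddings))  # [0.5, 0.5, 0.0]
```

A production system would instead load vectors trained by a word2vec implementation over the query-log corpus; averaging is only one of several ways to pool word vectors into a text vector.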

S220, selecting a preset number of text vectors from the text vectors as a clustering center based on a genetic algorithm, carrying out text vector clustering processing, and finishing the clustering processing when a clustering effect meets a preset condition.

When performing cluster analysis on the query texts, that is, on the corresponding text vectors, whether the cluster centers are chosen reasonably affects the convergence of the cost function in the clustering algorithm. Compared with randomly selecting a certain number of cluster centers for a single cluster analysis, it is desirable to find cluster centers that make the classification more accurate and the clustering effect better before completing the final cluster analysis.

Therefore, in this embodiment, a plurality of different groups of clustering centers are determined through population iteration of a genetic algorithm, and a plurality of clustering processes are performed to determine an optimal clustering center and an optimal clustering effect. Specifically, reference may be made to the text vector clustering process shown in fig. 5:

firstly, randomly selecting k vectors from text vectors subjected to vectorization expression as initial population points of a first generation of a genetic algorithm and initial clustering centers in a clustering algorithm 'k-means algorithm'.

In the genetic algorithm, the number of iterations can be preset. The k randomly selected vectors form the initial population of the first generation; a new population is obtained through population mutation, crossover, cost-function evaluation, and roulette-wheel selection. When a new population no longer reduces the cost function value of the genetic-algorithm model, the optimal population has been obtained, and the initial population points of the next generation can be determined from it.
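One generation of such a genetic algorithm (roulette-wheel selection, one-point crossover, point mutation) might be sketched as below; the individuals, fitness function, and parameters are illustrative stand-ins, not the disclosure's exact design:

```python
import random

def roulette_select(population, fitness, rng):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    pick = rng.uniform(0, total)
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return ind
    return population[-1]

def next_generation(population, fitness_fn, rng, mutation_rate=0.2):
    """One GA step: roulette selection, one-point crossover, point mutation.
    Here an individual is a list of candidate cluster-center indices."""
    fitness = [fitness_fn(ind) for ind in population]
    new_pop = []
    for _ in range(len(population)):
        a = roulette_select(population, fitness, rng)
        b = roulette_select(population, fitness, rng)
        cut = rng.randrange(1, len(a))       # one-point crossover
        child = a[:cut] + b[cut:]
        if rng.random() < mutation_rate:     # point mutation of one gene
            child[rng.randrange(len(child))] = rng.randrange(10)
        new_pop.append(child)
    return new_pop

rng = random.Random(1)
# Stand-in fitness: favour individuals whose indices sum to a large value.
pop = [[rng.randrange(10) for _ in range(4)] for _ in range(6)]
for _ in range(20):
    pop = next_generation(pop, fitness_fn=sum, rng=rng)
print(len(pop), len(pop[0]))  # population size and individual length preserved
```

In the method of this embodiment, the fitness would instead be derived from the clustering cost of the k-means round seeded by that individual's centers, so that fitter populations yield better cluster centers.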

In the clustering algorithm, the 'k-means algorithm', the k randomly selected vectors serve as the initial cluster centers of the first clustering round; thereafter, the initial population points of each generation of the genetic algorithm serve as the initial cluster centers of each subsequent clustering round, so that multiple cluster analyses are performed. In each cluster analysis, the distance from every non-center text vector to each initial cluster center is computed, and each such vector is assigned to the nearest center; the center point of each resulting cluster is then recomputed, and when the center points no longer change, one clustering round is complete. At the end of each round, the sum of distances from each text vector to its cluster center and the distances between the cluster centers are computed as the clustering cost to judge the clustering effect, because a good clustering model requires small intra-cluster distances and large inter-cluster distances. After the query texts are represented as vectors, Euclidean distance is the more suitable measure of clustering effect. In one possible implementation, the cost function of the genetic algorithm is the same as the clustering cost function of the k-means process, using the Euclidean distances of vectors within and between clusters as the numerical criterion.
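The intra-cluster part of the clustering cost described above, the sum of Euclidean distances from each vector to its assigned center, might be computed as follows (toy vectors; the function name is illustrative):

```python
import math

def clustering_cost(vectors, labels, centers):
    """Sum of Euclidean distances from each vector to its assigned center;
    lower values indicate tighter (better) clusters."""
    return sum(math.dist(v, centers[lab]) for v, lab in zip(vectors, labels))

vecs = [(0.0, 0.0), (0.0, 2.0), (10.0, 10.0)]
centers = [(0.0, 1.0), (10.0, 10.0)]
labels = [0, 0, 1]
print(clustering_cost(vecs, labels, centers))  # 1 + 1 + 0 = 2.0
```

A fuller cost in the spirit of the text would also reward large inter-cluster distances, for example by subtracting a term proportional to the pairwise distances between centers.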

With multiple iterations of the genetic algorithm, a corresponding number of k-means cluster analyses are performed, yielding multiple cost-function values. When the cost function of the clustering algorithm and the cost function of the genetic algorithm simultaneously satisfy the convergence condition, the clustering operation and the genetic-algorithm iteration can end, determining the final clustering result. The convergence condition may be that the intra-cluster distance in the clustering result has reached a minimum, or that the combination of intra-cluster and inter-cluster distances has reached an optimal value.

In this embodiment, based on the genetic algorithm and taking the influence between generations into account, different initial population points (i.e., initial cluster centers) are selected iteratively to optimize the clustering effect, so that the text vectors achieve a better semantic classification and the quality of the model training samples is further improved.

And S230, deduplicating and correcting the negative samples in the initial training samples according to the clustering result and the time stamp of each initial training sample to obtain the target model training samples.

According to the technical scheme of this embodiment, the query texts in the initial training samples of a preset text matching model, namely the keywords input into the model, are clustered on the basis of a genetic algorithm; the clustered samples are then deduplicated and their labels corrected according to cluster category and the corresponding sample timestamps, that is, multiple initial training samples generated within a certain time are deduplicated or have their labels corrected, finally yielding target model training samples of higher data quality. This solves the prior-art problem that the collected training samples of a preset text matching model contain mislabeled negative samples and a high repetition rate, resulting in low sample data quality: samples are deduplicated according to query-text similarity and sample timestamps in the initial training samples, improving the quality of the training samples of the preset text matching model.

Furthermore, after the training samples of the preset text matching model have been processed, the optimized samples can be used for model training so that the model learns better, yielding the target text matching model. When the target text matching model is used, the obtained text matching keywords can be input into it as query texts to obtain a target text matching result.

In a specific example, a knowledge question-answering system was tested: with a time window length of 30 s, the text matching effectiveness improved from 81% to 85% after the model training samples were optimized by the sample processing method, improving the query experience of customer service. The following is an embodiment of a sample processing apparatus according to an embodiment of the present invention, which belongs to the same inventive concept as the sample processing methods of the above embodiments and can carry out those methods. For details not described in the apparatus embodiment, reference may be made to the above embodiments of the sample processing method.

EXAMPLE III

Fig. 6 is a schematic structural diagram of a sample processing apparatus according to a third embodiment of the present invention, which is applicable to constructing high-quality training samples for a text matching model or question-answering model. The apparatus can be implemented in software and/or hardware and integrated in a computer device with an application development function.

As shown in fig. 6, the sample processing device includes: a text clustering module 310 and a sample processing module 320.

The text clustering module 310 is configured to obtain an initial training sample of a preset text matching model and perform clustering processing on the query texts in the initial training sample, where a query text is a keyword input into the preset text matching model; and the sample processing module 320 is configured to deduplicate and correct the negative samples in the initial training samples according to the result of the clustering process and the timestamps of the initial training samples to obtain target model training samples.

According to the technical scheme of this embodiment, the query texts in the initial training samples of a preset text matching model, namely the keywords input into the model, are clustered; the clustered samples are then deduplicated and their labels corrected according to cluster category and the corresponding sample timestamps, that is, multiple initial training samples generated within a certain time are deduplicated or have their labels corrected, finally yielding target model training samples of higher data quality. This solves the prior-art problem that the collected training samples of a preset text matching model contain mislabeled negative samples and a high repetition rate, resulting in low sample data quality: samples are deduplicated according to query-text similarity and sample timestamps in the initial training samples, improving the quality of the training samples of the preset text matching model.

Optionally, the text clustering module 310 specifically includes:

the vector conversion sub-module is used for converting the query text into a text vector;

the text clustering submodule is used for selecting a preset number of text vectors from the text vectors as a clustering center based on a genetic algorithm to perform text vector clustering processing; and finishing the clustering processing when the clustering effect meets the preset condition.

Optionally, the text clustering sub-module is specifically configured to:

randomly selecting a preset number of text vectors as initial population points of a first generation of a genetic algorithm and clustering center points in a first clustering, and performing genetic calculation and clustering analysis;

and taking the initial population points of each subsequent iteration in the genetic algorithm as the cluster center points of the corresponding round of cluster analysis.

Optionally, the text clustering sub-module is further configured to:

and when the cost function in the clustering algorithm and the cost function in the genetic algorithm simultaneously meet the convergence condition, ending the clustering operation and the iteration process of the genetic algorithm.

Optionally, the sample processing module 320 is specifically configured to:

for the text vectors belonging to the same class in the clustering result, grouping the initial training samples according to the time stamps of the initial training samples corresponding to the text vectors;

when the initial training samples in the same group comprise both positive samples and negative samples, correcting the negative samples in the group into positive samples, and deduplicating the corrected positive samples into a single positive sample;

and when the initial training samples in the same group are all negative samples, deduplicating the negative samples into a single negative sample.

Optionally, the sample processing module 320 is further configured to:

sorting the initial training samples in chronological order of the time stamps of the initial training samples corresponding to the text vectors;

and taking the sorted initial training samples that fall within the same time window of a preset length as one group of samples.

Optionally, the sample processing device further comprises:

and the model training module is used for performing model training on the preset text matching model through the target model training sample to obtain a target text matching model.

Optionally, the sample processing device further comprises:

the text matching module is used for acquiring text matching keywords; and inputting the text matching keywords into the target text matching model to obtain a target text matching result.

The sample processing device provided by the embodiment of the invention can execute the sample processing method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.

Example four

Fig. 7 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. Fig. 7 illustrates a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in Fig. 7 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present invention. The computer device 12 may be any terminal device with computing capability, such as an intelligent controller, a server, or a mobile phone.

As shown in FIG. 7, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.

Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.

Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.

The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 7, and commonly referred to as a "hard drive"). Although not shown in Fig. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.

A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.

Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be appreciated that although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.

The processing unit 16 executes various functional applications and performs data processing by running the programs stored in the system memory 28, for example implementing the sample processing method provided by this embodiment, the method including:

acquiring an initial training sample of a preset text matching model, and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model;

and according to the clustering result and the time stamp of each initial training sample, carrying out duplicate removal and correction on the negative samples in the initial training samples to obtain the target model training samples.

Example five

This fifth embodiment provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a sample processing method according to any of the embodiments of the present invention, and the method includes:

acquiring an initial training sample of a preset text matching model, and clustering query texts in the initial training sample, wherein the query texts are keywords input into the preset text matching model;

and according to the clustering result and the time stamp of each initial training sample, carrying out duplicate removal and correction on the negative samples in the initial training samples to obtain the target model training samples.

Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

It will be understood by those skilled in the art that the modules or steps of the invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and optionally they may be implemented by program code executable by a computing device, such that it may be stored in a memory device and executed by a computing device, or it may be separately fabricated into various integrated circuit modules, or it may be fabricated by fabricating a plurality of modules or steps thereof into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.

It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
