Semantic matching method and device, electronic equipment and storage medium

Document No.: 8295    Publication date: 2021-09-17

1. A semantic matching method, the method comprising:

acquiring a first text and at least two second texts to be matched with the first text;

based on text semantics, the first text is hit to each second text to obtain a first hit result;

based on text semantics, respectively hitting the second texts to the first text to obtain second hit results;

determining a second text matched with the first text based on the first hit result and the second hit result.

2. The method of claim 1, further comprising:

acquiring the similarity between the first text and each second text;

establishing a similarity matrix taking the similarity as a matrix element;

and on the basis of the conversion processing of the similarity matrix, the first text is hit to each second text, and simultaneously the second texts are hit to the first text respectively, so that the first hit result and the second hit result are obtained.

3. The method of claim 2, further comprising:

pooling the similarity matrix by adopting a preset Gaussian kernel to obtain a pooled matrix;

and obtaining the first hit result and the second hit result based on a column vector set and a row vector set of the pooled matrix.

4. The method of claim 1, further comprising:

performing word segmentation on the first text and the target second text to obtain a first keyword contained in the first text and a second keyword contained in the target second text;

determining an intersection of the first keywords and the second keywords;

dividing the number of the keywords in the intersection by the number of the first keywords to obtain a first hit result;

and dividing the number of the keywords in the intersection by the number of the second keywords to obtain a second hit result.

5. The method of claim 1, further comprising:

inputting the first text and the second text into a preset machine learning model;

the first text is hit to each second text through the machine learning model, and a first hit result is obtained;

the second texts are hit to the first text through the machine learning model respectively to obtain second hit results;

determining, by the machine learning model, a second text that matches the first text based on the first hit result and the second hit result.

6. The method of claim 5, wherein the machine learning model comprises an encoder layer, a hit processing layer, and a fully connected layer, wherein,

the encoder layer is configured to: coding the first text and the second text which are used as input to obtain a first code corresponding to the first text and a second code corresponding to the second text respectively;

the hit processing layer is configured to:

hitting the first code to each second code to obtain a first hit result;

hitting the second codes to the first code respectively to obtain second hit results;

the fully-connected layer is configured to: determining a second text matched with the first text based on the first hit result and the second hit result.

7. The method of claim 6, wherein BERT is used as the encoder layer.

8. A semantic matching apparatus, the apparatus comprising:

the acquisition module is configured to acquire a first text and at least two second texts to be matched with the first text;

the first hit module is configured to hit the first text to each second text based on text semantics to obtain a first hit result;

the second hit module is configured to hit the second texts to the first text respectively based on text semantics to obtain second hit results;

a matching module configured to determine a second text matched with the first text based on the first hit result and the second hit result.

9. An electronic device, comprising:

a memory storing computer readable instructions;

a processor reading computer readable instructions stored by the memory to perform the method of any of claims 1-7.

10. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-7.

Background

With the development of internet technology, semantic matching is involved in a wide variety of internet applications. The accuracy of semantic matching directly or indirectly determines the quality of service provided by an application. In the prior art, the relevance between texts cannot be expressed in fine detail during semantic matching, so the accuracy of semantic matching is low.

Disclosure of Invention

An object of the present application is to provide a semantic matching method, apparatus, electronic device, and storage medium, which can improve the accuracy of semantic matching.

According to an aspect of the embodiments of the present application, a semantic matching method is disclosed, the method comprising:

acquiring a first text and at least two second texts to be matched with the first text;

based on text semantics, the first text is hit to each second text to obtain a first hit result;

based on text semantics, respectively hitting the second texts to the first text to obtain second hit results;

determining a second text matched with the first text based on the first hit result and the second hit result.

According to an aspect of the embodiments of the present application, a semantic matching apparatus is disclosed, the apparatus including:

the acquisition module is configured to acquire a first text and at least two second texts to be matched with the first text;

the first hit module is configured to hit the first text to each second text based on text semantics to obtain a first hit result;

the second hit module is configured to hit the second texts to the first text respectively based on text semantics to obtain second hit results;

a matching module configured to determine a second text matched with the first text based on the first hit result and the second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to:

acquiring the similarity between the first text and each second text;

establishing a similarity matrix taking the similarity as a matrix element;

and on the basis of the conversion processing of the similarity matrix, the first text is hit to each second text, and simultaneously the second texts are hit to the first text respectively, so that the first hit result and the second hit result are obtained.

In an exemplary embodiment of the present application, the apparatus is configured to:

pooling the similarity matrix by adopting a preset Gaussian kernel to obtain a pooled matrix;

and obtaining the first hit result and the second hit result based on a column vector set and a row vector set of the pooled matrix.

In an exemplary embodiment of the present application, the apparatus is configured to:

performing word segmentation on the first text and the target second text to obtain a first keyword contained in the first text and a second keyword contained in the target second text;

determining an intersection of the first keywords and the second keywords;

dividing the number of the keywords in the intersection by the number of the first keywords to obtain a first hit result;

and dividing the number of the keywords in the intersection by the number of the second keywords to obtain a second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to:

inputting the first text and the second text into a preset machine learning model;

the first text is hit to each second text through the machine learning model, and a first hit result is obtained;

the second texts are hit to the first text through the machine learning model respectively to obtain second hit results;

determining, by the machine learning model, a second text that matches the first text based on the first hit result and the second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to:

coding the first text and the second text which are used as input to obtain a first code corresponding to the first text and a second code corresponding to the second text respectively;

hitting the first code to each second code to obtain a first hit result;

hitting the second codes to the first code respectively to obtain second hit results;

determining a second text matched with the first text based on the first hit result and the second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to: BERT is used as the encoder layer.

According to an aspect of an embodiment of the present application, an electronic device is disclosed, including: a memory storing computer readable instructions; a processor reading computer readable instructions stored by the memory to perform the method of any of the preceding claims.

According to an aspect of the embodiments of the present application, a computer-readable storage medium is disclosed, having computer-readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform the method of any of the preceding claims.

According to an aspect of embodiments herein, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above.

In the embodiments of the present application, for a first text to be matched and the second texts to be matched against it, the second text that matches the first text is determined through a mutual, bidirectional hit process between the first text and the second texts. In this way, the matching bias caused by unidirectional semantic matching is avoided and the expression of relevance between texts is enriched, thereby improving the accuracy of semantic matching.

Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The above and other objects, features and advantages of the present application will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.

FIG. 1 illustrates a system architecture diagram for semantic matching according to one embodiment of the present application.

FIG. 2 shows a flow diagram of a semantic matching method according to an embodiment of the present application.

FIG. 3 shows a machine learning model structure diagram according to an embodiment of the present application.

Fig. 4 shows a block diagram of a semantic matching apparatus according to an embodiment of the present application.

FIG. 5 shows a hardware diagram of an electronic device according to an embodiment of the application.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the present application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, steps, and so forth. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.

Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

The application provides a semantic matching method which is mainly applied to natural language processing in the field of artificial intelligence. For example: the semantic matching method provided by the application is applied to a search engine related to natural language processing, so that the search engine can provide search service for the terminal according to the semantic matching method.

Artificial Intelligence (AI) refers to theories, methods, technologies, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision making.

Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.

Of primary relevance to the present application, Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science, and mathematics. Research in this field therefore involves natural language, that is, the language people use every day, and is closely related to the study of linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graphs, and the like.

Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specifically studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.

With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.

Before the detailed description of the embodiments of the present application, a brief explanation of some concepts related to the present application will be provided.

The first text refers to the text to be matched; in natural language processing applications it is generally the text entered by the user. For example: a user inputs the text "orange" in a search box of a terminal, and this text "orange" is the first text; or the user says "shaddock" to the voice assistant of the terminal, and the text "shaddock" obtained after converting the voice into text is the first text.

The second text refers to a text that is matched against the first text to determine whether it matches the first text. The number of second texts is greater than or equal to two. For example: after obtaining the first text "orange" input by the user, the search engine matches the two second texts "types of oranges" and "which fruit is most worth eating in autumn: apple, pomegranate, or orange" against the first text "orange" respectively, to determine which of the two second texts matches the first text "orange".

The first keyword refers to a keyword included in the first text.

The second keyword refers to a keyword included in the second text.

FIG. 1 illustrates a system architecture diagram for semantic matching according to an embodiment of the present application.

Referring to fig. 1, in this embodiment, the semantic matching is mainly performed with respect to the server 10 and the terminal 20. The server 10 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal 20 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.

The terminal 20, as the collection end for the first text, collects original information containing the first text through interaction with the outside and uploads the original information to the server 10. After extracting the first text from the original information, the server 10 hits the first text to each preset second text to obtain a first hit result, hits each second text to the first text to obtain a second hit result, and determines the second text matched with the first text based on the first hit result and the second hit result.

The original information collected by the terminal 20 may be text information entered from the outside by tapping or other interaction, or voice information entered from the outside by speech.

It should be noted that the embodiment is only an exemplary illustration of the system architecture that can be adopted by the present application, and should not limit the function and scope of the embodiment of the present application.

It should be noted that, in order to conveniently show a specific implementation process of the embodiments in practical applications, the subsequent embodiments are described in detail for the application scenario in which "a search engine provides a search service to a terminal according to the semantic matching method provided by the present application". This does not mean that the present application is applicable only to this application scenario.

FIG. 2 shows a flow diagram of a semantic matching method according to an embodiment of the present application.

Referring to fig. 2, a semantic matching method according to an embodiment of the present application includes:

step S310, acquiring a first text and at least two second texts to be matched with the first text;

step S320, based on the text semantics, the first text is hit to each second text to obtain a first hit result;

step S330, based on the text semantics, the second text is hit to the first text respectively to obtain a second hit result;

step S340, determining a second text matched with the first text based on the first hit result and the second hit result.

In the embodiments of the present application, for a first text to be matched and the second texts to be matched against it, the second text that matches the first text is determined through a mutual, bidirectional hit process between the first text and the second texts. In this way, the matching bias caused by unidirectional semantic matching is avoided and the expression of relevance between texts is enriched, thereby improving the accuracy of semantic matching.

In an embodiment, after the target voice to be matched is acquired, the first text is obtained by converting the target voice into a text.

In this embodiment, after obtaining the target speech from the terminal, the search engine converts the target speech into a text through a speech-to-text component (e.g., a pre-trained language model component), so as to obtain a first text.

In one embodiment, the hit processing is performed by way of keyword hits.

In this embodiment, the search engine performs word segmentation on the first text and each second text, to obtain keywords included in each text. There may be a plurality of keywords included in the text.

For the target second text currently undergoing hit processing, the search engine determines the intersection of the first keywords contained in the first text and the second keywords contained in the target second text. The first text is then hit to the target second text by dividing the number of keywords in the intersection by the number of first keywords, and the resulting value is the first hit result. The target second text is hit to the first text by dividing the number of keywords in the intersection by the number of second keywords, and the resulting value is the second hit result.

The hit results obtained in this way lie between 0 and 1 and reflect the degree of the hit.

For example: and the two second texts for matching with the first text are respectively marked as a second text A and a second text B. The first keywords included in the first text include "orange" and "nutrition", the second keywords included in the second text a include "orange" and "place of origin", and the second keywords included in the second text B include "orange", "grapefruit" and "autumn".

For second text A, the number of second keywords is 2, and the intersection of the second keywords and the first keywords contains only "orange". Thus, hitting the first text to second text A yields a first hit result of 1/2 = 0.50, and hitting second text A to the first text yields a second hit result of 1/2 = 0.50.

For second text B, the number of second keywords is 3, and the intersection of the second keywords and the first keywords again contains only "orange". Thus, hitting the first text to second text B yields a first hit result of 1/2 = 0.50, and hitting second text B to the first text yields a second hit result of 1/3 ≈ 0.33.

The average of the two hit results may be used as the matching degree between the corresponding second text and the first text. The matching degree between second text A and the first text is then (0.50 + 0.50)/2 = 0.50, and the matching degree between second text B and the first text is (0.50 + 0.33)/2 = 0.415. It follows that second text A is a better match for the first text than second text B. If the second text with the highest matching degree is taken as the second text matched with the first text, second text A is determined to be the second text matched with the first text.
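A minimal Python sketch of this keyword-based bidirectional hit computation (the function and variable names are illustrative and not taken from the original):

```python
def keyword_hit_results(first_keywords, second_keywords):
    """Compute the bidirectional keyword hit results for one text pair.

    first_keywords / second_keywords: keywords obtained by word segmentation
    of the first text and of a target second text.
    Returns (first_hit, second_hit), both between 0 and 1."""
    shared = set(first_keywords) & set(second_keywords)   # keywords common to both texts
    first_hit = len(shared) / len(set(first_keywords))    # first text hit to the second text
    second_hit = len(shared) / len(set(second_keywords))  # second text hit to the first text
    return first_hit, second_hit


# Reproducing the worked example above:
first = ["orange", "nutrition"]
second_a = ["orange", "place of origin"]
second_b = ["orange", "grapefruit", "autumn"]

print(keyword_hit_results(first, second_a))  # (0.5, 0.5)    -> matching degree 0.50
print(keyword_hit_results(first, second_b))  # (0.5, 0.333)  -> matching degree ~0.415
```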

This embodiment has the advantage that bi-directional hits between texts are achieved quickly and efficiently by hit processing of keywords in the lexical dimension.

In one embodiment, the similarity matrix is used to simultaneously realize the two-way hit processing between the first text and the second text.

In this embodiment, the search engine obtains the similarity between the first text and each second text. The similarity may be the semantic similarity between the first text and the corresponding second text at the full-text level, or the semantic similarity between the keywords contained in the first text and the keywords contained in the corresponding second text at the level of the words composing the texts.

And then establishing a similarity matrix taking the similarity as a matrix element. When the row of the similarity matrix represents a first text, the column of the similarity matrix represents a second text; when the row of the similarity matrix represents the second text, the column of the similarity matrix represents the first text.

Matrix transformation is then performed on the similarity matrix, hitting the first text to each second text while simultaneously hitting each second text to the first text, so as to obtain the first hit result and the second hit result.

In one embodiment, the two second texts for matching with the first text are denoted second text A and second text B, respectively. The first text contains two keywords, denoted q1 and q2; the second text A contains three keywords, denoted d1, d2, and d3; the second text B contains two keywords, denoted d4 and d5.

Each keyword is vectorized to obtain a corresponding word vector, and the vector distance between word vectors is used as the similarity between keywords. The obtained similarities are then used as matrix elements to create the similarity matrices MA and MB, as sketched and shown below.
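A minimal sketch of this construction step; cosine similarity of the word vectors is an assumption, since the embodiment only calls for some vector-distance-based similarity, and the word vectors may come from any embedding model:

```python
import numpy as np

def similarity_matrix(first_vectors, second_vectors):
    """Build a similarity matrix whose element (i, j) is the similarity between
    the i-th keyword vector of the first text (q1, q2, ...) and the j-th keyword
    vector of a second text (d1, d2, ...). Cosine similarity is used here."""
    q = np.asarray(first_vectors, dtype=float)    # shape (m, dim)
    d = np.asarray(second_vectors, dtype=float)   # shape (n, dim)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return q @ d.T                                # shape (m, n)

# MA would be similarity_matrix([q1_vec, q2_vec], [d1_vec, d2_vec, d3_vec]),
# MB would be similarity_matrix([q1_vec, q2_vec], [d4_vec, d5_vec]),
# where each *_vec is the word vector of the corresponding keyword.
```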

In the similarity matrix MA, the first row and the second row represent q1 and q2 of the first text, respectively; the first, second, and third columns represent d1, d2, and d3 of the second text A, respectively.

In the similarity matrix MB, the first row and the second row represent q1 and q2 of the first text, respectively; the first and second columns represent d4 and d5 of the second text B, respectively.

Here, q1d1 denotes the similarity between q1 and d1, q1d2 denotes the similarity between q1 and d2, and the meanings of the other matrix elements follow analogously.
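The matrix displays themselves do not survive in this text; reconstructed from the row, column, and element descriptions above, the similarity matrices would take the form:

```latex
MA =
\begin{pmatrix}
q_1 d_1 & q_1 d_2 & q_1 d_3 \\
q_2 d_1 & q_2 d_2 & q_2 d_3
\end{pmatrix},
\qquad
MB =
\begin{pmatrix}
q_1 d_4 & q_1 d_5 \\
q_2 d_4 & q_2 d_5
\end{pmatrix}
```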

The following reference matrices MA0 and MB0 are established for MA and MB, respectively.
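These reference matrices are likewise not shown in the surviving text. Judging from the merged row vector (2, 2, 2) and the merged column vector quoted further below, they appear to be all-ones matrices of the same shapes as MA and MB (an inference, not an explicit statement of the original):

```latex
MA0 =
\begin{pmatrix}
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix},
\qquad
MB0 =
\begin{pmatrix}
1 & 1 \\
1 & 1
\end{pmatrix}
```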

Furthermore, by transforming the similarity matrix MA based on the reference matrix MA0, the second text A is hit to the first text while the first text is hit to the second text A. Similarly, the similarity matrix MB is transformed based on the reference matrix MB0, so that the second text B is hit to the first text while the first text is hit to the second text B.

The bidirectional hit processing can be realized by determining the similarity between the similarity matrix and the corresponding reference matrix. Specifically, the similarity between the similarity matrix MA and the reference matrix MA0 is determined, so that bidirectional hit processing for the second text A is realized; by determining the similarity between the similarity matrix MB and the reference matrix MB0, bidirectional hit processing for the second text B is achieved.

The bidirectional hit processing can also be realized by merging the row vectors and the column vectors of the similarity matrix and then matching them against the merged row vectors and merged column vectors of the corresponding reference matrix. Specifically, for the similarity matrix MA, the row vectors (q1d1, q1d2, q1d3) and (q2d1, q2d2, q2d3) are merged to obtain the merged row vector (q1d1 + q2d1, q1d2 + q2d2, q1d3 + q2d3) of MA, and the column vectors (q1d1, q2d1), (q1d2, q2d2), and (q1d3, q2d3) are merged to obtain the merged column vector (q1d1 + q1d2 + q1d3, q2d1 + q2d2 + q2d3) of MA. By matching the merged row vector of MA with the merged row vector (2, 2, 2) of the reference matrix MA0, and matching the merged column vector of MA with the merged column vector (3, 3) of the reference matrix MA0, bidirectional hit processing for the second text A is realized. The bidirectional hit processing for the second text B is carried out in the same way and is not described again here.
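A minimal sketch of this merging step; the illustrative similarity values and the cosine comparison against the reference vectors are assumptions, since the original only requires that the merged vectors be matched against those of the reference matrix:

```python
import numpy as np

def merged_vectors(sim_matrix):
    """Merge the row vectors and the column vectors of a similarity matrix.

    Summing the row vectors element-wise gives the merged row vector,
    e.g. (q1d1 + q2d1, q1d2 + q2d2, q1d3 + q2d3) for MA; summing the column
    vectors gives the merged column vector,
    e.g. (q1d1 + q1d2 + q1d3, q2d1 + q2d2 + q2d3)."""
    merged_row = sim_matrix.sum(axis=0)   # one entry per second-text keyword
    merged_col = sim_matrix.sum(axis=1)   # one entry per first-text keyword
    return merged_row, merged_col

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative similarity values for MA (2 first-text x 3 second-text keywords).
MA = np.array([[0.9, 0.2, 0.1],
               [0.3, 0.8, 0.4]])
MA0 = np.ones_like(MA)                    # all-ones reference matrix

row_a, col_a = merged_vectors(MA)
row_0, col_0 = merged_vectors(MA0)        # (2, 2, 2) and (3, 3)

# One simple way to realize the bidirectional hit: compare the merged
# vectors of MA with the corresponding merged vectors of MA0.
first_hit = cosine(col_a, col_0)          # first text hit to second text A
second_hit = cosine(row_a, row_0)         # second text A hit to the first text
```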

The embodiment has the advantage that semantic information is enriched by realizing bidirectional hit between texts on the matrix dimension, thereby improving the accuracy of semantic matching.

In one embodiment, the similarity matrix is pooled to simultaneously achieve two-way hit processing between the first text and the second text.

In this embodiment, the search engine pools the similarity matrix using a preset Gaussian kernel to obtain a pooled matrix. Compared with the similarity matrix before pooling, the pooled matrix enlarges the receptive field and reduces the optimization difficulty.

When the row of the matrix after pooling represents a first text, obtaining a first hit result based on the row vector set of the matrix after pooling, and obtaining a second hit result based on the column vector set of the matrix after pooling; and when the row of the matrix after pooling represents a second text, obtaining a first hit result based on the column vector set of the matrix after pooling, and obtaining a second hit result based on the row vector set of the matrix after pooling.

In an embodiment, a plurality of Gaussian kernels are used to pool the similarity matrix respectively, so as to obtain a first hit result and a second hit result at a plurality of scales.
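A minimal sketch of such multi-kernel Gaussian pooling over a similarity matrix; the K-NRM-style radial-basis kernel, the kernel parameters, and the summation used to aggregate the row- and column-vector sets are all assumptions, since the text only specifies a "preset Gaussian kernel":

```python
import numpy as np

def gaussian_kernel_pooling(sim_matrix, mus=(0.1, 0.3, 0.5, 0.7, 0.9), sigma=0.1):
    """Pool a similarity matrix with several Gaussian kernels.

    Each kernel (mu, sigma) yields one pooled matrix whose elements measure how
    strongly the corresponding similarity activates that kernel; using several
    kernels gives hit features at several scales."""
    s = np.asarray(sim_matrix, dtype=float)
    return [np.exp(-((s - mu) ** 2) / (2 * sigma ** 2)) for mu in mus]

def hit_results(pooled):
    """With rows representing the first text, the row-vector set yields the first
    hit result and the column-vector set yields the second hit result
    (summation is one possible aggregation)."""
    first_hit = pooled.sum(axis=1)    # one value per first-text keyword
    second_hit = pooled.sum(axis=0)   # one value per second-text keyword
    return first_hit, second_hit

sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.8, 0.4]])     # illustrative similarity matrix
for pooled in gaussian_kernel_pooling(sim):
    first_hit, second_hit = hit_results(pooled)
```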

The embodiment has the advantages that through hit processing under multiple scales, semantic information is enriched, and therefore the accuracy of semantic matching is improved.

In one embodiment, the semantic matching method provided by the present application is implemented by a machine learning model.

In this embodiment, the machine learning model is obtained by pre-training. The machine learning model takes the first text and each second text as input; through processing with the model parameters obtained by pre-training, the first text is hit to each second text to obtain a first hit result and each second text is hit to the first text to obtain a second hit result, and the second text matched with the first text is then determined based on the first hit result and the second hit result.

The embodiment has the advantages that the semantic matching method provided by the application is realized through the machine learning model, and the processing integration level and modularization of semantic matching are improved.

In one embodiment, a machine learning model for implementing the semantic matching method proposed in the present application includes an encoder layer, a hit processing layer, and a full connection layer.

In this embodiment, the encoder layer is configured to encode the text input to the model, to obtain a first code corresponding to the first text and a second code corresponding to the second text.

The first code and the second codes output by the encoder layer are input to the hit processing layer, which realizes bidirectional hit processing between the first text and the second texts in parallel: the first code is hit to each second code to obtain a first hit result, and each second code is hit to the first code to obtain a second hit result.

The first hit result and the second hit result output by the hit processing layer are input into the fully connected layer, which determines the second text matched with the first text based on them. Specifically, the fully connected layer classifies the first hit result and the second hit result through a classifier (for example, a softmax function), outputs the probability that each second text matches the first text, and determines the second text matched with the first text according to these probabilities.
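A minimal PyTorch sketch of such a scoring head; the exact feature layout (here the first and second hit results of each candidate concatenated into one vector) is an illustrative assumption:

```python
import torch
import torch.nn as nn

class MatchingHead(nn.Module):
    """Fully connected layer that scores each candidate second text from its
    bidirectional hit features and normalises the scores with softmax."""

    def __init__(self, feature_dim):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 1)

    def forward(self, hit_features):
        # hit_features: (num_second_texts, feature_dim), the concatenated
        # first and second hit results of each candidate second text.
        scores = self.fc(hit_features).squeeze(-1)   # (num_second_texts,)
        return torch.softmax(scores, dim=-1)         # matching probability per candidate

# The candidate with the highest output probability is taken as the second
# text matched with the first text.
```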

In one embodiment, BERT is employed as the encoder layer in the machine learning model proposed in the present application.

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

In one embodiment, a neural network that realizes the matrix transformation of the similarity matrix is used as the hit processing layer in the machine learning model proposed in the present application.

FIG. 3 shows a machine learning model structure diagram according to an embodiment of the present application.

Referring to fig. 3, in this embodiment, the machine learning model used to implement semantic matching includes an encoder layer composed of BERT, a hit processing layer composed of a neural network that realizes the matrix transformation of the similarity matrix, and a fully connected layer that outputs probabilities through a classifier.

In this embodiment, the first text Query (q1 to qm in the figure) and the second text Doc (d1 to dn in the figure) that are input to BERT are separated by the separator SEP. By encoding the input text, BERT outputs a token sequence for each word in the text.

The hit processing layer averages each token sequence to obtain a token embedding for each word, establishes a similarity matrix on this basis, and performs pooling according to a preset Gaussian kernel.

The row and column vectors obtained after pooling are input to the fully connected layer. Under the processing of the classifier, the layer outputs the probability that each second text Doc matches the first text Query, and the second text Doc matched with the first text Query is determined on this basis.
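A condensed PyTorch/Transformers sketch of this pipeline under several stated assumptions: token-level embeddings are used directly instead of word-averaged token sequences, cosine similarity builds the similarity matrix, K-NRM-style Gaussian kernels with illustrative parameters perform the pooling, mean/sum are the aggregations, and the model name is a placeholder:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BidirectionalHitMatcher(nn.Module):
    """Sketch of the FIG. 3 pipeline: a BERT encoder, similarity-matrix hit
    processing with Gaussian kernel pooling, and a fully connected scoring layer."""

    def __init__(self, model_name="bert-base-chinese",
                 mus=(0.1, 0.3, 0.5, 0.7, 0.9), sigma=0.1):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.register_buffer("mus", torch.tensor(mus))
        self.sigma = sigma
        self.fc = nn.Linear(2 * len(mus), 1)

    def forward(self, query, docs):
        scores = []
        for doc in docs:
            # Query and Doc are encoded as one sequence separated by [SEP].
            enc = self.tokenizer(query, doc, return_tensors="pt", truncation=True)
            hidden = self.encoder(**enc).last_hidden_state[0]     # (seq_len, dim)
            seg = enc["token_type_ids"][0]                        # 0 = Query side, 1 = Doc side
            q_emb = hidden[seg == 0]                              # includes [CLS]/[SEP] for simplicity
            d_emb = hidden[seg == 1]
            # Similarity matrix between Query tokens and Doc tokens (cosine).
            sim = nn.functional.normalize(q_emb, dim=-1) @ nn.functional.normalize(d_emb, dim=-1).T
            # Gaussian kernel pooling in both directions.
            k = torch.exp(-((sim.unsqueeze(-1) - self.mus) ** 2) / (2 * self.sigma ** 2))
            first_hit = k.sum(dim=1).mean(dim=0)                  # Query hit to Doc, one value per kernel
            second_hit = k.sum(dim=0).mean(dim=0)                 # Doc hit to Query, one value per kernel
            scores.append(self.fc(torch.cat([first_hit, second_hit])))
        # Softmax over the candidate second texts: probability that each Doc matches the Query.
        return torch.softmax(torch.cat(scores), dim=0)
```

Calling the model with a first text and a list of candidate second texts returns one matching probability per candidate; the candidate with the highest probability is taken as the matched second text.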

It should be noted that the embodiment is only an exemplary illustration, and should not limit the function and the scope of the disclosure.

Fig. 4 shows a semantic matching apparatus according to an embodiment of the present application, the apparatus comprising:

an obtaining module 410 configured to obtain a first text and at least two second texts to be matched with the first text;

a first hit module 420, configured to hit the first text to each of the second texts based on text semantics to obtain a first hit result;

a second hit module 430, configured to hit the second text to the first text, respectively, based on text semantics, to obtain a second hit result;

a matching module 440 configured to determine a second text that matches the first text based on the first hit result and the second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to:

acquiring the similarity between the first text and each second text;

establishing a similarity matrix taking the similarity as a matrix element;

and on the basis of the conversion processing of the similarity matrix, the first text is hit to each second text, and simultaneously the second texts are hit to the first text respectively, so that the first hit result and the second hit result are obtained.

In an exemplary embodiment of the present application, the apparatus is configured to:

pooling the similarity matrix by adopting a preset Gaussian kernel to obtain a pooled matrix;

and obtaining the first hit result and the second hit result based on a column vector set and a row vector set of the pooled matrix.

In an exemplary embodiment of the present application, the apparatus is configured to:

performing word segmentation on the first text and the target second text to obtain a first keyword contained in the first text and a second keyword contained in the target second text;

determining an intersection of the first keywords and the second keywords;

dividing the number of the keywords in the intersection by the number of the first keywords to obtain a first hit result;

and dividing the number of the keywords in the intersection by the number of the second keywords to obtain a second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to:

inputting the first text and the second text into a preset machine learning model;

the first text is hit to each second text through the machine learning model, and a first hit result is obtained;

the second texts are hit to the first text through the machine learning model respectively to obtain second hit results;

determining, by the machine learning model, a second text that matches the first text based on the first hit result and the second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to:

coding the first text and the second text which are used as input to obtain a first code corresponding to the first text and a second code corresponding to the second text respectively;

hitting the first code to each second code to obtain a first hit result;

hitting the second codes to the first code respectively to obtain second hit results;

determining a second text matched with the first text based on the first hit result and the second hit result.

In an exemplary embodiment of the present application, the apparatus is configured to: BERT is used as the encoder layer.

An electronic device 50 according to an embodiment of the present application is described below with reference to fig. 5. The electronic device 50 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.

As shown in fig. 5, electronic device 50 is embodied in the form of a general purpose computing device. The components of the electronic device 50 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 that couples various system components including the memory unit 520 and the processing unit 510.

Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present invention as described in the description part of the above exemplary methods of the present specification. For example, the processing unit 510 may perform the various steps as shown in fig. 2.

The memory unit 520 may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM)5201 and/or a cache memory unit 5202, and may further include a read only memory unit (ROM) 5203.

Storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.

Bus 530 may be one or more of any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.

The electronic device 50 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 50, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. An input/output (I/O) interface 550 is connected to the display unit 540. Also, the electronic device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 50 over the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present application.

In an exemplary embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor of a computer, cause the computer to perform the method described in the above method embodiment section.

According to an embodiment of the present application, there is also provided a program product for implementing the method in the above method embodiment, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).

It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.

Moreover, although the steps of the methods herein are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
