Named entity recognition method and apparatus, storage medium, and electronic device


1. A named entity recognition method, comprising:

acquiring a text to be recognized;

performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain a feature vector of the text to be recognized;

and obtaining a named entity recognition result based on the feature vector.

2. The named entity recognition method according to claim 1, wherein the performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain the feature vector of the text to be recognized comprises:

performing initial semantic representation on the text to be recognized by using a preset BERT semantic representation layer to obtain character vectors and word vectors of the text to be recognized;

and inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized.

3. The named entity recognition method according to claim 2, wherein the performing initial semantic representation on the text to be recognized by using a preset BERT semantic representation layer to obtain character vectors and word vectors of the text to be recognized comprises:

extracting characters and words in the text to be recognized;

and obtaining character vectors and word vectors of the text to be recognized by applying a pre-trained BERT model to the extracted characters and words.

4. The named entity recognition method according to claim 3, wherein the extracting characters and words in the text to be recognized comprises:

segmenting the text to be recognized character by character by using the BIO labeling scheme, and extracting the characters in the text to be recognized;

and segmenting the text to be recognized into words by using the jieba word segmentation tool and a pre-constructed dictionary, and extracting the words in the text to be recognized.

5. The named entity recognition method according to claim 2, wherein the inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized comprises:

training the character vectors and word vectors of the text to be recognized by adopting a BiLSTM model to obtain a feature vector fusing context information;

wherein the LSTM structure in the BiLSTM model is a Lattice LSTM structure, the Lattice LSTM structure comprises at least a word-level gate unit, and the words obtained through different paths are sent to their corresponding characters through the gate unit.

6. The named entity recognition method according to claim 1, wherein the obtaining a named entity recognition result based on the feature vector comprises:

inputting the feature vector into a preset attention mechanism layer to obtain a corresponding attention weight vector;

and optimizing the feature vector based on a preset conditional random field model and the attention weight vector to obtain a named entity recognition result.

7. A named entity recognition apparatus, comprising:

the acquisition module is used for acquiring a text to be recognized;

the training module is used for performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain a feature vector of the text to be recognized;

and the optimization module is used for obtaining a named entity recognition result based on the feature vector.

8. The named entity recognition apparatus according to claim 7, wherein the training module comprises:

the first representation module is used for performing initial semantic representation on the text to be recognized by utilizing a preset BERT semantic representation layer to obtain character vectors and word vectors of the text to be recognized;

and the second representation module is used for inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized.

9. A storage medium having stored thereon a computer program which, when executed by one or more processors, implements the method of any one of claims 1 to 6.

10. An electronic device, comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, implements the method of any of claims 1-6.

Background

Named Entity Recognition (NER) is a common task in natural language processing and, with the development of artificial intelligence, is widely applied in daily life. For example, when important information that a user needs must be obtained from a piece of news or other text, named entity recognition technology can be used to obtain it; the technology helps to quickly retrieve the needed key information from the text.

Named entity recognition technology can automatically recognize entity information such as person names, organization names, place names, and times. It is very important for acquiring textual semantic knowledge, is the basis of technologies such as event and relation extraction, and plays a significant role in unstructured information extraction.

At present, Chinese named entity recognition performs worse than English named entity recognition, and the recognized Chinese named entities are often inaccurate. Therefore, there is a need in the art for a technical solution that effectively improves Chinese named entity recognition.

Disclosure of Invention

In order to solve the problem of low recognition accuracy for Chinese named entities, the present invention provides a named entity recognition method and apparatus, a storage medium, and an electronic device.

In a first aspect, an embodiment of the present invention provides a named entity recognition method, including:

acquiring a text to be recognized;

performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain a feature vector of the text to be recognized;

and obtaining a named entity recognition result based on the feature vector.

In some embodiments, the performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain the feature vector of the text to be recognized includes:

performing initial semantic representation on the text to be recognized by using a preset BERT semantic representation layer to obtain character vectors and word vectors of the text to be recognized;

and inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized.

In some embodiments, the performing initial semantic representation on the text to be recognized by using a preset BERT semantic representation layer to obtain character vectors and word vectors of the text to be recognized includes:

extracting characters and words in the text to be recognized;

and obtaining character vectors and word vectors of the text to be recognized by applying a pre-trained BERT model to the extracted characters and words.

In some embodiments, the extracting characters and words in the text to be recognized includes:

segmenting the text to be recognized character by character by using the BIO labeling scheme, and extracting the characters in the text to be recognized;

and segmenting the text to be recognized into words by using the jieba word segmentation tool and a pre-constructed dictionary, and extracting the words in the text to be recognized.

In some embodiments, the inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized includes:

training the character vectors and word vectors of the text to be recognized by adopting a BiLSTM model to obtain a feature vector fusing context information;

wherein the LSTM structure in the BiLSTM model is a Lattice LSTM structure, the Lattice LSTM structure includes at least a word-level gate unit, and the words obtained through different paths are sent to their corresponding characters through the gate unit.

In some embodiments, the obtaining a named entity recognition result based on the feature vector includes:

inputting the feature vector into a preset attention mechanism layer to obtain a corresponding attention weight vector;

and optimizing the feature vector based on a preset conditional random field model and the attention weight vector to obtain a named entity recognition result.

In a second aspect, an embodiment of the present invention provides a named entity recognition apparatus, including:

the acquisition module is used for acquiring a text to be recognized;

the training module is used for performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain a feature vector of the text to be recognized;

and the optimization module is used for obtaining a named entity recognition result based on the feature vector.

In some embodiments, the training module comprises:

the first representation module is used for performing initial semantic representation on the text to be recognized by utilizing a preset BERT semantic representation layer to obtain character vectors and word vectors of the text to be recognized;

and the second representation module is used for inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized.

In a third aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, and when the computer program is executed by one or more processors, the method according to the first aspect is implemented.

In a fourth aspect, an embodiment of the present invention provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, implements the method according to the first aspect.

Compared with the prior art, one or more embodiments of the invention can bring about at least the following beneficial effects:

the invention provides a named entity recognition method and apparatus, a storage medium, and an electronic device. By performing bidirectional semantic representation based on both character vectors and word vectors, the method recognizes Chinese named entities accurately, achieves a good recognition effect, and effectively improves Chinese named entity recognition.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.

Fig. 1 is a flowchart of a named entity recognition method according to an embodiment of the present invention;

FIG. 2 is a flowchart of another named entity recognition method according to an embodiment of the present invention;

FIG. 3 is a flowchart of another named entity recognition method provided by embodiments of the present invention;

FIG. 4 is a flowchart of another named entity recognition method provided by embodiments of the present invention;

FIG. 5 is a schematic diagram of an implementation process of an application example provided by an embodiment of the present invention;

fig. 6 is a schematic diagram of a BiLattice-LSTM layer of an application example provided by an embodiment of the present invention;

fig. 7 is a schematic diagram of a BiLattice-LSTM model of an application example provided by an embodiment of the present invention;

FIG. 8 is a block diagram of a named entity recognition apparatus according to an embodiment of the present invention;

fig. 9 is a block diagram of another named entity recognition apparatus according to an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.

In the related art, early named entity recognition was mainly based on rules and statistics; those methods mainly used lexical, syntactic, and semantic rule templates manually crafted by linguists, and the effect was not ideal. With the development of Deep Learning (DL), models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been applied more and more widely; however, the named entity recognition schemes in the related art still have a poor recognition effect, and the accuracy of named entity recognition is low.

Therefore, embodiments of the present invention provide a named entity recognition method and apparatus, a storage medium, and an electronic device, which improve the accuracy of Chinese named entity recognition so as to obtain a better recognition effect. Several embodiments of the invention are described below.

Embodiment One

Fig. 1 shows a flowchart of a named entity recognition method. As shown in fig. 1, this embodiment provides a named entity recognition method that includes steps S110 to S130:

Step S110, acquiring a text to be recognized.

In practical applications, the procurement of outsourced materials relies entirely on manually parsing air-conditioner wiring diagrams to form a bill of materials (BOM) for price verification; this process depends on a large amount of manual work and is time-consuming and labor-intensive. Raw-material extraction and usage statistics for air-conditioner wiring diagrams can therefore be completed by combining advanced technologies such as artificial-intelligence semantic analysis and image analysis. Accordingly, the text to be recognized in this embodiment may come from an air-conditioner wiring diagram: by recognizing the named entities in the wiring diagram with this method, the raw materials and their corresponding usage can be accurately extracted to form an accurate bill of materials for price verification. It should be understood that the text to be recognized may be any unstructured text and is not limited to air-conditioner wiring diagrams.

Step S120, performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized to obtain a feature vector of the text to be recognized.

Compared with semantic representation based on character vectors alone, performing bidirectional semantic representation based on both the character vectors and the word vectors of the text to be recognized, as in this embodiment, fuses in the potential word information of the text and makes effective use of word information, so that entity segmentation errors and the propagation of analysis errors in named entity recognition are avoided and the feature vector of the text to be recognized is obtained accurately.

In some embodiments, as shown in fig. 2, performing bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized in step S120 to obtain the feature vector of the text to be recognized includes steps S121 to S122:

step S121, performing initial semantic representation on the text to be recognized by using a preset BERT (bidirectional Encoder Representations from transformations) semantic representation layer to obtain word vectors and word vectors of the text to be recognized.

In practical applications, the text of the technical-requirements part of an air-conditioner wiring diagram is taken as the text to be recognized and encoded. The BERT model has a strong semantic representation capability: the encoding part performs semantic representation with a BERT model trained in advance in the preset BERT semantic representation layer and trains character vectors and word vectors separately, so that the characters and words of the text to be recognized are converted into character-vector and word-vector form for the subsequent bidirectional semantic representation that yields the feature vector of the text to be recognized.
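The following is a minimal sketch, not the patent's implementation, of how character vectors could be obtained from a pre-trained Chinese BERT via the Hugging Face transformers library; the checkpoint name bert-base-chinese and the mean-pooling used for the word vector are assumptions.

```python
# Minimal sketch (assumptions: Hugging Face transformers is installed and
# "bert-base-chinese" stands in for the patent's pre-trained BERT model).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

text = "西安体育运动会"  # example text to be recognized
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = bert(**inputs)

# bert-base-chinese tokenizes Chinese text character by character, so each
# position of last_hidden_state is one character's vector.
char_vectors = out.last_hidden_state[0, 1:-1]  # drop [CLS] and [SEP]

# One possible word vector (an assumption, not the patent's method): the
# mean of the vectors of the word's characters, e.g. "西安" = chars 0..1.
word_vector = char_vectors[0:2].mean(dim=0)
print(char_vectors.shape, word_vector.shape)  # (7, 768) and (768,)
```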

In some embodiments, as shown in fig. 3, performing initial semantic representation on the text to be recognized by using the preset BERT semantic representation layer in step S121 to obtain the character vectors and word vectors of the text to be recognized includes steps S121-1 to S121-2:

and S121-1, extracting characters and words in the text to be recognized.

Further, the step S121-1 of extracting characters and words in the text to be recognized includes:

a. and utilizing a BIO (B-begin, I-inside, O-outside) labeling form to cut each Chinese character in the text to be recognized, and extracting the characters in the text to be recognized.

b. Segmenting the text to be recognized into words using the jieba word segmentation tool and the pre-constructed dictionary, and extracting the words in the text to be recognized (a minimal sketch of steps a and b follows).
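A minimal sketch of steps a and b, assuming the jieba package; the user-dictionary file name is a placeholder, not a file named in the patent.

```python
# Minimal sketch of character extraction (step a) and dictionary-guided word
# segmentation (step b). "domain_dict.txt" is a placeholder for the
# pre-constructed dictionary.
import jieba

jieba.load_userdict("domain_dict.txt")  # pre-constructed dictionary (assumed)

text = "西安体育运动会"
chars = list(text)             # one token per Chinese character, for BIO labels
words = list(jieba.cut(text))  # word segmentation guided by the dictionary

print(chars)  # ['西', '安', '体', '育', '运', '动', '会']
print(words)  # e.g. ['西安', '体育', '运动会'], depending on the dictionary
```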

Step S121-2, obtaining the character vectors and word vectors of the text to be recognized by applying a pre-trained BERT model to the extracted characters and words.

Step S122, inputting the character vectors and word vectors of the text to be recognized into a preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized.

In practical applications, the preset BiLattice-LSTM model in the preset BiLattice-LSTM layer performs further semantic representation on the basis of the character vectors and word vectors of the text to be recognized; by making full use of word information, it effectively learns the information of the words in the text and obtains the semantically encoded feature vector of the text to be recognized.

Step S122, inputting the character vectors and word vectors of the text to be recognized into the preset BiLattice-LSTM layer, and performing bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized, includes:

step S122-1, training word vectors and word vectors of the text to be recognized by adopting a BiLSTM (bidirectional LSTM) model to obtain a feature vector fusing context information.

BiLSTM (bidirectional long short-term memory) is a bidirectional network formed by combining a forward LSTM and a backward LSTM; it captures bidirectional semantic dependencies well. The LSTM structure in the BiLSTM model of this embodiment is a Lattice LSTM structure; the Lattice LSTM structure includes at least a word-level gate unit, and the words obtained through different paths are sent to their corresponding characters through the gate unit.

For a named entity recognition task, a traditional neural network model has difficulty making use of context information. In order to obtain continuously characterized context information, a BiLSTM model is adopted for training to obtain text feature vectors that fuse contextual features, realizing bidirectional semantic representation. On this basis, the LSTM structure in the BiLSTM model adopts a Lattice LSTM structure that includes at least a word-level gate unit, forming the BiLattice-LSTM model; through the BiLattice-LSTM model, the words obtained through different paths can be sent to their corresponding characters, so that context information is fully fused and potential word information is incorporated during the bidirectional semantic representation.

In practical applications, a lattice structure can be constructed by matching a sentence against the preset dictionary, which resolves the ambiguity of potential named entities in the text to be recognized. The lattice in the Lattice LSTM structure contains an exponential number of paths, so by constructing the Lattice LSTM structure the flow of information from the beginning of the sentence to its end can be controlled automatically (see the sketch below).
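As a sketch only, a plain bidirectional LSTM over the character vectors looks like this in PyTorch; the lattice word-level gating (sketched separately in the application example below) is omitted here, and all dimensions are assumptions.

```python
# Minimal sketch: bidirectional LSTM producing context-fused feature vectors.
# The lattice word-level gating is omitted; dimensions are assumptions.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, input_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, char_vectors: torch.Tensor) -> torch.Tensor:
        # char_vectors: (batch, seq_len, input_dim), e.g. BERT character vectors
        features, _ = self.bilstm(char_vectors)
        return features  # (batch, seq_len, 2 * hidden_dim), both directions

encoder = BiLSTMEncoder()
x = torch.randn(1, 7, 768)   # fake character vectors for a 7-character sentence
print(encoder(x).shape)      # torch.Size([1, 7, 512])
```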

Step S130, obtaining a named entity recognition result based on the feature vector.

In some embodiments, as shown in fig. 4, the obtaining of the named entity recognition result based on the feature vector in step S130 includes steps S131 to S132:

and S131, inputting the feature vector into a preset attention mechanism layer to obtain a corresponding attention weight vector.

In practical applications, the semantically encoded feature vectors obtained from the BiLattice-LSTM layer have difficulty capturing the semantic weights of important words. The feature vector is therefore input into a preset attention mechanism layer, whose attention mechanism focuses on the weight vectors of the important words. The attention mechanism layer can fully identify the weights of the relevant important words and make full use of the information carried by the input; after the weights of the important words are attended to, the named entity recognition effect is effectively improved.
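A minimal sketch of one common way to realize such a layer, additive token-level attention; the scoring function is an assumption, since the patent does not specify it.

```python
# Minimal sketch: additive attention assigning one weight per token and
# re-weighting the feature vectors. The scoring form is an assumption.
import torch
import torch.nn as nn

class TokenAttention(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feature_dim, feature_dim),
            nn.Tanh(),
            nn.Linear(feature_dim, 1),
        )

    def forward(self, features: torch.Tensor):
        # features: (batch, seq_len, feature_dim) from the BiLattice-LSTM layer
        weights = torch.softmax(self.score(features).squeeze(-1), dim=-1)
        weighted = features * weights.unsqueeze(-1)  # emphasize important words
        return weighted, weights

attn = TokenAttention()
feats = torch.randn(1, 7, 512)
weighted, w = attn(feats)
print(w)  # attention weight vector; sums to 1 per sentence
```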

Step S132, optimizing the feature vector based on the preset conditional random field model and the attention weight vector to obtain an optimal feature vector as the named entity recognition result.

The conditional random field model combines the characteristics of the maximum entropy model and the hidden Markov model; it can express long-distance dependencies and overlapping features, and it better solves problems such as label (classification) bias.
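A minimal sketch of this step using the third-party pytorch-crf package; the tag inventory and feature size are assumptions for illustration.

```python
# Minimal sketch: CRF training loss and Viterbi decoding over the
# attention-weighted features (pip install pytorch-crf). Tags are assumed.
import torch
import torch.nn as nn
from torchcrf import CRF

num_tags = 7  # e.g. O, B-PER, I-PER, B-ORG, I-ORG, B-LOC, I-LOC (assumption)
emit = nn.Linear(512, num_tags)        # features -> per-tag emission scores
crf = CRF(num_tags, batch_first=True)

features = torch.randn(1, 7, 512)      # attention-weighted feature vectors
emissions = emit(features)

gold_tags = torch.zeros(1, 7, dtype=torch.long)  # placeholder gold labels
loss = -crf(emissions, gold_tags)      # negative log-likelihood for training

best_paths = crf.decode(emissions)     # globally optimal tag sequence
print(loss.item(), best_paths)
```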

In this embodiment, bidirectional semantic representation is performed on the acquired text to be recognized based on its character vectors and word vectors to obtain the feature vector of the text, and the named entity recognition result is then obtained from that feature vector. In this way, potential word information can be fused into the named entity recognition process and word information effectively utilized, entity segmentation errors and the propagation of analysis errors in named entity recognition are avoided, and the feature vector of the text to be recognized is obtained accurately.

The technical solution of the method is further explained below with reference to an application example.

This scheme is a named entity recognition method based on BiLattice-LSTM. Semantic encoding is performed using the strong semantic representation capability of the BERT model, character vectors and word vectors are trained separately, and the initial semantic representation is completed. The character vectors and word vectors are then input into the BiLattice-LSTM layer for further semantic encoding, with potential vocabulary information fused into the BiLattice-LSTM model of that layer; the output is input into the attention mechanism layer to obtain the weight vectors of the text's attention (the important words), and finally these are input into the CRF layer to obtain the optimal encoded feature vector. As shown in fig. 5, the method is implemented as follows:

First, the BERT semantic representation layer: the texts to be recognized (such as the text of the technical-requirements part of an air-conditioner wiring diagram) X1, X2, ..., Xn are input into the BERT semantic representation layer through the text input layer. The text of the technical-requirements part of the wiring diagram is encoded in the BERT semantic representation layer: the encoding part performs semantic representation with the BERT model, and the characters and words of the text are converted into vector form.

Specifically, each Chinese character is split out using the BIO labeling scheme, and word segmentation is performed with the jieba word segmentation tool and a pre-constructed dictionary to obtain the words, completing the extraction of the characters and words in the text to be recognized; character vectors and word vectors are then trained separately with the BERT model to obtain the character-vector and word-vector representations of the text.

Secondly, the BiLattice-LSTM layer: the character vectors and word vectors are input into the BiLattice-LSTM layer to obtain a further representation of the text to be recognized. Compared with the character-only input representation in the related art, BiLattice-LSTM uses vocabulary information and thereby effectively avoids entity segmentation errors and the influence of analysis errors. Compared with the traditional LSTM model, the BiLattice-LSTM model can learn the word information in the text to be recognized more effectively and obtain semantically encoded feature vectors. A schematic diagram of the BiLattice-LSTM layer is shown in fig. 6: the LSTM structure in the BiLSTM model adopts a Lattice LSTM structure, forming a forward Lattice LSTM and a backward Lattice LSTM between the input and the output; the forward and backward Lattice LSTMs together form the BiLattice structure, that is, the BiLattice-LSTM model of this method. Using the lattice-structured LSTM, the potential word information in a sentence is blended into the character-granularity LSTM; the constructed Lattice LSTM structure can automatically control the flow of information from the beginning of the sentence to its end and can resolve the ambiguity of potential named entities in the text to be recognized. As shown in fig. 7, the Lattice LSTM structure includes at least a word-level gate unit (cell structure), through which the words obtained via different paths are sent to their corresponding characters. For example, for the characters shown in fig. 7, all the word-level information, such as "Xi'an", "sports", "stadium", and "games", is fused, and the words obtained through different paths are dynamically routed to the corresponding characters through the gate unit. Trained on NER data, Lattice LSTM can automatically find more word information in the text and so achieve better NER performance. In contrast to NER that relies solely on character information, Lattice LSTM is robust to word segmentation errors and applies explicit word information to the character sequence labels.
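For concreteness, the word-to-character gate fusion can be sketched as below, following the Lattice LSTM formulation of Zhang & Yang (2018) rather than anything stated in the patent; the function name and shapes are ours.

```python
# Minimal sketch of the word-level gate fusion in a Lattice LSTM character
# cell, following Zhang & Yang (2018); names and shapes are illustrative.
import torch

def fuse_word_cells(char_candidate, char_gate, word_cells, word_gates):
    # char_candidate: candidate character cell state, shape (hidden,)
    # char_gate:      the character's input gate (pre-normalization), (hidden,)
    # word_cells:     cell states of dictionary words ending here, (k, hidden)
    # word_gates:     the corresponding word-level link gates, (k, hidden)
    gates = torch.cat([char_gate.unsqueeze(0), word_gates])    # (k + 1, hidden)
    cells = torch.cat([char_candidate.unsqueeze(0), word_cells])
    alpha = torch.softmax(gates, dim=0)  # normalize the gates element-wise
    return (alpha * cells).sum(dim=0)    # fused character cell state

hidden = 256
fused = fuse_word_cells(torch.randn(hidden), torch.randn(hidden),
                        torch.randn(2, hidden), torch.randn(2, hidden))
print(fused.shape)  # torch.Size([256])
```

The softmax over the stacked gates reproduces that paper's element-wise normalization: each hidden unit of the character cell becomes a convex combination of the candidate character state and the word cells ending at that character.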

Thirdly, the attention mechanism layer: in order to capture the semantic weight vectors of important words, after the encoded feature vectors are obtained from the BiLattice-LSTM layer, the attention mechanism is used to focus on the weight vectors of the important words, making full use of the information those feature vectors carry. Because each word may contribute differently to correctly recognizing a named entity, an attention mechanism layer is added after the BiLattice-LSTM layer; it accurately captures the weights of the important words and improves recognition accuracy, and attending to those weights effectively improves the named entity recognition effect.

Finally, the CRF layer: the conditional random field model combines the characteristics of the maximum entropy model and the hidden Markov model. The weight vectors and feature vectors determined by the attention mechanism layer are input into the CRF layer, which can obtain a globally optimal solution, that is, the globally optimal feature vector as the named entity recognition result.
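Tying the four layers together, a minimal end-to-end skeleton might read as follows; the wiring, dimensions, and the simplification of the lattice gating to a plain BiLSTM are all assumptions for illustration, not the patent's implementation.

```python
# Minimal end-to-end skeleton: character vectors -> BiLSTM -> attention -> CRF.
# The lattice gating is simplified to a plain BiLSTM here; see the gate-fusion
# sketch above for that part. All dimensions are illustrative.
import torch
import torch.nn as nn
from torchcrf import CRF

class NERModel(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=256, num_tags=7):
        super().__init__()
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.emit = nn.Linear(2 * hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, char_vectors, tags=None):
        feats, _ = self.bilstm(char_vectors)
        w = torch.softmax(self.attn_score(feats).squeeze(-1), dim=-1)
        feats = feats * w.unsqueeze(-1)        # attention-weighted features
        emissions = self.emit(feats)
        if tags is not None:
            return -self.crf(emissions, tags)  # training loss
        return self.crf.decode(emissions)      # predicted tag sequences

model = NERModel()
x = torch.randn(1, 7, 768)  # BERT character vectors for one sentence
print(model(x))             # e.g. [[0, 3, 4, 0, 0, 0, 0]]
```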

Embodiment Two

Fig. 8 is a block diagram of a named entity recognition apparatus. As shown in fig. 8, this embodiment provides a named entity recognition apparatus, including:

an obtaining module 710, configured to obtain a text to be recognized;

the training module 720 is configured to perform bidirectional semantic representation on the text to be recognized based on the character vectors and word vectors of the text to be recognized, so as to obtain a feature vector of the text to be recognized;

and the optimizing module 730 is configured to obtain a named entity recognition result based on the feature vector.

Fig. 9 is a block diagram of another named entity recognition apparatus. As shown in fig. 9, in some embodiments, the training module 720 includes:

the first representation module 721 is configured to perform initial semantic representation on the text to be recognized by using the preset BERT semantic representation layer, so as to obtain the character vectors and word vectors of the text to be recognized.

The second representation module 722 is configured to input the character vectors and word vectors of the text to be recognized into the preset BiLattice-LSTM layer, and perform bidirectional semantic representation on the text to be recognized to obtain the feature vector of the text to be recognized.

It is to be understood that the obtaining module 710 may be configured to execute the step S110 in the first embodiment, the training module 720 may be configured to execute the step S120 in the first embodiment, and the optimizing module 730 may be configured to execute the step S130 in the first embodiment.

It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed over a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or they may be separately fabricated into various integrated circuit modules, or multiple modules or steps thereof may be fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.

Embodiment Three

The present embodiment provides a storage medium, on which a computer program is stored, and when the computer program is executed by one or more processors, the method of the first embodiment is implemented.

In this embodiment, the storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk. The content of the method is described in the first embodiment, and is not described herein again.

Embodiment Four

This embodiment provides an electronic device, which includes a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, implements the method of the first embodiment.

In this embodiment, the Processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and is configured to perform the method in the above embodiments. The method implemented when the computer program running on the processor is executed may refer to the specific embodiment of the method provided in the foregoing embodiment of the present invention, and details thereof are not described herein.

In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. The system and method embodiments described above are merely illustrative.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

Although the embodiments of the present invention have been described above, the above descriptions are only for the convenience of understanding the present invention, and are not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
