Translation method, classification model training method, apparatus, device and storage medium


1. A method of translation, comprising:

obtaining a current processing unit of a source language text based on the participles in the source language text;

determining a classification result of the current processing unit by adopting a classification model;

and if the classification result is that the current processing unit can be independently translated, translating the current processing unit to obtain a translation result of the target language corresponding to the current processing unit.

2. The method of claim 1, wherein there is at least one participle, and the obtaining a current processing unit of the source language based on the participles in the source language text comprises:

selecting, in sequence, one participle from the at least one participle as a current participle;

forming a participle sequence from all the participles up to the current participle;

and taking the part of the participle sequence that cannot be translated independently as the current processing unit of the source language.

3. The method of claim 2, wherein said determining a classification result for the current processing unit using a classification model comprises:

forming a reference sequence based on a preset number of participles after the current participle;

and taking the word segmentation sequence and the reference sequence as the input of the classification model, and processing the input by adopting the classification model to determine the classification result of the current processing unit.

4. A training method of a classification model comprises the following steps:

processing the participles in the original sample to obtain at least one unit sample corresponding to the original sample;

obtaining label information corresponding to each unit sample in the at least one unit sample, wherein the label information is used for identifying whether the unit sample can be translated independently;

constructing training data by adopting each unit sample and label information corresponding to each unit sample;

and training a classification model by adopting the training data.

5. The method of claim 4, wherein the original sample comprises at least one participle, and the processing the original sample to obtain at least one unit sample corresponding to the original sample comprises:

selecting, in sequence, one participle from the at least one participle as a current participle;

and forming a unit sample from all the participles up to the current participle.

6. The method according to claim 4 or 5, wherein the original sample is a source language text, and the obtaining of the label information corresponding to each of the at least one unit sample comprises:

acquiring a whole sentence translation result of a target language corresponding to the source language text;

translating each unit sample to obtain a unit translation result of a target language corresponding to each unit sample;

and if the unit translation result is the same as at least part of the whole sentence translation result in content and the positions are correspondingly consistent, determining that the tag information is information for identifying the unit sample as a translatable unit.

7. The method according to claim 6, wherein the translating the respective unit samples to obtain the unit translation result of the target language corresponding to the respective unit samples comprises:

and taking each unit sample and a preset number of participles following each unit sample as the input of a translation model, and translating the input with the translation model to obtain the unit translation result of the target language corresponding to each unit sample.

8. A translation device, comprising:

the acquisition module is used for acquiring a current processing unit of the source language text based on the participles in the source language text;

the classification module is used for determining a classification result of the current processing unit by adopting a classification model;

and the translation module is used for translating the current processing unit to obtain a translation result of the target language corresponding to the current processing unit if the classification result indicates that the current processing unit can be independently translated.

9. The apparatus according to claim 8, wherein there is at least one participle, and the obtaining module is specifically configured to:

selecting, in sequence, one participle from the at least one participle as a current participle;

forming a participle sequence from all the participles up to the current participle;

and taking the part of the participle sequence that cannot be translated independently as the current processing unit of the source language.

10. The apparatus of claim 9, wherein the classification module is specifically configured to:

forming a reference sequence based on a preset number of participles after the current participle;

and taking the word segmentation sequence and the reference sequence as the input of the classification model, and processing the input by adopting the classification model to determine the classification result of the current processing unit.

11. A training apparatus for classification models, comprising:

the processing module is used for processing the participles in the original sample to obtain at least one unit sample corresponding to the original sample;

an obtaining module, configured to obtain tag information corresponding to each unit sample in the at least one unit sample, where the tag information is used to identify whether the unit sample can be translated separately;

the construction module is used for constructing training data by adopting each unit sample and the label information corresponding to each unit sample;

and the training module is used for training a classification model by adopting the training data.

12. The apparatus of claim 11, wherein the original sample comprises at least one participle, and the processing module is specifically configured to:

selecting, in sequence, one participle from the at least one participle as a current participle;

and forming a unit sample from all the participles up to the current participle.

13. The apparatus of claim 11 or 12, wherein the original sample is source language text, and the obtaining module is specifically configured to:

acquiring a whole sentence translation result of a target language corresponding to the source language text;

translating each unit sample to obtain a unit translation result of a target language corresponding to each unit sample;

and if the unit translation result is the same as at least part of the whole sentence translation result in content and the positions are correspondingly consistent, determining that the tag information is information for identifying the unit sample as a translatable unit.

14. The apparatus of claim 13, wherein the acquisition module is specifically configured to:

and taking each unit sample and a preset number of participles following each unit sample as the input of a translation model, and translating the input with the translation model to obtain the unit translation result of the target language corresponding to each unit sample.

15. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.

16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.

17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Background

A simultaneous interpretation system generally includes an Automatic Speech Recognition (ASR) system and a Machine Translation (MT) system: the ASR system performs speech recognition on source language speech to obtain the source language text corresponding to the source language speech, and the MT system translates the source language text to obtain the target language text corresponding to the source language text.

In simultaneous interpretation and other similar scenarios, the balance between translation quality and translation time delay needs to be addressed.

Disclosure of Invention

The present disclosure provides a translation method, a training method of a classification model, corresponding apparatuses, a device, and a storage medium.

According to an aspect of the present disclosure, there is provided a translation method including: obtaining a current processing unit of a source language text based on the participles in the source language text; determining a classification result of the current processing unit by adopting a classification model; and if the classification result is that the current processing unit can be independently translated, translating the current processing unit to obtain a translation result of the target language corresponding to the current processing unit.

According to another aspect of the present disclosure, there is provided a training method of a classification model, including: processing the participles in the original sample to obtain at least one unit sample corresponding to the original sample; obtaining label information corresponding to each unit sample in the at least one unit sample, wherein the label information is used for identifying whether the unit sample can be translated independently; constructing training data by adopting each unit sample and label information corresponding to each unit sample; and training a classification model by adopting the training data.

According to another aspect of the present disclosure, there is provided a translation apparatus including: the acquisition module is used for acquiring a current processing unit of the source language text based on the participles in the source language text; the classification module is used for determining a classification result of the current processing unit by adopting a classification model; and the translation module is used for translating the current processing unit to obtain a translation result of the target language corresponding to the current processing unit if the classification result indicates that the current processing unit can be independently translated.

According to another aspect of the present disclosure, there is provided a training apparatus for classification models, including: the processing module is used for processing the participles in the original sample to obtain at least one unit sample corresponding to the original sample; an obtaining module, configured to obtain tag information corresponding to each unit sample in the at least one unit sample, where the tag information is used to identify whether the unit sample can be translated separately; the construction module is used for constructing training data by adopting each unit sample and the label information corresponding to each unit sample; and the training module is used for training a classification model by adopting the training data.

According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above aspects.

According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of the above aspects.

According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the above aspects.

According to the technical solutions of the present disclosure, translation quality and translation time delay can be effectively balanced.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;

FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;

FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;

FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;

FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;

FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;

FIG. 7 is a schematic diagram according to a seventh embodiment of the present disclosure;

FIG. 8 is a schematic diagram according to an eighth embodiment of the present disclosure;

FIG. 9 is a schematic diagram according to a ninth embodiment of the present disclosure;

fig. 10 is a schematic diagram of an electronic device for implementing any one of the translation method or the training method of the classification model according to the embodiment of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

For simultaneous interpretation, high translation quality and low translation delay are important requirements. Generally, the more input information to a translation model, the higher the translation quality, but the higher the translation delay, and therefore, the balance between the translation quality and the translation delay needs to be considered.

Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure, where this embodiment provides a translation method, including:

101. based on the participles in the source language text, a current processing unit in the source language is obtained.

102. And determining the classification result of the current processing unit by adopting a classification model.

103. If the classification result is that the current processing unit is a translatable unit, translating the current processing unit to obtain a translation result of a target language corresponding to the current processing unit.
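As a minimal illustration of steps 101-103 only, the sketch below processes an already-segmented list of participles unit by unit; the classify_unit and translate callables are hypothetical stand-ins for the classification model and the translation model, and m (the number of look-ahead participles) is an assumed parameter, not a value fixed by this disclosure.

```python
from typing import Callable, List


def streaming_translate(
    participles: List[str],
    classify_unit: Callable[[List[str], List[str]], bool],
    translate: Callable[[List[str]], str],
    m: int = 2,
) -> List[str]:
    """Translate a stream of participles unit by unit instead of sentence by sentence."""
    outputs: List[str] = []
    pending: List[str] = []  # part of the participle sequence not yet translated independently
    for i, participle in enumerate(participles):
        pending.append(participle)                   # step 101: form the current processing unit
        reference = participles[i + 1 : i + 1 + m]   # look-ahead participles (may be fewer at the end)
        if classify_unit(pending, reference):        # step 102: classification result
            outputs.append(translate(pending))       # step 103: translate the translatable unit
            pending = []                             # the translated part is removed from the sequence
    return outputs
```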

Taking simultaneous interpretation as an example, as shown in fig. 2, the simultaneous interpretation system may include an ASR system and an MT system. The ASR system performs speech recognition on the source language speech to obtain the source language text corresponding to the source language speech, and the MT system translates the source language text to obtain the target language text corresponding to the source language text. In the embodiments of the present disclosure, the source language is Chinese and the target language is English.

The source language text may include at least one participle and may be expressed as X = {x1, x2, ..., xT}, where X denotes the source language text, xi (i = 1, 2, ..., T) denotes the i-th participle in the source language text, and T is the total number of participles in the source language text.

The source language text may be segmented into the at least one participle using any word segmentation method in the related art. For example, the source language text is a Chinese sentence meaning "I went to the park at 10 in the morning"; after word segmentation, the corresponding participles (glossed in English here) are "morning, 10, o'clock, I, went, strolling, park", where different participles are separated by commas.

In order to ensure translation quality, translation is generally performed in units of sentences. For example, assuming that the sentence in the above example is one sentence, the translation model needs to wait until the whole sentence "morning, 10, o'clock, I, went, strolling, park" has been received before producing the corresponding translation result, such as "At 10 a.m. I went to the park". Translating in units of sentences therefore has a high time delay.

In order to reduce the time delay, translation may instead be performed in units of participles, for example by starting to translate after a fixed number of participles has been received. Based on the above example, the system might wait until the participle "10" is received and then translate "morning, 10". However, this approach considers only the number of participles and may therefore result in poor translation quality.

In order to balance translation quality and translation time delay, after the current processing unit is obtained, whether the current processing unit can be translated independently can be judged, and when the current processing unit can be translated independently, the current processing unit is translated.

A unit that is "individually translatable" may also be referred to as a "translatable unit (MU)", which refers to the smallest unit whose translation result is not affected by subsequent input.

For example, in the above example, the initial translation result of "morning" is "morning". As subsequent input arrives, for example when the input is updated to "morning, 10, o'clock", the corresponding translation result is updated to "At 10 a.m."; since the translation result of "morning" is affected by the subsequent input, "morning" cannot serve as a translatable unit. In contrast, the translation result of "morning, 10, o'clock" is "At 10 a.m.", and as further input arrives, for example when the input is updated to "morning, 10, o'clock, I", the translation result of the unit "morning, 10, o'clock" remains "At 10 a.m."; since the subsequently input "I" has no influence on that translation result, "morning, 10, o'clock" can serve as a translatable unit.

Because the translation result of the current processing unit is not influenced by the subsequent input when the current processing unit is a translatable unit or can be translated independently, the translation quality can be ensured.

In the embodiment, the translation is performed on the current processing unit, which is obtained based on word segmentation, so that the translation can be performed by taking the current processing unit as a unit instead of a sentence as a unit, and the translation time delay can be reduced; by determining the classification result of the current processing unit, the current processing unit is translated when the current processing unit can be translated independently, so that the translation quality can be ensured, and the translation quality and the translation time delay can be balanced.

In some embodiments, the obtaining the current processing unit of the source language based on the participles in the source language text comprises: selecting, in sequence, one participle from the at least one participle as a current participle; forming a participle sequence from all the participles up to the current participle; and taking the part of the participle sequence that cannot be translated independently as the current processing unit of the source language.

Here, "in sequence" means in chronological order. For example, based on the above example, "morning" is selected as the current participle at the first time step, and "10" is selected as the current participle at the second time step.

"Before the current participle" here includes the current participle itself. Taking the second time step as an example, the corresponding participle sequence is "morning, 10".

In its initial state, every participle in the participle sequence belongs to the part that cannot be translated independently. As current processing units are classified, parts that can be translated independently may appear in the participle sequence; such parts are removed, and the remaining part serves as the current processing unit.

For example, at the first time step, the participle sequence is "morning", and "morning" cannot yet be translated independently, so "morning" is the current processing unit at the first time step; assume that, through the classification processing of the classification model, it is determined that "morning" cannot be translated independently. At the second time step, the participle sequence is "morning, 10"; since "morning" cannot be translated independently and the initial state of "10" is also a part that cannot be translated independently, the current processing unit at the second time step is "morning, 10"; assume that, through the processing of the classification model, it is determined that "morning, 10" cannot be translated independently. Similarly, at the third time step, the participle sequence is "morning, 10, o'clock"; since "morning, 10" cannot be translated independently and the initial state of "o'clock" is also a part that cannot be translated independently, the current processing unit at the third time step is "morning, 10, o'clock"; assume that, through the processing of the classification model, "morning, 10, o'clock" is determined to be independently translatable. Then at the next, i.e. the fourth, time step, the participle sequence is "morning, 10, o'clock, I"; since "morning, 10, o'clock" is a part that can be translated independently, it is removed, and the current processing unit corresponding to the fourth time step is "I".

By selecting the current participle in sequence and obtaining the current processing unit based on the current participle, the current processing unit can be classified and translated in sequence, and the method accords with the scene of sequential execution under the actual translation condition.
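The following short trace is a sketch, not part of the disclosed implementation: it replays the bookkeeping described above on the English glosses of the example sentence, with a simple stub standing in for the classification model's (assumed) decisions.

```python
# Gloss tokens for the Chinese example sentence used above.
participles = ["morning", "10", "o'clock", "I", "went", "strolling", "park"]
# Assumed classifier decisions for illustration: units ending at these participles
# are treated as independently translatable, matching the worked example's outcome.
boundary_tokens = {"o'clock", "strolling", "park"}

pending = []
for t, p in enumerate(participles, start=1):
    pending.append(p)                    # current processing unit at time step t
    translatable = p in boundary_tokens  # stand-in for the classification model's decision
    print(t, pending, "-> translatable" if translatable else "-> not yet translatable")
    if translatable:
        pending = []                     # the independently translatable part is removed
```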

As shown in fig. 2, after the current processing unit is obtained, the classification model may be used to classify the current processing unit to obtain a classification result corresponding to the current processing unit.

The classification model is a binary classification model. Specifically, the classification result is either that the current processing unit can be translated independently or that it cannot.

In some embodiments, the determining the classification result of the current processing unit by using the classification model includes: forming a reference sequence based on a preset number of participles after the current participle; and taking the word segmentation sequence and the reference sequence as the input of the classification model, and processing the input by adopting the classification model to determine the classification result of the current processing unit.

Wherein, the "next" in the current word segmentation does not include the current word segmentation, the preset number may be represented by m, m is the number of the reference word, and taking m as 2 as an example, if the current word segmentation is xt, the reference sequence may be represented as: the reference sequence is { x (T +1) }, x (T + m) }, and is selected to be null for portions where T + m is greater than T.

As shown in FIG. 3, for source language text, a segmentation sequence and a reference sequence can be obtained based on the current segmentation, the input of the classification model comprises the segmentation sequence and the reference sequence, and the output of the classification model is the classification result of the current processing unit. Wherein, because the input of the classification model comprises a word segmentation sequence, the output can also be regarded as the classification result of the word segmentation sequence.

By taking the word segmentation sequence and the reference sequence as the input of the classification model, the accuracy of the classification result can be improved.
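The disclosure does not specify the classifier architecture, so the following is only an illustrative sketch: one simple way to feed both sequences to a binary text classifier is to join them with a separator token. The "[SEP]" token, the threshold, and the model(text) -> probability interface are all assumptions made for this example.

```python
from typing import Callable, List


def build_classifier_input(participle_sequence: List[str],
                           reference_sequence: List[str],
                           sep: str = "[SEP]") -> str:
    """Join the participle sequence and the reference sequence into one classifier input."""
    return " ".join(participle_sequence) + f" {sep} " + " ".join(reference_sequence)


def is_translatable(model: Callable[[str], float],
                    participle_sequence: List[str],
                    reference_sequence: List[str],
                    threshold: float = 0.5) -> bool:
    """Binary classification result: can the current processing unit be translated independently?"""
    text = build_classifier_input(participle_sequence, reference_sequence)
    return model(text) >= threshold
```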

If the current processing unit is a translatable unit, there is no need to wait for subsequent input: the current processing unit can be translated immediately (simultaneously) and the translation result can be output. The output form may be text or speech; for example, the translated text of the target language corresponding to the current processing unit may be shown on a display screen, or speech synthesis may be performed on the translated text to obtain speech in the target language, which is then played through an output device such as a speaker.

Based on the above example, assume that three independently translatable units, i.e. three translatable units, are obtained: "morning, 10, o'clock", "I, went, strolling", "park". As shown in fig. 4, based on these translatable units, the translation result of each translatable unit can be obtained immediately (denoted "simultaneous interpretation translation results"), without waiting for the whole sentence to be input before obtaining the translation result (denoted "normal text translation results").

The above embodiments take the application process as an example, wherein a classification model is involved, that is, the classification model is required to determine whether a processing unit is a translatable unit or whether the processing unit can be translated separately. The classification model may be trained prior to the application process. The training process of the classification model is explained below.

Fig. 5 is a schematic diagram of a fourth embodiment according to the present disclosure, which provides a method for training a classification model, the method including:

501. processing an original sample to obtain at least one unit sample corresponding to the original sample.

502. And acquiring label information corresponding to each unit sample in the at least one unit sample, wherein the label information is used for identifying whether the unit sample can be translated independently.

503. And constructing training data by adopting each unit sample and the label information corresponding to each unit sample.

504. And training a classification model by adopting the training data.

Still taking as an example the sentence meaning "I went to the park at 10 in the morning", this sentence can be used as an original sample during training.

In some embodiments, the processing the original sample to obtain at least one unit sample corresponding to the original sample comprises: selecting, in sequence, one participle from the at least one participle as a current participle; and forming a unit sample from all the participles up to the current participle.

Wherein, assuming that the original sample includes T participles, T unit samples can be obtained. Based on the above example, the unit samples ct corresponding to different time instants t may be as shown in table 1:

TABLE 1

t | ct (unit sample)
1 | morning
2 | morning, 10
3 | morning, 10, o'clock
4 | morning, 10, o'clock, I
5 | morning, 10, o'clock, I, went
6 | morning, 10, o'clock, I, went, strolling
7 | morning, 10, o'clock, I, went, strolling, park

Further, after the original sample is processed, a reference sample may also be obtained, where the reference sample ft refers to a sequence formed by a preset number (for example, m is 2) of participles after the current participle.

Thereafter, training data may be constructed based on the triplet < unit sample, reference sample, label information >.
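A minimal sketch of constructing such triples from one segmented original sample follows; label_fn is a hypothetical callback implementing the labeling rule described below, and m is the assumed reference-sample length.

```python
from typing import Callable, List, Tuple


def build_training_triples(
    participles: List[str],
    label_fn: Callable[[List[str]], int],
    m: int = 2,
) -> List[Tuple[List[str], List[str], int]]:
    """Build <unit sample, reference sample, label> triples from one original sample."""
    triples = []
    for t in range(1, len(participles) + 1):
        unit_sample = participles[:t]               # all participles up to the current one (ct)
        reference_sample = participles[t : t + m]   # next m participles (ft); may be empty at the end
        label = label_fn(unit_sample)               # lt: 1 = independently translatable, 0 = not
        triples.append((unit_sample, reference_sample, label))
    return triples
```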

Assuming that the label information is denoted by lt, where lt = 1 denotes that the unit sample can be translated independently and lt = 0 denotes that the unit sample cannot be translated independently, the training data may be as shown in table 2:

TABLE 2

t | ct (unit sample) | ft (reference sample) | lt (label)
1 | morning | 10, o'clock | 0
2 | morning, 10 | o'clock, I | 0
3 | morning, 10, o'clock | I, went | 1
4 | morning, 10, o'clock, I | went, strolling | 0
5 | morning, 10, o'clock, I, went | strolling, park | 0
6 | morning, 10, o'clock, I, went, strolling | park | 1
7 | morning, 10, o'clock, I, went, strolling, park | NULL (empty) | 1

By composing the unit samples based on the current participle, a plurality of unit samples can be generated based on one original sample, expanding the number of unit samples.

In some embodiments, the obtaining of the label information corresponding to each unit sample in the at least one unit sample includes: acquiring a whole sentence translation result of a target language corresponding to the source language text; translating each unit sample to obtain a unit translation result of a target language corresponding to each unit sample; and if the unit translation result is the same as at least part of the whole sentence translation result in content and the positions are correspondingly consistent, determining that the tag information is information for identifying the unit sample as a translatable unit.

If the unit translation result has the same content as at least part of the whole sentence translation result, with the positions corresponding consistently, the unit translation result may be referred to as a prefix of the whole sentence translation result.

Assuming that the unit translation result corresponding to the unit sample at time step t is denoted by yt, the source language text, the whole sentence translation result and the unit translation results may be as shown in FIG. 6. Referring to FIG. 6, since the unit translation result of "morning, 10, o'clock" is "At 10 a.m", which is a prefix of the whole sentence translation result, the label information lt corresponding to "morning, 10, o'clock" is 1; similarly, since the unit translation result of "morning, 10, o'clock, I, went, strolling" is "At 10 a.m I went to", which is also a prefix of the whole sentence translation result, the label information lt corresponding to "morning, 10, o'clock, I, went, strolling" is 1.

Whether the corresponding unit sample can be translated independently is determined based on whether the unit translation result is the prefix of the whole sentence translation result, so that the semantic integrity of the unit which can be translated independently can be ensured, and the translation quality is improved.
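A minimal sketch of this prefix-based labeling rule is given below, assuming a hypothetical translate(tokens) -> str wrapper around the translation model and treating "same content at corresponding positions" as a string-prefix check.

```python
from typing import Callable, List


def label_unit_sample(
    unit_sample: List[str],
    full_sentence: List[str],
    translate: Callable[[List[str]], str],
) -> int:
    """Label a unit sample by checking whether its translation is a prefix of the whole-sentence translation."""
    full_translation = translate(full_sentence)   # whole sentence translation result
    unit_translation = translate(unit_sample)     # unit translation result
    # Same content as the leading part of the whole-sentence result, at corresponding
    # positions, means the unit sample is marked as a translatable unit (label 1).
    return 1 if full_translation.startswith(unit_translation) else 0
```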

When the unit translation result of each unit sample is obtained, if an ordinary translation mode is adopted, that is, each unit sample alone is used as the input of the translation model and translated by the translation model, it may turn out that none of the unit samples shorter than the whole sentence can be translated independently, and only the whole-sentence unit sample of the original sample can be translated independently.

For example, the original sample is "A, in, Beijing, and, B, meeting". With the ordinary translation mode, generally only the whole sentence "A, in, Beijing, and, B, meeting" has its label information set to 1, and the label information of all the other unit samples is 0.

If the unit samples with label information 1 are too long, the translation time delay becomes too long when the classification model trained on these unit samples is applied.

In order to reduce the translation delay, the length of the unit sample as a translatable unit can be reduced as much as possible.

In some embodiments, the translating the unit samples to obtain the unit translation result of the target language corresponding to the unit samples comprises: taking each unit sample and a preset number of participles following each unit sample as the input of a translation model, and translating the input with the translation model to obtain the unit translation result of the target language corresponding to each unit sample.

The "preset number" corresponding to the translation is irrelevant to the preset number in the reference sample or the reference sequence, that is, the preset number corresponding to the translation can be represented by k, which is different from m in the reference sample or the reference sequence, and k represents that the translation is performed after delaying k word segments, and the translation mode can be called wait-k translation.

The wait-k translation mode has predictive capability: a correct translation result can be generated without waiting for the whole sentence to be input. For example, taking "A, in, Beijing, and, B, meeting" as an example, when k is 2 the corresponding translation result is as shown in fig. 7; that is, after the participle "Beijing" is received, the translation result "met" can already be predicted, and there is no need to wait until the participle "meeting" is received to translate "met".

Based on the wait-k translation mode, during simultaneous interpretation it can be determined that each of the six units "A", "in", "Beijing", "and", "B" and "meeting" is independently translatable, rather than only the whole sentence "A, in, Beijing, and, B, meeting" being independently translatable; each independently translatable unit can then be translated immediately, reducing the translation time delay.

When the unit translation results of the unit samples are obtained in the wait-k mode, shorter unit samples that can be translated independently can be obtained; a classification model trained on training data constructed from such unit samples can then identify shorter independently translatable units during translation, thereby reducing the translation time delay.
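A simplified sketch of this labeling-time translation is shown below; translate is again a hypothetical wrapper, k is the assumed delay, and a full wait-k decoder, which alternates reading source participles and emitting target words, is not modeled here; only the effect of limiting the look-ahead to k participles is shown.

```python
from typing import Callable, List


def wait_k_unit_translation(
    participles: List[str],
    t: int,
    translate: Callable[[List[str]], str],
    k: int = 2,
) -> str:
    """Obtain the unit translation result for the unit sample ending at position t,
    feeding the unit plus k look-ahead participles to the translation model."""
    unit_sample = participles[:t]            # the unit sample at time step t
    lookahead = participles[t : t + k]       # k participles following the unit sample
    # Limiting the look-ahead to k participles (rather than the whole sentence) tends to
    # produce shorter units whose translations are prefixes of the whole-sentence result.
    return translate(unit_sample + lookahead)
```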

In this embodiment, constructing the training data of the classification model from original samples expands the amount of training data; identifying through the label information whether a unit sample can be translated independently makes it possible to train a classification model that can recognize whether a unit can be translated independently, so that independently translatable units can be translated as they are identified, balancing translation quality and translation time delay.

Fig. 8 is a schematic diagram according to an eighth embodiment of the present disclosure, which provides a translation apparatus. As shown in fig. 8, the translation apparatus 800 includes: an acquisition module 801, a classification module 802, and a translation module 803.

The obtaining module 801 is configured to obtain a current processing unit of a source language text based on a word segmentation in the source language text; the classification module 802 is configured to determine a classification result of the current processing unit by using a classification model; the translation module 803 is configured to translate the current processing unit to obtain a translation result of the target language corresponding to the current processing unit if the classification result indicates that the current processing unit can be translated separately.

In some embodiments, there is at least one participle, and the obtaining module 801 is specifically configured to: selecting, in sequence, one participle from the at least one participle as a current participle; forming a participle sequence from all the participles up to the current participle; and taking the part of the participle sequence that cannot be translated independently as the current processing unit of the source language.

In some embodiments, the classification module 802 is specifically configured to: forming a reference sequence based on a preset number of participles after the current participle; and taking the word segmentation sequence and the reference sequence as the input of the classification model, and processing the input by adopting the classification model to determine the classification result of the current processing unit.

In the embodiment, the translation is performed on the current processing unit, which is obtained based on word segmentation, so that the translation can be performed by taking the current processing unit as a unit instead of a sentence as a unit, and the translation time delay can be reduced; by determining the classification result of the current processing unit, the current processing unit is translated when the current processing unit can be translated independently, so that the translation quality can be ensured, and the translation quality and the translation time delay can be balanced.

Fig. 9 is a schematic diagram of a ninth embodiment according to the present disclosure, which provides a training apparatus for classification models. As shown in fig. 9, the training apparatus 900 for classification models includes: a processing module 901, an acquisition module 902, a construction module 903, and a training module 904.

The processing module 901 is configured to process the word segmentation in the original sample to obtain at least one unit sample corresponding to the original sample; the obtaining module 902 is configured to obtain label information corresponding to each unit sample in the at least one unit sample, where the label information is used to identify whether the unit sample can be translated separately; the constructing module 903 is configured to construct training data by using the unit samples and the label information corresponding to the unit samples; the training module 904 is configured to train a classification model using the training data.

In some embodiments, the original sample includes at least one participle, and the processing module 901 is specifically configured to: selecting, in sequence, one participle from the at least one participle as a current participle; and forming a unit sample from all the participles up to the current participle.

In some embodiments, the original sample is a source language text, and the obtaining module 902 is specifically configured to: acquiring a whole sentence translation result of a target language corresponding to the source language text; translating each unit sample to obtain a unit translation result of a target language corresponding to each unit sample; and if the unit translation result is the same as at least part of the whole sentence translation result in content and the positions are correspondingly consistent, determining that the tag information is information for identifying the unit sample as a translatable unit.

In some embodiments, the obtaining module 902 is specifically configured to: taking each unit sample and a preset number of participles following each unit sample as the input of a translation model, and translating the input with the translation model to obtain the unit translation result of the target language corresponding to each unit sample.

In this embodiment, constructing the training data of the classification model from original samples expands the amount of training data; identifying through the label information whether a unit sample can be translated independently makes it possible to train a classification model that can recognize whether a unit can be translated independently, so that independently translatable units can be translated as they are identified, balancing translation quality and translation time delay.

It is to be understood that in the disclosed embodiments, the same or similar elements in different embodiments may be referenced.

It is to be understood that "first", "second", and the like in the embodiments of the present disclosure are used for distinction only, and do not indicate the degree of importance, the order of timing, and the like.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 10 illustrates a schematic block diagram of an example electronic device 1000 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 10, the electronic device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.

A number of components in the electronic device 1000 are connected to the I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the electronic device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

Computing unit 1001 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 1001 performs the respective methods and processes described above, such as a translation method or a training method of a classification model. For example, in some embodiments, the translation method or the training method of the classification model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto electronic device 1000 via ROM 1002 and/or communications unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the translation method or the training method of the classification model described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured by any other suitable means (e.g., by means of firmware) to perform the translation method or the training method of the classification model.

Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
