Object information translation and derivative information acquisition method and device


1. A method performed by a user device, comprising:

obtaining source end object information;

identifying a source object based on the source object information;

acquiring target information of a target object corresponding to the source object based on the position information of the user equipment, wherein the target object is available at a location corresponding to the position information of the user equipment;

and displaying the target information on a display device according to the personal information of the user associated with the requirement of the user, wherein the displayed target information comprises a part of highlighted information which is determined based on the requirement of the user.

2. The method of claim 1, wherein the source object information comprises source multimedia information of a source object, the source multimedia information comprising at least one of an image feature, an identification feature, and a text feature associated with the source object.

3. The method of claim 2, wherein identifying a source object comprises:

extracting at least one of image features, identification features and text features associated with the source end object based on source multimedia information of the source end object;

identifying the source object according to the extracted at least one of the image feature, the identification feature and the text feature associated with the source object.

4. The method of claim 1, wherein the target information comprises at least one of image features, identification features, textual features, and descriptive information associated with the target end object.

5. The method of claim 4, wherein the obtaining target information of a target object corresponding to the source object comprises:

searching attribute information corresponding to the preset attribute of the target object;

and determining the searched attribute information as the information associated with the description information.

6. The method of claim 1, wherein the obtaining target information of a target object corresponding to the source object comprises:

and determining the target information of the target object according to the position information of the user equipment and the position information of the target object.

7. The method of claim 1, wherein presenting the target information comprises:

displaying at least one piece of information of at least one target object candidate;

receiving an input signal for selecting at least one target object of the at least one target object candidate;

determining the selected target object as a target object corresponding to the source object, and displaying at least one piece of information of the target object;

wherein the at least one piece of information of the at least one target object candidate is displayed in order of priority of the at least one target object candidate.

8. The method of claim 7, further comprising:

and sending the information related to the selected target object and the source object information to a server.

9. The method of claim 1, wherein identifying a source object comprises:

presenting at least one piece of information of at least one source object candidate;

receiving an input signal for selecting at least one piece of information of the at least one source object candidate;

identifying the source object based on the selected at least one piece of information.

10. The method of claim 1, wherein the personal information of the user includes information relating to a language of the user, and wherein the personal information of the user associated with the requirement of the user includes at least one of:

schedule information of the user, interests of the user, hobbies of the user, environmental information of the user, motion state of the user and health state of the user.

11. The method of claim 1, wherein identifying a source object comprises:

extracting image features and/or text features based on the obtained source end object information;

and identifying the source object according to the extracted image features and/or text features.

12. The method of claim 11, wherein identifying a source object based on the extracted image features and/or text features comprises:

and identifying the source end object according to the preset object identification model and the extracted image feature and/or text feature.

13. The method of claim 12, wherein the object recognition model is pre-constructed by:

and aiming at each object class, training to obtain an object recognition model corresponding to the object class based on the image features and/or text features of the sampling objects belonging to the object class.

14. The method according to claim 13, wherein identifying a source object according to a preset object recognition model and the extracted image features and/or text features comprises:

identifying the object type of the source object according to the extracted image features and/or text features;

and identifying the source object according to the object identification model corresponding to the object type and the extracted image feature and/or text feature.

15. The method of any of claims 1-14, wherein said obtaining target information of a target object corresponding to the source object comprises:

determining language environments corresponding to a source end object and a target end object respectively;

and obtaining target information of the target end object corresponding to the source end object based on the language environments respectively corresponding to the source end object and the target end object.

16. The method of claim 15, wherein the linguistic environment corresponding to the source object is determined based on at least one of:

the language environment corresponding to the position detected by the position detection module, the language environment identified from the acquired source object information, the language environment corresponding to the source object searched in a preset object knowledge map database and the language environment set by a user;

the language environment corresponding to the target object is determined according to at least one of the following items:

the language environment corresponding to the position detected by the position detection module, the pre-designated language environment, the language environment set by the user and the language environment determined based on the personal information of the user.

17. The method of claim 15, wherein obtaining target information of a target object corresponding to a source object based on language environments corresponding to the source object and the target object respectively comprises:

selecting a corresponding object alignment model based on the language environments respectively corresponding to the source end object and the target end object;

and acquiring target information of a target end object corresponding to the identified source end object based on the selected object alignment model.

18. The method of claim 17, wherein obtaining target information for a target end object corresponding to the identified source end object based on the selected object alignment model comprises:

acquiring text features and/or image features corresponding to the source end object based on the source end object information;

and obtaining target information of a target end object corresponding to the identified source end object based on the acquired text feature and/or image feature and the selected object alignment model.

19. The method of claim 18, wherein the object alignment model is pre-constructed by:

determining text characteristics and/or image characteristics between a sampling source end object and a corresponding sampling target end object;

and training to obtain an object alignment model according to the determined text features and/or image features.

20. The method of claim 17, further comprising:

optimizing the object alignment model based on at least one of:

and the user aims at the feedback information of the target end object information obtained by translation, the language environment corresponding to the source end object and the language environment corresponding to the target end object.

21. The method according to any of claims 1-14, wherein there are a plurality of source objects, and the plurality of source objects belong to a combined object of a same category;

the obtaining of the target information of the target end object corresponding to the source end object includes at least one of:

aiming at source end combined objects corresponding to a plurality of source end objects, obtaining corresponding target end combined object information;

and respectively obtaining target information of the target end object corresponding to each source end object.

22. The method according to any one of claims 1-14, wherein when there are a plurality of said target objects, said presenting said target information comprises:

and arranging and sorting the target information of the target end objects and outputting the target information.

23. The method of claim 22, wherein the target information for each target object is ordered based on at least one of:

correlation with a source object, user behavior for target information of a target object, attribute information of the target object, and user personal information.

24. The method of claim 2, wherein the source multimedia information further comprises at least one of:

multimedia information corresponding to the source object, text information identified from the multimedia information corresponding to the source object, position information corresponding to the source object, searched information related to the source object, and information related to the source object input by a user.

25. The method of claim 24, wherein the multimedia information is collected in real time by a multimedia collection device, and the multimedia information collected in real time is used as the multimedia information corresponding to the source object.

26. The method of claim 25, wherein there are a plurality of source objects;

the method further comprises the following steps:

detecting the selection operation of a user on the target information of the target end object obtained by translation;

and positioning a source object corresponding to the target information of the target object selected by the user in the multimedia information collected in real time.

27. The method according to any one of claims 1-14, wherein presenting the target information comprises:

acquiring multimedia information in real time through multimedia acquisition equipment;

and positioning a target end object corresponding to the source end object in the multimedia information acquired in real time.

28. The method according to any of claims 1-14, wherein before obtaining target information of a target end object corresponding to a source end object, further comprising:

predicting the translation intention of a user, and pre-storing a processing model for off-line translation according to the translation intention of the user;

the obtaining target information of a target end object corresponding to the source end object based on the location information of the user equipment includes:

and acquiring target information of a target end object corresponding to the source end object by utilizing the stored processing model based on the position information of the user equipment.

29. The method of claim 28, wherein the user's translation intent is predicted from at least one of:

user schedule, user personal information, equipment environment information and equipment motion state.

30. The method according to any one of claims 1-14, wherein said presenting said target information comprises:

displaying the target information by at least one of the following modes: text, image, audio, video.

31. The method according to any one of claims 1-14, wherein said presenting said target information comprises:

the target information is adaptively adjusted based on at least one of the following items, and the adjusted target information is output:

the device type of the device, the storage state of the device, the current network condition, the power state of the device, and the personal information of the user.

32. The method according to any one of claims 1-14, further comprising:

acquiring derived information associated with the target end object and/or derived information associated with the source end object;

and outputting the acquired derivative information.

33. The method of claim 32, wherein obtaining derived information associated with the target-end object comprises:

searching attribute information corresponding to the preset attribute of the target object from a preset object knowledge map database;

confirming the searched attribute information as the derivative information related to the target end object;

acquiring derived information associated with the source object, including:

searching attribute information corresponding to the preset attribute of the source object from a preset object knowledge map database;

and confirming the searched attribute information as the derived information associated with the source object.

34. The method of claim 33, wherein the predetermined attribute is determined according to a class of the target end object and/or the source end object.

35. The method of claim 32, wherein obtaining derived information associated with the target-end object comprises:

determining derived information related to the target end object based on the position information corresponding to the target end object;

acquiring derived information associated with the source object, including:

and determining the derived information related to the source object based on the position information corresponding to the source object.

36. The method of claim 32, wherein if derived information associated with the target object and derived information associated with the source object are both obtained, outputting the obtained derived information comprises:

positioning the derived information needing to be highlighted according to the correlation between the derived information associated with the target end object and the derived information associated with the source end object;

and outputting the acquired derivative information, and highlighting the positioned derivative information.

37. The method of claim 32, wherein there are a plurality of source objects, and the plurality of source objects belong to a combined object of a same category;

acquiring derived information related to the target end object, wherein the derived information includes at least one of the following items:

aiming at source end combined objects corresponding to a plurality of source end objects, acquiring derivative information associated with corresponding target end combined object information;

and respectively acquiring the derivative information associated with the corresponding target end object aiming at each source end object.

38. The method of claim 32, wherein outputting the obtained derived information comprises:

determining a language environment corresponding to the obtained derived information according to the personal information of the user;

displaying the derived information based on the determined language context.

39. The method of claim 32, further comprising:

positioning the derived information needing to be highlighted according to the personal information of the user;

the located derivative information is highlighted.

40. The method of claim 32, further comprising:

and generating or changing related reminding events according to the personal information of the user and/or the acquired derived information.

41. The method of any of claims 1-14, wherein obtaining target information for a target end object corresponding to a source end object comprises:

acquiring source end object information to be sent to a receiver;

acquiring target information of a target end object corresponding to the source end object based on the language environment of the receiver and the acquired source end object information;

the displaying the target information comprises:

and sending the obtained target information to the receiving party.

42. An electronic device, comprising:

the object information translation unit is used for acquiring source object information, identifying a source object based on the source object information, and acquiring target information of a target object corresponding to the source object based on the position information of the user equipment; wherein the target object is available at a location corresponding to the location information of the user device;

and the information output unit is used for displaying the target information obtained by the object information translation unit on the display equipment according to the personal information of the user associated with the requirement of the user, wherein the output target information comprises highlighted information which is determined based on the requirement of the user.

43. The device of claim 42, wherein the source object information comprises source multimedia information of a source object, the source multimedia information comprising at least one of an image feature, an identification feature, and a text feature associated with the source object.

44. The apparatus according to claim 43, wherein the object information translation unit is configured to:

extracting at least one of image features, identification features and text features associated with the source end object based on source multimedia information of the source end object;

identifying the source object according to the extracted at least one of the image feature, the identification feature and the text feature associated with the source object.

45. The device of claim 42, wherein the target information comprises at least one of image features, identification features, textual features, and descriptive information associated with the target end object.

46. The apparatus according to claim 45, wherein the object information translation unit is configured to:

searching attribute information corresponding to the preset attribute of the target object;

and determining the searched attribute information as the information associated with the description information.

47. The apparatus according to claim 42, wherein the object information translation unit is configured to:

and determining the target information of the target object according to the position information of the user equipment and the position information of the target object.

48. The apparatus of claim 42, wherein the information output unit, when presenting the target information, is configured to:

displaying at least one piece of information of at least one target object candidate;

receiving an input signal for selecting at least one target object of the at least one target object candidate;

determining the selected target object as a target object corresponding to the source object, and displaying at least one piece of information of the target object;

wherein the at least one piece of information of the at least one target object candidate is displayed in order of priority of the at least one target object candidate.

49. The apparatus of claim 48, wherein the information output unit is further configured to:

and sending the information related to the selected target object and the source object information to a server.

50. The apparatus according to claim 42, wherein the object information translation unit is configured to:

presenting at least one piece of information of at least one source object candidate;

receiving an input signal for selecting at least one piece of information of the at least one source object candidate;

identifying the source object based on the selected at least one piece of information.

51. The device of claim 42, wherein the personal information of the user comprises information associated with a language of the user, and wherein the personal information of the user associated with the user's needs comprises at least one of:

schedule information of the user, interests of the user, hobbies of the user, environmental information of the user, motion state of the user and health state of the user.

52. The apparatus according to claim 42, wherein the object information translation unit, when identifying a source object, is configured to:

extracting image features and/or text features based on the obtained source end object information;

and identifying the source object according to the extracted image features and/or text features.

53. The apparatus according to claim 52, wherein the object information translation unit is configured to:

and identifying the source end object according to the preset object identification model and the extracted image feature and/or text feature.

54. The apparatus of claim 53, wherein the object recognition model is pre-constructed by:

and aiming at each object class, training to obtain an object recognition model corresponding to the object class based on the image features and/or text features of the sampling objects belonging to the object class.

55. The apparatus according to claim 54, wherein the object information translation unit is configured to:

identifying the object type of the source object according to the extracted image features and/or text features;

and identifying the source object according to the object identification model corresponding to the object type and the extracted image feature and/or text feature.

56. The apparatus according to any of claims 42-55, wherein the object information translation unit, when obtaining target information of a target end object corresponding to the source end object, is configured to:

determining language environments corresponding to a source end object and a target end object respectively;

and obtaining target information of the target end object corresponding to the source end object based on the language environments respectively corresponding to the source end object and the target end object.

57. The device of claim 56, wherein the language locale corresponding to the source object is determined based on at least one of:

the language environment corresponding to the position detected by the position detection module, the language environment identified from the acquired source object information, the language environment corresponding to the source object searched in a preset object knowledge map database and the language environment set by a user;

the language environment corresponding to the target object is determined according to at least one of the following items:

the language environment corresponding to the position detected by the position detection module, the pre-designated language environment, the language environment set by the user and the language environment determined based on the personal information of the user.

58. The apparatus according to claim 56, wherein the object information translation unit is configured to:

selecting a corresponding object alignment model based on the language environments respectively corresponding to the source end object and the target end object;

and acquiring target information of a target end object corresponding to the identified source end object based on the selected object alignment model.

59. The apparatus according to claim 58, wherein the object information translation unit, when acquiring target information of a target end object corresponding to the identified source end object based on the selected object alignment model, is configured to:

acquiring text features and/or image features corresponding to the source end object based on the source end object information;

and obtaining target information of a target end object corresponding to the identified source end object based on the acquired text feature and/or image feature and the selected object alignment model.

60. The apparatus according to claim 59, wherein the object alignment model is pre-constructed by:

determining text features and/or image features between a sampling source end object and a corresponding sampling target end object;

and training to obtain an object alignment model according to the determined text features and/or image features.

61. The apparatus according to claim 58, wherein the object information translation unit is further configured to:

optimizing the object alignment model based on at least one of:

and the user aims at the feedback information of the target end object information obtained by translation, the language environment corresponding to the source end object and the language environment corresponding to the target end object.

62. The apparatus of any of claims 42-55, wherein there are a plurality of source objects, and the plurality of source objects belong to a combined object of a same category;

the object information translation unit is configured to, when acquiring target information of a target object corresponding to a source object, perform at least one of:

aiming at source end combined objects corresponding to a plurality of source end objects, obtaining corresponding target end combined object information;

and respectively obtaining target information of the target end object corresponding to each source end object.

63. The apparatus according to any one of claims 42 to 55, wherein when there are a plurality of target end objects, the information output unit is configured to:

and arranging and sorting the target information of the target end objects and outputting the target information.

64. The apparatus of claim 63, wherein the target information for each target object is ordered according to at least one of:

correlation with a source object, user behavior for target object information, attribute information of a target object, and user personal information.

65. The apparatus of claim 43, wherein the source multimedia information comprises at least one of:

multimedia information corresponding to the source object, text information identified from the multimedia information corresponding to the source object, position information corresponding to the source object, searched information related to the source object, and information related to the source object input by a user.

66. The device of claim 65, wherein the multimedia information is collected by a multimedia collection device in real time, and the multimedia information collected in real time is used as the multimedia information corresponding to the source object.

67. The apparatus of claim 66, wherein there are a plurality of source objects;

the object information translation unit is further configured to:

detecting the selection operation of a user for the target end object information obtained by translation;

and positioning a source object corresponding to the target information of the target object selected by the user in the multimedia information collected in real time.

68. The apparatus according to any of claims 42-55, wherein the information output unit, when presenting the target information, is configured to:

acquiring multimedia information in real time through multimedia acquisition equipment;

and positioning a target end object corresponding to the source end object in the multimedia information acquired in real time.

69. The apparatus according to any one of claims 42 to 55, wherein the object information translation unit is further configured to:

predicting the translation intention of a user, and pre-storing a processing model for off-line translation according to the translation intention of the user;

the object information translation unit, when acquiring target information of a target end object corresponding to a source end object, is configured to:

and acquiring target information of a target end object corresponding to the source end object by using the stored processing model based on the acquired source end object information.

70. The apparatus of claim 69, wherein the user's translation intent is predicted from at least one of:

user schedule, user personal information, equipment environment information and equipment motion state.

71. The apparatus of any one of claims 42-55, wherein the information output unit, when presenting the target information, is configured to:

outputting the target information by at least one of: text, image, audio, video.

72. The apparatus of any one of claims 42-55, wherein the information output unit is configured to:

adaptively adjust the target information of the target end object based on at least one of the following items, and output the adjusted target information:

the device type of the device, the storage state of the device, the current network condition, the power state of the device, and the personal information of the user.

73. The apparatus of any one of claims 42-55, wherein the information output unit is further configured to:

acquiring derived information associated with the target end object and/or derived information associated with the source end object;

and outputting the acquired derivative information.

74. The apparatus according to claim 73, wherein the information output unit, when obtaining derived information associated with the target end object, is configured to:

searching attribute information corresponding to the preset attribute of the target object from a preset object knowledge map database;

confirming the searched attribute information as the derivative information related to the target end object;

acquiring derived information associated with the source object, including:

searching attribute information corresponding to the preset attribute of the source object from a preset object knowledge map database;

and confirming the searched attribute information as the derived information associated with the source object.

75. The device of claim 74, wherein the predetermined attribute is determined according to a class of the target end object and/or the source end object.

76. The apparatus according to claim 73, wherein the information output unit, when obtaining derived information associated with the target end object, is configured to:

determining derived information related to the target end object based on the position information corresponding to the target end object;

acquiring derived information associated with the source object, including:

and determining the derived information related to the source object based on the position information corresponding to the source object.

77. The apparatus of claim 73, wherein if derived information associated with the target object and derived information associated with the source object are obtained, the information output unit, when outputting the obtained derived information, is configured to:

positioning the derived information needing to be highlighted according to the correlation between the derived information associated with the target end object and the derived information associated with the source end object;

and outputting the acquired derivative information, and highlighting the positioned derivative information.

78. The apparatus of claim 73, wherein there are a plurality of source objects, and the plurality of source objects belong to a combined object of a same category;

the information output unit, when acquiring derived information associated with the target end object information, is configured to perform at least one of the following:

aiming at source end combined objects corresponding to a plurality of source end objects, acquiring derivative information associated with corresponding target end combined object information;

and respectively acquiring the derivative information associated with the corresponding target end object aiming at each source end object.

79. The apparatus according to claim 73, wherein the information output unit, when outputting the obtained derived information, is configured to:

determining a language environment corresponding to the obtained derived information according to the personal information of the user;

displaying the derived information based on the determined language context.

80. The apparatus of claim 73, wherein the information output unit is further configured to:

positioning the derived information needing to be highlighted according to the personal information of the user;

the located derivative information is highlighted.

81. The apparatus of claim 73, wherein the information output unit is further configured to:

and generating or changing related reminding events according to the personal information of the user and/or the acquired derived information.

82. The apparatus according to any one of claims 42 to 55, wherein the object information translation unit is configured to:

acquiring source end object information to be sent to a receiver;

obtaining target information of a target end object corresponding to the source end object based on the language environment of the receiving party and the obtained source end object information;

the information output unit, when displaying the target information, is configured to:

and sending the obtained target information to the receiving party.

83. An electronic device, comprising a memory and a processor;

the memory is used for storing a computer program;

the processor, when executing the computer program, performs the method of any of claims 1-41.

84. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1-41.

Background

Machine translation is a technology that automatically translates one language into another. Compared with manual translation, machine translation offers high translation efficiency, low learning and translation costs, and easy extension to new languages. With the vigorous development of industries such as international trade and international tourism, machine translation technology has made international communication increasingly convenient for users, and its role in such communication is becoming ever more important.

Depending on the underlying algorithmic framework, current machine translation techniques can be broadly classified into the following three approaches:

1) rule-based machine translation

Linguists write the translation rules, resolve rule conflicts, and maintain the rule base; when a sentence is input, a translation is generated according to the translation rules.

2) Statistical machine translation

Word alignments between parallel sentence pairs are learned automatically from a parallel corpus, translation rules are extracted, translation probabilities are estimated, a translation model is built, and its parameters are tuned. When a sentence is input, N translation candidates are generated according to the translation rules; the candidates are scored and ranked by the translation model and the language model, and the top-ranked candidate is used as the final translation.
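For illustration only, the candidate-scoring step described above might be sketched as follows. The tm_score and lm_score functions stand in for a translation model and a language model, and the log-linear weights are placeholder assumptions, not values prescribed by this document.

```python
# Minimal sketch of ranking N translation candidates with a log-linear
# combination of translation-model and language-model scores; the scoring
# functions and weights are illustrative assumptions.
def rank_candidates(candidates, tm_score, lm_score, tm_weight=0.6, lm_weight=0.4):
    """Return the candidates sorted best-first by combined score."""
    scored = [(tm_weight * tm_score(c) + lm_weight * lm_score(c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]

# The candidate ranked first is used as the final translation:
# best_translation = rank_candidates(n_best_list, my_tm_score, my_lm_score)[0]
```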

3) Neural network machine translation

Network parameters are learned automatically from a parallel corpus using a recurrent-neural-network training method. When a sentence is input, a translation is generated automatically by the trained network.
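As a rough, non-normative sketch of how such a trained network produces a translation, the loop below greedily decodes one target token at a time. The encoder and decoder objects and their encode/step methods are assumed interfaces, not part of this document.

```python
# Minimal greedy-decoding sketch for an encoder-decoder translation network.
# encoder.encode() and decoder.step() are hypothetical interfaces.
def greedy_translate(source_tokens, encoder, decoder, bos="<s>", eos="</s>", max_len=100):
    state = encoder.encode(source_tokens)           # encode the source sentence once
    output, token = [], bos
    for _ in range(max_len):
        token, state = decoder.step(token, state)   # predict the next target token
        if token == eos:
            break
        output.append(token)
    return output
```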

According to the type of input, current machine translation technology can be divided into:

1) text machine translation

The user inputs text, and machine translation is performed directly on that text.

2) Speech machine translation

A voice input signal is detected, speech recognition generates the corresponding text, and machine translation is then performed on that text.

3) Picture machine translation

A picture input is detected, optical character recognition extracts the text contained in the picture, and machine translation is then performed on that text.
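A minimal sketch of this OCR-then-translate pipeline is given below; ocr() and translate_text() are hypothetical stand-ins for whatever character-recognition engine and text translator are actually used.

```python
# Minimal sketch of picture machine translation: OCR first, then text MT.
def translate_picture(image_bytes, ocr, translate_text, target_lang="en"):
    recognized_text = ocr(image_bytes)          # optical character recognition
    if not recognized_text:
        return ""                               # nothing recognizable in the picture
    return translate_text(recognized_text, target_lang)
```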

The inventor has found that this passive translation mode cannot truly meet users' actual needs, because existing machine translation technology generally focuses only on translation itself and translates whatever content the user inputs. Moreover, although the input may be text, voice, or a picture, the object actually translated is always text, whether recognized from voice or extracted from a picture. When only the text is translated, the user sometimes still cannot really understand the translated object, so the user's translation needs are not genuinely met.

Disclosure of Invention

To address the above shortcomings of the prior art, the invention provides a method and device for translating object information and acquiring derived information, which expand the range of machine translation objects, enhance the applicability of translation, and meet users' needs for translating objects.

The invention provides an object information translation method, which comprises the following steps:

translating to obtain target end object information corresponding to the source end object based on the obtained source end object information;

and outputting the target end object information.

Preferably, the language environment corresponding to the source object is different from the language environment corresponding to the target object.

Preferably, the translating, based on the obtained source object information, to obtain target object information corresponding to the source object includes:

identifying a source object to be translated based on the obtained source object information;

and translating the identified source end object to obtain corresponding target end object information.

Preferably, the identifying a source object to be translated includes:

extracting image features and/or text features based on the obtained source end object information;

and identifying the source object to be translated according to the extracted image features and/or text features.

Preferably, the identifying a source object to be translated according to the extracted image feature and/or text feature includes:

and identifying the source end object to be translated according to the preset object identification model and the extracted image feature and/or text feature.

Preferably, the object recognition model is pre-constructed by:

and aiming at each object class, training to obtain an object recognition model corresponding to the object class based on the image features and/or text features of the sampling objects belonging to the object class.

Preferably, the identifying a source object to be translated according to a preset object recognition model and the extracted image features and/or text features includes:

identifying the object type of the source end object to be translated according to the extracted image features and/or text features;

and identifying a source end object to be translated according to the object identification model corresponding to the object type and the extracted image feature and/or text feature.
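By way of a non-limiting sketch, the two-stage recognition described above can be pictured as follows, assuming a coarse category classifier and a dictionary of per-category recognition models, each exposing a hypothetical predict() method.

```python
# Minimal sketch of two-stage source-object recognition:
# 1) classify the object category from the extracted features,
# 2) apply the recognition model pre-trained for that category.
def identify_source_object(features, category_classifier, per_class_models):
    category = category_classifier.predict(features)    # e.g. "dish", "medicine", "sign"
    recognizer = per_class_models[category]              # model trained on samples of that category
    return recognizer.predict(features)                  # the concrete source object
```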

Preferably, for the identified source object, translating to obtain corresponding target object information includes:

determining language environments corresponding to a source end object and a target end object respectively;

and translating to obtain target end object information corresponding to the source end object based on the language environments respectively corresponding to the source end object and the target end object.

Preferably, the language environment corresponding to the source object is determined according to at least one of the following:

the language environment corresponding to the position detected by the position detection module, the language environment identified from the acquired source object information, the language environment corresponding to the source object searched in a preset object knowledge map database and the language environment set by a user;

the language environment corresponding to the target object is determined according to at least one of the following items:

the language environment corresponding to the position detected by the position detection module, the pre-designated language environment, the language environment set by the user and the language environment determined based on the personal information of the user.
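For illustration, resolving a language environment from the signals listed above might look like the sketch below; the fallback order shown is an assumption, since this document only enumerates the possible sources.

```python
# Minimal sketch of choosing a language environment from several candidate signals.
def resolve_locale(*candidates, default="en"):
    """Return the first available (non-empty) language environment."""
    for locale in candidates:
        if locale:
            return locale
    return default

# Example (assumed priority order): detected position, locale recognized from the
# source object information, knowledge-graph lookup, then an explicit user setting.
# source_locale = resolve_locale(position_locale, recognized_locale,
#                                knowledge_graph_locale, user_setting)
```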

Preferably, the translating, based on the language environments corresponding to the source object and the target object respectively, to obtain target object information corresponding to the identified source object includes:

selecting a corresponding object alignment model based on the language environments respectively corresponding to the source end object and the target end object;

and translating to obtain target end object information corresponding to the identified source end object based on the selected object alignment model.

Preferably, translating, based on the selected object alignment model, to obtain target object information corresponding to the identified source object, includes:

acquiring text features and/or image features corresponding to the source end object based on the source end object information;

and translating to obtain target end object information corresponding to the identified source end object based on the acquired text characteristics and/or image characteristics and the selected object alignment model.

Preferably, the object alignment model is pre-constructed by:

determining text features and/or image features between a sampling source end object and a corresponding sampling target end object;

and training to obtain an object alignment model according to the determined text features and/or image features.
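One possible, purely illustrative reading of applying such an alignment model is nearest-neighbour matching in a shared feature space: the source object's text/image features are embedded and compared against candidate target-end objects. The embed functions and the target catalogue are assumed inputs, not defined by this document.

```python
# Minimal sketch of object alignment by embedding similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def align_object(source_features, target_catalogue, embed_source, embed_target):
    """Pick the target-end object whose embedding best matches the source object."""
    source_vec = embed_source(source_features)
    return max(target_catalogue, key=lambda obj: cosine(source_vec, embed_target(obj)))
```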

Preferably, the object information translation method further includes:

optimizing the object alignment model based on at least one of:

and the user aims at the feedback information of the target end object information obtained by translation, the language environment corresponding to the source end object and the language environment corresponding to the target end object.

Preferably, the source end objects are multiple, and the multiple source end objects belong to a combined object of the same category;

the target end object information corresponding to the source end object is obtained through translation, and the target end object information comprises at least one of the following items:

aiming at a source end combined object corresponding to the plurality of source end objects, translating to obtain corresponding target end combined object information;

and respectively translating each source end object to obtain the corresponding target end object information.

Preferably, when there are a plurality of target objects, the outputting the target object information includes:

and arranging and sorting the target end object information of each target end object, and outputting the sorted information.

Preferably, the object information of each target end is sorted according to at least one of the following information:

correlation with a source object, user behavior for target object information, attribute information of a target object, and user personal information.
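The ordering of multiple target-end objects could, for example, be realized as a weighted score over the four signals listed above; the weights and the per-signal scoring functions in this sketch are illustrative assumptions.

```python
# Minimal sketch of sorting target-end object information before output.
def sort_target_objects(targets, relevance, behavior, attributes, profile_match,
                        weights=(0.4, 0.2, 0.2, 0.2)):
    w_rel, w_beh, w_attr, w_prof = weights
    def score(obj):
        return (w_rel * relevance(obj) + w_beh * behavior(obj)
                + w_attr * attributes(obj) + w_prof * profile_match(obj))
    return sorted(targets, key=score, reverse=True)
```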

Preferably, the source object information includes at least one of:

multimedia information corresponding to the source object, text information identified from the multimedia information corresponding to the source object, position information corresponding to the source object, searched information related to the source object, and information related to the source object input by a user.

Preferably, the source object information includes multimedia information corresponding to a source object;

acquiring source object information by the following method:

and acquiring multimedia information in real time through multimedia acquisition equipment, and taking the multimedia information acquired in real time as source object information.

Preferably, the source end objects are multiple;

the object information translation method further includes:

detecting the selection operation of a user for the target end object information obtained by translation;

and positioning a source object corresponding to the target object information selected by the user in the multimedia information collected in real time.

Preferably, outputting the target object information includes:

acquiring multimedia information in real time through multimedia acquisition equipment;

and positioning a target end object corresponding to the source end object in the multimedia information acquired in real time.

Preferably, before translating the information of the target object corresponding to the source object based on the obtained source object information, the method further includes:

predicting the translation intention of a user, and pre-storing a processing model for off-line translation according to the translation intention of the user;

based on the obtained source object information, translating to obtain target object information corresponding to the source object, including:

and translating to obtain target end object information corresponding to the source end object by utilizing the stored processing model based on the obtained source end object information.
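A minimal sketch of this offline flow is given below: a predicted translation intention selects which processing model to cache in advance, and later requests are served from the cache without network access. The model_store/run interfaces and the intent keys are hypothetical placeholders.

```python
# Minimal sketch of pre-storing a processing model for offline translation.
class OfflineTranslator:
    def __init__(self, model_store):
        self.model_store = model_store   # loads/downloads processing models by key
        self.cached = {}

    def prefetch(self, predicted_intent):
        """Pre-store the model matching the predicted translation intention."""
        if predicted_intent not in self.cached:
            self.cached[predicted_intent] = self.model_store.load(predicted_intent)

    def translate(self, source_object_info, intent):
        """Translate with the pre-stored model; no network access is needed."""
        return self.cached[intent].run(source_object_info)

# Usage: translator.prefetch(("menu", "ko", "en")) ahead of a trip, then
# translator.translate(info, ("menu", "ko", "en")) while offline.
```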

Preferably, the translation intention of the user is predicted according to at least one of the following information:

user schedule, user personal information, equipment environment information and equipment motion state.

Preferably, the outputting the target object information includes:

outputting the object information of the target end by at least one of the following modes: text, image, audio, video.

Preferably, the outputting the target object information includes:

performing adaptive adjustment on the target object information based on at least one of the following items, and outputting the adjusted target object information:

the device type of the device, the storage state of the device, the current network condition, the power state of the device, and the personal information of the user.

Preferably, the object information translation method further includes:

acquiring derived information associated with the target end object and/or derived information associated with the source end object;

and outputting the acquired derivative information.

Preferably, obtaining the derived information related to the target object includes:

searching attribute information corresponding to the preset attribute of the target object from a preset object knowledge map database;

confirming the searched attribute information as the derivative information related to the target end object;

acquiring derived information associated with the source object, including:

searching attribute information corresponding to the preset attribute of the source object from a preset object knowledge map database;

and confirming the searched attribute information as the derived information associated with the source object.
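For illustration, the knowledge-graph lookup described above can be sketched with in-memory dictionaries; the preset attributes per category and the graph structure are assumptions standing in for the actual object knowledge map database.

```python
# Minimal sketch of deriving information from an object knowledge graph.
PRESET_ATTRIBUTES = {                       # assumed preset attributes per object category
    "medicine": ["indications", "dosage", "contraindications"],
    "dish": ["ingredients", "spiciness", "allergens"],
}

def derived_information(knowledge_graph, obj_id, category):
    """Return {attribute: value} for the preset attributes of the object's category."""
    entry = knowledge_graph.get(obj_id, {})
    return {attr: entry[attr]
            for attr in PRESET_ATTRIBUTES.get(category, [])
            if attr in entry}
```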

Preferably, the preset attribute is determined according to a category of the target end object and/or the source end object.

Preferably, obtaining the derived information related to the target object includes:

determining derived information related to the target end object based on the position information corresponding to the target end object;

acquiring derived information associated with the source object, including:

and determining the derived information related to the source object based on the position information corresponding to the source object.

Preferably, if the derived information associated with the target end object and the derived information associated with the source end object are obtained, outputting the obtained derived information includes:

positioning the derived information needing to be highlighted according to the correlation between the derived information associated with the target end object and the derived information associated with the source end object;

and outputting the acquired derivative information, and highlighting the positioned derivative information.
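One simple, assumed way to locate the derived information worth highlighting is to compare the two sides attribute by attribute and flag shared attributes whose values differ; this correlation rule is an illustration, not a definition from this document.

```python
# Minimal sketch of locating derived information to highlight when both the
# source-end and target-end derived information are available.
def attributes_to_highlight(source_derived, target_derived):
    shared = set(source_derived) & set(target_derived)
    return [attr for attr in shared if source_derived[attr] != target_derived[attr]]

# Example: a dosage or ingredient that differs between the familiar domestic
# object and the foreign target-end object would be flagged for highlighting.
```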

Preferably, the source end objects are multiple, and the multiple source end objects belong to a combined object of the same category;

acquiring derived information associated with the target end object information obtained by translation includes at least one of the following:

aiming at source end combined objects corresponding to a plurality of source end objects, acquiring derivative information associated with corresponding target end combined object information;

and respectively acquiring the derivative information associated with the corresponding target end object information for each source end object.

Preferably, outputting the derived information includes:

determining a language environment corresponding to the obtained derived information according to the personal information of the user;

displaying the derived information based on the determined language context.

Preferably, the object information translation method further includes:

positioning the derived information needing to be highlighted according to the personal information of the user;

the located derivative information is highlighted.

Preferably, the object information translation method further includes:

and generating or changing related reminding events according to the personal information of the user and/or the acquired derived information.
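As a purely illustrative example of generating such a reminding event, the sketch below turns a dosage found in the derived information of a translated medicine into a recurring reminder; the Reminder type and the dosage-parsing rule are assumptions.

```python
# Minimal sketch of creating a reminder event from acquired derived information.
from dataclasses import dataclass

@dataclass
class Reminder:
    title: str
    times_per_day: int

def reminder_from_derived_info(object_name, derived_info):
    """Create a reminder if the derived information contains a dosage, else None."""
    dosage = derived_info.get("dosage")              # e.g. "3 times a day"
    if not dosage:
        return None
    first = dosage.split()[0]
    times = int(first) if first.isdigit() else 1
    return Reminder(title=f"Take {object_name}", times_per_day=times)
```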

Preferably, translating, based on the obtained source object information, to obtain target object information corresponding to the source object, includes:

acquiring source end object information to be sent to a receiver;

translating to obtain target end object information corresponding to the source end object based on the language environment of the receiver and the obtained source end object information;

outputting the target end object information, including:

and sending the target end object information obtained by translation to the receiving party.

The invention also provides a derivative information acquisition method, which comprises the following steps:

determining derived information associated with the object based on the obtained object information;

and outputting the determined derivative information.

Preferably, the determining derived information associated with the object based on the obtained object information includes:

identifying a corresponding object based on the acquired object information;

searching attribute information corresponding to the preset attribute of the identified object from a preset object knowledge map database;

and confirming the searched attribute information as the derivative information related to the object.

Preferably, the preset attribute is determined according to an object class of the object.

Preferably, the determining derived information associated with the object based on the obtained object information includes:

identifying a corresponding object based on the acquired object information;

and determining derivative information related to the object based on the position information corresponding to the object.

Preferably, there are a plurality of objects, and the plurality of objects belong to a combined object of a same category;

determining derivative information associated with the object, including at least one of:

for a combined object corresponding to a plurality of objects, determining derivative information associated with the combined object;

for a plurality of objects, derived information associated with each object is acquired.

Preferably, the outputting the determined derivative information includes:

determining a language environment corresponding to the obtained derived information according to the personal information of the user;

displaying the derived information based on the determined language context.

Preferably, the derivative information obtaining method further includes:

positioning the derived information needing to be highlighted according to the personal information of the user;

the located derivative information is highlighted.

Preferably, the derivative information obtaining method further includes:

and generating or changing related reminding events according to the personal information of the user and/or the acquired derived information.

Based on the object information translation method, the invention also provides an object information translation device, which comprises:

the object information translation unit is used for translating and obtaining target end object information corresponding to the source end object based on the obtained source end object information;

And the information output unit is used for outputting the target end object information translated by the object information translation unit.

Based on the derivative information acquisition method, the invention also provides a derivative information acquisition device, which comprises:

a derived information acquisition unit for determining derived information associated with the object based on the acquired object information;

and the information output unit is used for outputting the derived information determined by the derived information acquisition unit.

The technical scheme of the invention translates the object itself, not merely its text, thereby addressing the problem that text translation alone cannot fully cover the translation task. A foreign object unfamiliar to the user can be translated into the corresponding domestic object the user is familiar with; or, with the translation direction reversed, a domestic object can be translated into the corresponding object of the target country. Compared with existing approaches whose translation object is only text, the scheme of the invention can therefore meet users' needs for translating objects, expand the range of machine translation objects, and enhance the applicability of translation.

The scheme of the invention also introduces the concept of derived information: the object can be analyzed and derived information about related objects provided, enhancing the user's understanding of the object information.

Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.

Drawings

FIG. 1 is a schematic flow chart of an object information translation method according to a first embodiment of the present invention;

FIG. 2 is a diagram illustrating single-object input according to a second embodiment of the present invention;

FIG. 3 is a diagram illustrating multi-object input according to a second embodiment of the present invention;

FIG. 4 is a schematic diagram of interactive object translation with single-object augmented reality according to a third embodiment of the present invention;

FIG. 5 is a schematic diagram of the derived translation of a single object according to a fourth embodiment of the present invention;

FIG. 6 is a schematic diagram illustrating an automatic comparison between a source end object and a target end object according to a fourth embodiment of the present invention;

FIG. 7 is a diagram illustrating multi-object derived translation according to a fourth embodiment of the present invention;

FIG. 8 is a schematic diagram illustrating automatic comparison of the two ends of multiple objects according to a fourth embodiment of the present invention;

FIG. 9 is a schematic diagram of personalized derivative information according to a fifth embodiment of the present invention;

FIG. 10 is a schematic diagram illustrating location-based translation result disambiguation according to a fifth embodiment of the present invention;

FIG. 11 is a diagram of object translation in a social network according to a fifth embodiment of the present invention;

FIG. 12 is a diagram illustrating the derived translation of a slogan according to the fifth embodiment of the present invention;

FIGS. 13(a), 13(b), and 13(c) are schematic diagrams of the derived translation automatic reminding function according to the fifth embodiment of the present invention;

FIGS. 14(a) and 14(b) are schematic diagrams of multi-modal output according to the fifth embodiment of the present invention;

FIGS. 15(a) and 15(b) are schematic diagrams illustrating translation of object information on a smart watch according to a sixth embodiment of the present invention;

FIG. 16 is a schematic diagram of the derived translation automatic reminding function on a smart watch according to a sixth embodiment of the present invention;

FIGS. 17(a) and 17(b) are schematic diagrams illustrating translation of object information on smart glasses according to a sixth embodiment of the present invention;

FIGS. 18(a) and 18(b) are schematic diagrams of machine translation of a signboard applied to smart glasses according to a sixth embodiment of the present invention;

FIG. 19 is a diagram illustrating collection of user behavior logs according to a sixth embodiment of the present invention;

FIG. 20 is a diagram illustrating a crowdsourcing translation feedback mechanism according to a sixth embodiment of the present invention;

FIG. 21 is a diagram illustrating answers input by a user according to a sixth embodiment of the present invention;

FIG. 22 is a diagram illustrating normalization of incomplete input according to a seventh embodiment of the present invention;

FIGS. 23(a), 23(b), and 23(c) are translation diagrams of dishes in a specific example of the seventh embodiment of the present invention;

FIGS. 24(a), 24(b), 24(c), and 24(d) are schematic diagrams illustrating the translation of dishes in another specific example of the seventh embodiment of the present invention;

FIG. 25 is a diagram illustrating a database categorized according to the user's intention in accordance with a seventh embodiment of the present invention;

FIG. 26 is a schematic structural view of an object information translation apparatus according to an eighth embodiment of the present invention;

FIG. 27 is a flowchart illustrating a derived information obtaining method according to a ninth embodiment of the present invention;

FIG. 28 is a schematic structural diagram of a derivative information acquiring apparatus according to a ninth embodiment of the present invention.

Detailed Description

The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

As used in this application, the terms "module," "device," and the like are intended to encompass a computer-related entity, such as but not limited to hardware, firmware, a combination of hardware and software, or software in execution. For example, a module may be, but is not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. For example, an application running on a computing device and the computing device may both be a module. One or more modules may reside within a process and/or thread of execution and a module may be localized on one computer and/or distributed between two or more computers.

The inventor of the present invention found that in existing translation schemes, although the input may be text, voice, or a picture, what is translated is only text, including text recognized from voice or extracted from a picture. In practical applications, if only the text is translated, the user may still not really understand the translated object. For example, for the names of medicines in foreign drug stores, or trade names with strongly localized naming in foreign shops, the user may have difficulty understanding them even after the names are translated. Moreover, in extreme cases, the content the user wishes to translate may contain no text at all, such as the various road signs encountered while driving abroad. Therefore, text translation cannot fully cover the requirements of the translation task, and translating text alone cannot really meet the user's translation needs.

Therefore, the inventors of the present invention considered that target end object information corresponding to a source end object can be obtained by translation based on the acquired source end object information. For example, the source end object to be translated may be identified based on the obtained source end object information; the identified source end object is then translated to obtain the corresponding target end object information, which is output. In this way, foreign source objects that are unfamiliar to the user, such as medicines, foods, cosmetics, and road signs, can be translated into corresponding domestic target objects that the user is familiar with; or, with the translation direction reversed, a domestic source object can be translated into the corresponding target object of the target country, thereby meeting the user's need to translate objects, expanding the range of machine translation, and enhancing the applicability of translation.

The technical scheme of the invention is explained in detail in the following with the accompanying drawings.

Example one

An embodiment of the present invention provides an object information translation method, as shown in fig. 1, a specific process of the method may include the following steps:

s101: and identifying the source object to be translated based on the acquired source object information.

As will be understood by those skilled in the art, in the present invention, an "object" refers to an individual whose form can be captured by a sensor and described in language.

In the scheme of the invention, the object to be translated is called a source end object, and the translated object is called a target end object. There may be one or more source end objects to be translated; when there are multiple source end objects, they may be combined objects belonging to the same category, or they may be objects belonging to different categories. The translation schemes for a single source end object and for multiple source end objects are described in detail in the second embodiment.

The source object information refers to information related to the source object, and may include at least one of the following modalities: text, pictures, audio, video, etc.

In practical application, the source object information may be captured directly by the terminal, for example by a camera. Alternatively, transmitted information about the source object may be obtained from a network or another device. The obtained source object information may include at least one of: multimedia information corresponding to the source object, text information recognized from that multimedia information, position information corresponding to the source object, searched information related to the source object, and information related to the source object input by the user.

When the source object information includes multimedia information corresponding to the source object, the source object information may be obtained in the following manner: and acquiring multimedia information in real time through multimedia acquisition equipment, and taking the multimedia information acquired in real time as source object information.

The multimedia information corresponding to the source object may include, but is not limited to, at least one of the following: object images, object audio information, object video information, and the like.

The searched source object related information included in the source object information may be information related to the source object searched from the network side. For example, when the recognition result is inaccurate, or when the source object information is incomplete, the related information may be searched (from a network side search or from a locally stored resource), or the user may be prompted to input the source object related information in order to accurately recognize the source object to be translated.

The position information corresponding to the source object contained in the source object information can be used to optimize a processing model (such as an object alignment model) for translation, so as to obtain a more accurate translation result according to the position information.

Further, the source object information may also include user personal information, so that personalized translation can be performed based on the user's personal information.

The terminal includes mobile devices such as smartphones and tablets (PADs), and may also include wearable devices such as smart watches and smart glasses.

After the source object information is acquired, image features and/or text features can be extracted based on the acquired source object information; and identifying the source object to be translated according to the extracted image features and/or text features. For example, the source object to be translated may be identified according to a preset object recognition model, the extracted image features and/or text features.

The object recognition model is constructed in advance, and specifically can be constructed in the following manner:

collecting multi-modal data of each sampling object aiming at a plurality of preset sampling objects; extracting image features and/or text features of a sampling object from multi-modal data; and learning and training to obtain an object recognition model based on the image features and the text features of the sampled object.

Wherein the multi-modal data of the sampled object comprises at least one of:

and sampling characters, pictures, audio, video and the like related to the object.

Further, the object recognition model may be pre-constructed by: and aiming at each object class, training to obtain an object recognition model corresponding to the object class based on the image features and/or text features of the sampling objects belonging to the object class.

In this way, after the image features and/or the text features are extracted based on the obtained source object information, the object class of the source object to be translated can be identified according to the extracted image features and/or text features; and identifying a source end object to be translated according to the object identification model corresponding to the object type and the extracted image feature and/or text feature.
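
In case it helps make the two-stage recognition above concrete, the following is a minimal, hypothetical Python sketch: a category-level model first predicts the object class from the extracted image/text features, and a per-category model then identifies the concrete source end object. The prototype vectors and object identifiers are illustrative stand-ins for trained models, not part of the patented implementation.

```python
import math
from typing import Dict, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# Hypothetical "models": per-category and per-object prototype feature vectors
# (image + text features concatenated) learned offline from sampled objects.
CATEGORY_PROTOTYPES: Dict[str, List[float]] = {
    "drug": [0.9, 0.1, 0.0],
    "food": [0.1, 0.9, 0.1],
}
OBJECT_PROTOTYPES: Dict[str, Dict[str, List[float]]] = {
    "drug": {"aspirin_DE": [0.95, 0.05, 0.0], "ibuprofen_DE": [0.85, 0.2, 0.0]},
    "food": {"friso_milk_stage1": [0.1, 0.95, 0.05]},
}

def identify_source_object(features: List[float]) -> Tuple[str, str, float]:
    # Step 1: recognize the object category of the source end object.
    category = max(CATEGORY_PROTOTYPES,
                   key=lambda c: cosine(features, CATEGORY_PROTOTYPES[c]))
    # Step 2: use the recognition model of that category to pick the concrete object.
    candidates = OBJECT_PROTOTYPES[category]
    best = max(candidates, key=lambda o: cosine(features, candidates[o]))
    return category, best, cosine(features, candidates[best])

if __name__ == "__main__":
    print(identify_source_object([0.92, 0.1, 0.0]))  # -> ('drug', 'aspirin_DE', ...)
```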

The inventor of the invention finds that the machine translation quality has high dependence on the normalization degree of the input text and the scale of the database and the model, so that the conventional commercial machine translation technology generally needs the support of a network and a cloud service and text normalization processing. However, for many extreme conditions, such as limited network, insufficient power of the device, incomplete input text/voice/image information, etc., the current machine translation technology is not considered, which results in poor user experience.

Therefore, the invention can provide optimization schemes for the object information translation method under such extreme conditions, including a scheme for handling incomplete input object information, a scheme for saving data traffic and battery, and object information translation output schemes for devices under different conditions; details are given in the seventh embodiment below.

In consideration of the scene with incomplete information, the source object to be translated may not be identified based on the obtained source object information, or the identification result is inaccurate.

Therefore, in order to ensure the integrity of the input information of the object and the accuracy of the recognition result, in the first embodiment of the present invention, after the image feature and/or the text feature is extracted based on the obtained source object information, if the source object to be translated is not identified based on the extracted image feature and/or text feature, other information (such as text and/or image information) related to the source object to be translated may be obtained based on the extracted image feature and/or text feature. And then, accurately identifying the source end object to be translated according to a preset object identification model, the extracted image characteristics and/or text characteristics and the acquired character and/or picture information.

The text and/or picture information related to the source object to be translated can be acquired by at least one of the following modes:

performing a web search based on the extracted image features and/or text features;

based on the extracted image features and/or text features, matching and searching the locally pre-stored pictures and/or characters;

and acquiring source object related information input by a user, such as character and/or picture information.

For example, the terminal device converts the collected picture, text, or audio information into text and then performs object recognition. If the source object information collected by the terminal device is not enough to identify the object, a web search engine is started to search for and match pictures and text, obtaining more pictures and text related to the object; the obtained pictures and text are then filtered, normalized, and recognized. If the object still cannot be identified, more pictures or text are input and the process is repeated until the source object is identified or the number of repetitions exceeds a threshold. In practical application, this suits the characteristics of dish translation: common dishes (cooked food) have no product packaging, and in extreme cases the menu lists only the dish names; with this translation scheme for incomplete input object information, matching and recognition of the source dish can still be achieved.
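
The fallback behaviour described above can be summarized as a simple retry loop. The sketch below is illustrative only; `recognize`, `search_more_info`, and `ask_user_for_input` are hypothetical callables standing in for the recognition model, the web/local search, and the user prompt.

```python
from typing import Callable, Optional

def identify_with_fallback(info: dict,
                           recognize: Callable[[dict], Optional[str]],
                           search_more_info: Callable[[dict], dict],
                           ask_user_for_input: Callable[[], dict],
                           max_retries: int = 3) -> Optional[str]:
    """Retry recognition, enriching the input each round, until success or a limit."""
    for _ in range(max_retries + 1):
        obj = recognize(info)            # try to identify the source end object
        if obj is not None:
            return obj
        # Not identified: gather more pictures/text (web search, local match,
        # or user input), merge after filtering/normalization, and try again.
        extra = search_more_info(info) or ask_user_for_input()
        info = {**info, **extra}
    return None                          # repetition count exceeded the threshold
```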

S102: and translating the identified source end object to obtain corresponding target end object information.

Specifically, the language environments corresponding to the source object and the target object respectively can be determined; and then, translating to obtain target end object information corresponding to the source end object based on the language environments respectively corresponding to the source end object and the target end object.

The language environment corresponding to the source end object is different from the language environment corresponding to the target end object.

In practical applications, the language environment may be expressed as a language category. For example, when text content exists in the source object and the target object, the language kind of the text content is different. Alternatively, when the source object and the destination object do not contain text, such as only a pattern, the language types of the environments in which the source object and the destination object are located are different, for example, traffic signs in different countries.

In practical application, after the source end object is identified, the translation direction can be determined according to the source language of the source end object and the target language of the translation. The source language specifically refers to the language environment corresponding to the source end object; the target language specifically refers to the language environment corresponding to the target end object that corresponds to the source end object.

In the first embodiment of the present invention, the language environment corresponding to the source object may be determined according to at least one of the following:

the language environment corresponding to the position detected by the position detection module, the language environment identified from the acquired source object information, the language environment corresponding to the source object searched in a preset object knowledge map database and the language environment set by the user. For example, the country and language of the source object are automatically detected by a GPS (Global Positioning System) module.

And the language environment corresponding to the target object can be determined according to at least one of the following items:

the language environment corresponding to the position detected by the position detection module, the pre-designated language environment, the language environment set by the user and the language environment determined based on the personal information of the user.

For example, the language environment of the system may be specified in advance as the language environment corresponding to the target object. Or, when the behavior that the user sets the target end language on the terminal equipment is detected, the target end language set by the user is used as the language environment corresponding to the target end object; and if the behavior that the user sets the target terminal language is not detected, the language environment of the default equipment system is the language environment corresponding to the target terminal object.
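
As a rough illustration of how these signals might be combined, the sketch below resolves the source and target language environments by falling back through the available sources; the priority order is an assumption for the example, not a requirement of the method.

```python
def resolve_source_language(user_setting=None, gps_language=None,
                            recognized_language=None, knowledge_graph_language=None):
    """Pick the source-side language environment from the first available signal."""
    for candidate in (user_setting, gps_language,
                      recognized_language, knowledge_graph_language):
        if candidate:
            return candidate
    return None

def resolve_target_language(user_setting=None, profile_language=None,
                            system_default="en"):
    """Pick the target-side language; fall back to the device system language."""
    return user_setting or profile_language or system_default

if __name__ == "__main__":
    src = resolve_source_language(gps_language="de", recognized_language="de")
    tgt = resolve_target_language(profile_language="zh")
    print(f"translation direction: {src} -> {tgt}")
```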

In the first embodiment of the present invention, after the language environments corresponding to the source object and the target object are respectively determined, the corresponding object alignment model may be selected based on the language environments corresponding to the source object and the target object, respectively.

And then, translating to obtain target end object information corresponding to the identified source end object based on the selected object alignment model. Specifically, a text feature and/or an image feature corresponding to the source object may be obtained based on the source object information; and translating to obtain target end object information corresponding to the identified source end object based on the acquired text characteristics and/or image characteristics and the selected object alignment model.

Wherein the object alignment model is pre-constructed by:

determining text features and/or image features between a sampling source end object and a corresponding sampling target end object; and training to obtain an object alignment model according to the determined text features and/or image features.

In practical application, multi-modal data of a plurality of sampling source end objects and multi-modal data of sampling target end objects corresponding to the sampling source end objects in different translation directions can be obtained in advance; extracting image features and/or text features from the multi-modal data; determining text features and/or image features between the sampling source end object and the sampling target end object based on the extracted image features and/or text features; and learning and training to obtain an object alignment model according to text features and/or image features between the sampling source end object and the sampling target end object. Wherein the multi-modal data comprises at least one of: text, pictures, audio and video related to the object.
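
The following minimal sketch shows one way such an alignment model could be applied at translation time: a model is selected by the (source language, target language) pair and returns scored candidate target end objects for the identified source end object. The lookup-table form and all identifiers are assumptions made purely for illustration.

```python
from typing import Dict, List, Tuple

# Hypothetical alignment tables keyed by language pair; each maps a source object
# to candidate target objects with alignment scores learned from sampled pairs.
ALIGNMENT_MODELS: Dict[Tuple[str, str], Dict[str, List[Tuple[str, float]]]] = {
    ("de", "zh"): {
        "aspirin_DE": [("aspirin_CN", 0.92), ("paracetamol_CN", 0.40)],
    },
}

def translate_object(source_obj: str, src_lang: str, tgt_lang: str,
                     top_k: int = 3) -> List[Tuple[str, float]]:
    model = ALIGNMENT_MODELS.get((src_lang, tgt_lang), {})   # pick model by direction
    candidates = model.get(source_obj, [])
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:top_k]

if __name__ == "__main__":
    print(translate_object("aspirin_DE", "de", "zh"))
```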

Preferably, in order to improve the translation quality of the object, in the first embodiment of the present invention, the object alignment model is optimizable. Specifically, after the target end object information is obtained through the translation of the object information, the object alignment model may be optimized according to at least one of the following items of information:

and the user aims at the feedback information of the target end object information obtained by translation, the language environment corresponding to the source end object and the language environment corresponding to the target end object.

For example, user behavior data for the target end object information may be collected and used as feedback information to update and optimize the object alignment model. Specifically, the position information of the source end object can be detected by the terminal device, and whether this position is the place of origin of the source end object can be judged automatically from the position characteristics; if so, the position information is added to the knowledge graph of the source end object. The alignment model parameters are updated by collecting user log data such as the click rate and approval rate on the translation candidate list: if a certain target end object obtains a higher click rate or approval rate, its alignment probability is increased; otherwise, it is decreased.
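
Assuming the alignment scores behave like probabilities, the feedback update could look roughly like the sketch below: a clicked or approved candidate is nudged upward and the distribution is renormalized. The step size and update rule are illustrative, not the patented formula.

```python
def update_alignment(scores: dict, clicked: str, step: float = 0.05) -> dict:
    """scores: {target_object_id: alignment_probability} for one source object."""
    updated = dict(scores)
    if clicked in updated:
        updated[clicked] = min(1.0, updated[clicked] + step)  # reward the chosen candidate
    total = sum(updated.values()) or 1.0
    return {k: v / total for k, v in updated.items()}         # renormalize

if __name__ == "__main__":
    print(update_alignment({"aspirin_CN": 0.6, "paracetamol_CN": 0.4},
                           clicked="aspirin_CN"))
```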

Preferably, in the first embodiment of the present invention, when there are a plurality of source objects to be translated, after the source objects to be translated are identified, the category of each source object may also be detected. And if all the source end objects to be translated belong to the combined objects of the same category, all the source end objects are used as the source end combined objects for translation. If the source end objects to be translated do not belong to the same category, each source end object can be independently translated to obtain the information of the target end object corresponding to each source end object.

Specifically, when a plurality of source end objects belong to a plurality of combined objects of the same category, the target end object information corresponding to the source end object is obtained through translation, and the translation can be implemented by at least one of the following:

translating the source end combined object corresponding to the plurality of source end objects to obtain the corresponding target end combined object information;

and respectively translating each source end object to obtain the corresponding target end object information.

In practical applications, one or more target end objects corresponding to the source end object may be used. Therefore, when there are a plurality of source objects, the related information of one or more target objects corresponding to each source object can be translated for each source object.

Preferably, in the first embodiment of the present invention, before the target object information corresponding to the source object is obtained through translation based on the obtained source object information, the translation intention of the user may be predicted, and the processing model for offline translation is stored in advance according to the translation intention of the user.

Thus, the target object information corresponding to the source object can be translated as follows: and translating to obtain target end object information corresponding to the source end object by utilizing the stored processing model based on the obtained source end object information.

The translation intention of the user can be predicted according to at least one item of information:

user schedule, user personal information (such as interests and hobbies), environment information of the equipment, and motion state of the equipment.

The predicted translation intent may include: the action track and/or the source object to be translated.

In practical applications, the processing model for offline translation may include: an object recognition model for off-line translation, an object knowledge map database, an object alignment model, and the like. The object knowledge map database for off-line use can be set according to the predicted object type and action track of the source object to be translated in the translation intention, and the object knowledge data of other object types with small relevance does not need to be downloaded, so that the scale of the database is reduced.

For example, a preset model and database in the system may be first compressed and filtered, and then the database may be classified according to the categories of objects. The terminal predicts the translation intention of the user in advance and downloads the relevant model and database which are filtered, compressed and classified in advance under the condition of not consuming the mobile network data traffic of the user.

Therefore, when the terminal device detects that mobile network data traffic is currently in use, search and translation can preferentially use the models and databases already downloaded to the device, reducing communication between the terminal device and the cloud service. Meanwhile, because the scale of the database is greatly reduced, the search space in the translation process shrinks and a large amount of computation is avoided, thereby saving power. The object information translation scheme for saving data traffic and battery is described further in the seventh embodiment.
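
A hedged sketch of this traffic- and battery-saving behaviour follows: the device prefetches only the compressed per-category resources matching the predicted translation intention, and at lookup time prefers the local copy when on mobile data. The keyword mapping, file names, and callables are all assumptions for illustration.

```python
def predict_intent(schedule_entries):
    """Very rough intent prediction: map schedule keywords to object categories."""
    keyword_to_category = {"pharmacy": "drug", "restaurant": "food", "drive": "sign"}
    return {cat for entry in schedule_entries
            for kw, cat in keyword_to_category.items() if kw in entry.lower()}

def prefetch_on_wifi(intent_categories, download):
    """Download only the filtered/compressed resources for the predicted categories."""
    for category in intent_categories:
        download(f"object_kg_{category}.db")      # per-category knowledge graph slice
        download(f"alignment_{category}.model")   # per-category alignment model

def lookup(obj, local_db, cloud_lookup, on_mobile_data):
    # Prefer the on-device database when mobile data is in use, to save traffic/power.
    if on_mobile_data and obj in local_db:
        return local_db[obj]
    return cloud_lookup(obj)

if __name__ == "__main__":
    print(predict_intent(["14:00 visit pharmacy", "19:00 drive to hotel"]))
```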

S103: and outputting the object information of the target end.

In the first embodiment of the present invention, the target end object information can be output in a multi-modal manner, for example in at least one of the following forms: text, image, audio, and video. Further, the manner of outputting the target end object information can be adaptively adjusted based on at least one of: user operation, the environment of the terminal, network conditions, terminal type, and battery level.

The target end object information includes, but is not limited to, at least one of: a picture, name, position information, price information, category information, and associated audio/video information of the target end object.

In practical application, when the source object information is acquired by the terminal device, the translated target object information may be displayed and output in the following manner:

displaying source end object information and target end object information in a split screen mode; or

And displaying the target end object information in the capture area of the source end object information by utilizing the augmented reality display technology.

Preferably, in the first embodiment of the present invention, the target object information obtained by translation may be adaptively adjusted based on at least one of the following items, and the adjusted target object information may be output:

the device type of the device, the storage state of the device, the current network condition, the power state of the device, and the personal information of the user.

Preferably, in the first embodiment of the present invention, the object information at the target end can be obtained as follows: acquiring source end object information to be sent to a receiver; and translating to obtain target end object information corresponding to the source end object based on the language environment of the receiving party and the obtained source end object information. And then, the target object information obtained by translation can be sent to a receiving party.

Therefore, the translation method provided by the invention can be applied to a social network to perform adaptive object information translation. During translation, personal information of social network users, such as region and nationality, can be collected to determine the language environment corresponding to the source end object and the language environment corresponding to the target end object, that is, to determine the translation direction of the object information; a corresponding translation result is then given for each translation direction. The translation direction is determined by the language environments corresponding to the source end object and to the target end object; in a social setting, for the same source end object, different target language environments can be determined according to the different regions and nationalities of the recipients viewing the translation result.

Regarding the scheme for translating the object information in the social network, see the fifth embodiment provided later.

In practical application, when a plurality of target objects are provided, the information of each target object can be output after being arranged and sorted. Specifically, the object information of each target end can be sorted according to at least one of the following items of information: correlation with a source object, user behavior for target object information, attribute information of a target object, and user personal information.

The user personal information includes information such as user preferences, and may also include information such as a user current location (which can be obtained from a device location). The attribute information of the target object may include attributes such as price; according to the personal information of the user and the attribute information of the target object, the purchase possibility of the user can be comprehensively judged.

For example, when there are a plurality of target objects, the plurality of target objects may be arranged based on the correlation with the source object, the frequency of user clicks, the ease of purchase, and the like, so that the target objects are arranged in order of priority. The click frequency of the user can be obtained by the following method: collecting operation logs of the user on the target end object, and extracting the number of clicks of the related object from the log data, thereby calculating the frequency of clicks of each target end object. The above method for acquiring the purchase difficulty level can comprehensively consider the distance between the current terminal position and the purchase position, or the purchase price.
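
As one possible reading of this ranking, the sketch below combines relevance, click frequency, and ease of purchase (derived from distance and price) into a weighted score; the weights and the ease-of-purchase formula are arbitrary assumptions, not values prescribed by the method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    relevance: float     # similarity to the source end object, in [0, 1]
    clicks: int          # clicks extracted from the user operation logs
    distance_km: float   # distance from the current device position to a purchase location
    price: float

def rank(candidates):
    total_clicks = max(1, sum(c.clicks for c in candidates))
    def score(c: Candidate) -> float:
        click_freq = c.clicks / total_clicks
        ease = 1.0 / (1.0 + c.distance_km) / (1.0 + c.price / 100.0)
        return 0.5 * c.relevance + 0.3 * click_freq + 0.2 * ease
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    cs = [Candidate("brand A", 0.9, 120, 2.0, 35.0),
          Candidate("brand B", 0.8, 300, 0.5, 40.0)]
    print([c.name for c in rank(cs)])
```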

In order to present the translation result more intuitively, that is, to respond to the user's needs in real time, in the first embodiment of the present invention, if multimedia information is collected in real time by a multimedia collection device and used as the source object information, then, when there are multiple source objects, a user selection of a translated target object can be detected and the source object corresponding to the selected target object information can be located in the multimedia information collected in real time. Alternatively, a user selection of a source object may be detected; if a selection of a certain source object is detected, the target object information corresponding to the selected source object can be located and displayed according to a preset output mode.

Preferably, in order to improve translation quality, in the first embodiment of the present invention, a translation result with a crowdsourcing translation feedback weight may also be obtained based on the obtained source object information; and adjusting target end object information corresponding to the source end object according to the translation result with the crowdsourcing translation feedback weight.

The translation result with the crowdsourcing translation feedback weight can be obtained in the following way:

sending crowdsourcing translation requests aiming at source object information to a plurality of preset answering users; collecting translation results fed back by all answering users aiming at the crowdsourcing translation request; and determining the crowdsourcing translation feedback weight of each translation result according to the category of the answering user and the occurrence frequency of the translation result.
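
The sketch below shows one hypothetical way to turn these two factors into a crowdsourcing feedback weight: each answer contributes a weight that depends on the category of the answering user, and identical translations accumulate weight with their frequency. The category weights are invented for illustration.

```python
from collections import defaultdict

ANSWERER_WEIGHT = {"expert": 2.0, "local_resident": 1.5, "ordinary": 1.0}  # assumed

def weighted_translations(answers):
    """answers: list of (translation_text, answerer_category) pairs."""
    weight = defaultdict(float)
    for text, category in answers:
        weight[text] += ANSWERER_WEIGHT.get(category, 1.0)  # frequency x answerer weight
    total = sum(weight.values()) or 1.0
    return sorted(((t, w / total) for t, w in weight.items()),
                  key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    feedback = [("aspirin", "expert"), ("aspirin", "ordinary"), ("painkiller", "ordinary")]
    print(weighted_translations(feedback))
```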

Preferably, in the first embodiment of the present invention, after the target object information is obtained, the output of the target object information may be adjusted based on a user operation on the source object information.

For example, when the user operation is specifically an activation operation of the browsing mode of the target object, the target object information may be output in the following manner: acquiring multimedia information in real time through multimedia acquisition equipment; and positioning a target end object corresponding to the source end object in the multimedia information acquired in real time. For example, capturing text and/or images of multiple objects in real time; and if the captured characters and/or images have the object matched with the target object, identifying and displaying the object by using a display mode corresponding to the browsing mode.

The source end objects are multiple, and when the user operation is specifically the activation operation of the multi-object interaction mode of the target end object, the selected target end object can be detected; and identifying and displaying the source end object corresponding to the selected target end object.

Traditional machine translation technology usually focuses only on the translation itself, whereas in real life users often need not only the translation but also an understanding of the translated content and related knowledge. For example, when the name of a foreign drug is input, traditional machine translation only translates the input; it does not further determine whether the translated content corresponds to a named entity and, if so, whether useful related information such as efficacy, dosage, applicable groups, or even purchase channels should be given. Translating only what the user inputs, this passive translation mode cannot really meet the user's actual needs.

Preferably, in order to enable a user to better understand the translation content, the object information translation method provided by the first embodiment of the present invention may further include a scheme of derivative translation. Specifically, derived information associated with the target end object and/or derived information associated with the source end object may be obtained; and outputting the acquired derivative information.

In the first embodiment of the present invention, the derived information associated with the target object may be obtained by the following method: searching attribute information corresponding to preset attributes of the target object from a preset object knowledge map database; and confirming the searched attribute information as the derivative information related to the target end object.

Accordingly, the derived information associated with the source object can be obtained by: searching attribute information corresponding to a preset attribute of a source object from a preset object knowledge map database; and confirming the searched attribute information as the derivative information associated with the source object.

The preset attribute is determined according to the category of the target end object and/or the source end object.

For example, the derived information of the target-end object may include: and obtaining the key information related to the target object according to the object type of the target object through a pre-constructed object knowledge map database.

Further, the derived information obtained may further include: and comparing information between the source end object and the corresponding target end object, namely displaying the derived information of the source end object and the derived information of the target end object simultaneously. The comparison information is determined by the key information (attribute information of the preset attribute) respectively related to the source object and the corresponding target object. The key information related to the target end object and the key information related to the source end object can be obtained through a pre-constructed object knowledge map database.
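
A minimal sketch of this lookup, under the assumption that the knowledge graph can be read as per-object attribute dictionaries and that the preset attributes are keyed by category, might look as follows; the categories, attributes, and values are made up for illustration.

```python
PRESET_ATTRIBUTES = {
    "drug": ["efficacy", "dosage", "side_effects", "contraindicated_population"],
    "food": ["ingredients", "allergens", "calories", "shelf_life"],
}

# Object knowledge graph database flattened to {object_id: {attribute: value}}.
OBJECT_KG = {
    "aspirin_DE": {"efficacy": "pain relief", "dosage": "500 mg", "side_effects": "stomach upset"},
    "aspirin_CN": {"efficacy": "pain relief", "dosage": "300 mg", "side_effects": "stomach upset"},
}

def derived_info(object_id: str, category: str) -> dict:
    """Attribute values of the preset attributes for this category."""
    attrs = PRESET_ATTRIBUTES.get(category, [])
    record = OBJECT_KG.get(object_id, {})
    return {a: record[a] for a in attrs if a in record}

def comparison(source_id: str, target_id: str, category: str) -> dict:
    """Pair source and target values for the comparison display."""
    src, tgt = derived_info(source_id, category), derived_info(target_id, category)
    return {a: (src.get(a), tgt.get(a)) for a in sorted(set(src) | set(tgt))}

if __name__ == "__main__":
    print(comparison("aspirin_DE", "aspirin_CN", "drug"))
```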

Therefore, more preferably, in the first embodiment of the present invention, the derived information associated with the target end object may also be determined based on the position information corresponding to the target end object; and determining the derived information related to the source object based on the position information corresponding to the source object.

In the first embodiment of the present invention, if the derived information associated with the target end object and the derived information associated with the source end object are obtained, the derived information that needs to be highlighted can be located according to the correlation between the derived information associated with the target end object and the derived information associated with the source end object; and outputting the acquired derivative information, and highlighting the positioned derivative information.

When a plurality of source end objects form combined objects of the same category, obtaining the derivative information associated with the translated target end object information may include at least one of the following:

aiming at source end combined objects corresponding to a plurality of source end objects, acquiring derivative information associated with corresponding target end combined object information;

and respectively acquiring the derivative information associated with the corresponding target end object information for each source end object.

Further, in order to make the output result more meet the personalized requirements of the user, in the first embodiment of the present invention, after the derivative information is obtained, the obtained derivative information may be output in the following manner:

determining a language environment corresponding to the acquired derived information according to the personal information of the user;

and displaying the acquired derivative information based on the determined language environment.

Preferably, in the first embodiment of the present invention, the derived information that needs to be highlighted can be located according to the personal information of the user; the located derivative information is highlighted.

For example, the derived information of the target end object may be ranked according to the degree of association between the user personal information and the derived information of the target end object; and positioning the derived information of the target end object needing to be highlighted according to the rating.

Furthermore, the related reminding event can be generated or changed according to the personal information of the user and/or the acquired derivative information. Specifically, whether a reminding event related to a target end object exists in the terminal equipment can be detected; if not, a reminding event can be generated according to the personal information of the user and/or the acquired derivative information. And if so, changing the reminding event related to the target end object according to the personal information of the user and/or the acquired derivative information.

Wherein the user personal information may include at least one of: average work and rest time of the user, health state of the user, motion state of the user and the like; the derived information of the target end object comprises at least one of the following: time intervals, start and stop times, number of repetitions, etc. The average work and rest time of the user may be determined by at least one of: on-off time, user daily activity time. The user health status may be determined by at least one of: data recorded in health applications, blood pressure pulse sensors, etc. The user motion state may be determined by at least one of: the GPS records the motion trail, the gyroscope records the motion speed and the like.

In practical application, whether a reminding event related to a target end object exists in the terminal equipment can be detected in the following modes: and judging whether a reminding event related to the target end object exists or not according to the content correlation between the reminding event set in the terminal equipment and the target end object. The derived translation scheme for event reminding based on the personal information of the user will be described in detail in the following fifth embodiment.
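
As a hedged illustration of this reminder logic, the sketch below derives reminder times from a dosing interval in the derived information and the user's waking hours, and creates or overwrites the reminder for the target end object; all values and the scheduling rule are assumptions.

```python
from datetime import datetime, timedelta

def build_reminders(dose_interval_hours: float, wake_up: str = "08:00",
                    sleep: str = "22:00", date: str = "2021-01-01"):
    """Reminder times within waking hours, spaced by the dosing interval."""
    start = datetime.fromisoformat(f"{date} {wake_up}")
    end = datetime.fromisoformat(f"{date} {sleep}")
    times, t = [], start
    while t <= end:
        times.append(t.strftime("%H:%M"))
        t += timedelta(hours=dose_interval_hours)
    return times

def sync_reminder(existing_reminders: dict, object_name: str, dose_interval_hours: float):
    # If no related reminder exists, this creates one; otherwise it changes it.
    existing_reminders[object_name] = build_reminders(dose_interval_hours)
    return existing_reminders

if __name__ == "__main__":
    print(sync_reminder({}, "aspirin_CN", dose_interval_hours=6))
```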

Because different users vary widely in language, culture, and cognition, the geographic location of the same user when using machine translation on a mobile or wearable device also changes from time to time. In the object information translation method provided by the invention, the position information of translation occurrence and the personal information of the user are considered during translation, so that the translation result and the derived translation are closer to the actual requirements of the user.

The object-to-object translation method provided by the invention can be applied to terminal equipment. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. As used herein, a "terminal" and a "terminal Device" may also be a communication terminal, such as a PDA (Personal Digital Assistant), an MID (Mobile Internet Device), and/or a smart phone, and may also be a wearable smart terminal, such as a smart watch and smart glasses.

In the object information translation method provided by the embodiment of the invention, what is translated is the object itself, not just its text, which expands the range of machine translation, enhances the applicability of the translation task, and meets the user's need to translate objects that lack textual description.

Furthermore, the scheme of the invention provides a concept of derivative translation, not only provides a translation result, but also analyzes the translation content, provides derivative information, supplements the translation content and enhances the understanding of the user to the translation content.

Example two

In the second embodiment of the present invention, the object information translation method provided in the first embodiment of the present invention will be described from the perspective of a single object to be translated (or referred to as single object translation) and from the perspective of a plurality of source objects to be translated (multi-object translation). For convenience of description, the "object information translation" may be referred to herein as an "object translation".

The following description takes as an example a Chinese user who uses the object information translation scheme proposed by the present invention on a terminal device to translate unfamiliar foreign goods. The terminal here may be a mobile terminal with a photographing function and a touch screen, on which information related to object translation is displayed. The display manner may be, but is not limited to, displaying the input information and the output results simultaneously on one screen.

Taking fig. 2 as an example, the interface of the terminal device is divided into an upper part and a lower part, where the upper part is the source end object capture area and the lower part is the target end object display area. The source end object capture area provides the entrance for object translation and also displays text translations within the image; the target end object display area mainly presents the system output of object translation and can collect the user's behavior logs on the translation results as feedback.

1. Single-object translation

The single object translation process provided by the second embodiment of the invention can be divided into the following two steps:

s1: terminal equipment acquires source end object information

The second embodiment of the present invention is described taking, as a specific example, the terminal device obtaining image information of the source end object. When detecting that the user has started the object translation function, the terminal device, automatically or based on a user operation, starts the device camera to acquire the image and other information captured by the current view finder; further, thumbnails of the images can be displayed at the same time; further, based on a detected confirmation operation by the user, the image and other information of the source end object are determined.

Or, the information such as the image and the text captured by the current viewfinder can be transmitted to the server; further, the information transmitted to the server may be information processed (e.g., filtered and/or text recognized, etc.) by the terminal device; further, based on the detected confirmation operation of the user, information such as an image, text, and the like of the source object is determined.

For example, when the terminal device detects that the user starts object translation, the device camera is automatically started and a view frame appears in the center of the source end object capture area. If the device detects that the user clicks the 'confirm' button, the image captured by the current view frame is transmitted to the server and a thumbnail of the image is displayed, for example at the side or bottom edge of the source image capture area. If the device detects that the user clicks the 'clear' button, all current thumbnails and the corresponding images uploaded to the server are cleared. If the device detects that the user clicks the 'confirm' button multiple times, the images are uploaded to the server in sequence and the thumbnails are arranged in order.

The captured image may be viewed or deleted through its thumbnail. When an operation of the user selecting a thumbnail is detected, the image corresponding to that thumbnail is transmitted from the server to the terminal and displayed in the view frame, and the camera is temporarily closed. If a long press on the image is detected, an option of whether to delete the image is presented; if the user chooses to delete, the deletion operation is activated, the thumbnail is deleted on the terminal, and the corresponding image is deleted on the server.

While the camera is temporarily closed, the terminal provides a way to turn it back on, which may be a return button placed in a suitable area of the screen, multiple taps on a certain area of the screen, or another interrupt mechanism.

S2: displaying translated target-side object information

The main task of this step is to display the translation results on the terminal. The translation results include, but are not limited to, at least one of: pictures, names, position information, price information, category information, related audio/video, and the like of the target end objects, of which there may be one or more; further, the text translation in the source end object capture area may also be included.

The display mode is as follows: the related content of the listed target end objects may be output in the target end object display area, which may further include multiple target end object candidates. These candidates may be arranged based on relevance to the source end object, user click frequency, ease of purchase, and the like, so that they appear in order of priority. The user click frequency is obtained by collecting the user's operation logs on target end objects and extracting the number of clicks on the relevant objects from the log data, from which the click frequency of each target end object is computed. The ease of purchase may comprehensively consider the distance between the current terminal position and the purchase location, or the purchase price.

In the source end object capture area, if the area is currently in the thumbnail-viewing state, i.e. the camera is off, the text translation obtained through the following steps of analyzing the image, recognizing the object, and translating is displayed on the image in the view frame; if the camera is on, the translated text obtained through those steps is displayed directly in the view frame in AR (Augmented Reality) mode.

The above translation result is obtained through the following steps of analyzing the image, recognizing the object, and translating.

Analyzing objects in the image, comprising: identifying objects in the image by image feature extraction, character identification and the like; then the characters in the image are translated from the source language to the target language, and the source object in the image is translated to the target object.

Through the above S1, the terminal device acquires the picture of the object, or transmits the picture of the object to the translation system on the server, where object recognition is performed first: image features are extracted from the image according to the features required by the object recognition model, OCR (Optical Character Recognition) is performed to extract text features, and the object is recognized according to the object recognition model, yielding the source end object to be translated and the text in the image. Then, the source end object is translated into candidate target end objects ranked by score through the object alignment model, and the source text in the image is translated into the target language through a text translation model and a language model.

The source language can be determined in at least the following ways: the GPS module of the terminal device automatically detects the country and language of the source end object; the language of the text in the image is detected automatically; after the source end object is identified, its origin is looked up in the object knowledge graph database; or the user sets the source language. The language of the target end object can be determined in at least the following ways: the GPS module of the terminal device automatically detects the country and language; the language used by the user is determined from the user's personal information, for example from the nationality of the user of the terminal device, to obtain the target language; or the user sets the language, for example, if a behavior of the user setting the target language on the terminal device is detected, the target language set by the user is used as the language of the target end object, and if no such behavior is detected, a pre-specified language (such as the default device system language) is used. As shown in fig. 2, the upper right of the display screen shows the source and target languages, and whether the user has set them is detected; if the terminal does not detect a user setting, the source language is displayed on the left and the target language on the right according to the above method, and if the terminal detects that the user has set the target language to Chinese, the language of the target end object is configured to Chinese accordingly.

In the second embodiment of the present invention, the object recognition model, the object alignment model, and the object knowledge graph database are constructed as follows. First, large-scale multi-modal data of source end objects and target end objects, including pictures and text, are collected; text features and image features are extracted; and an object category discrimination model and an object recognition model within each category are built through supervised learning. Then, text description similarity scores between source end objects and target end objects are calculated according to an existing translation model between the source and target languages, and finally text features and image features are extracted and the object alignment model is obtained through supervised learning. The entities and relationship attributes of objects are extracted from the collected large-scale multi-modal object data and expressed as object triples <entity, value, relationship>. An object search index is then established according to preset object categories to form the object knowledge graph data. The "relationship" here has a specific meaning for different objects; for example, for the knowledge that "the color of object A is blue", a triple "<A, blue, color>" can be established. In the present invention, the relationship attributes of different objects are introduced in detail in the derived translation; in the derived translation scheme, the relationship is expressed as the key information of the object, i.e. the attribute information of the preset attributes. For example, the key information of a medicine includes efficacy, side effects, dosage, contraindicated population, acquisition mode, and the like; the key information of a food includes ingredients, allergens, calories, applicable people, taste, and the like, where the key information of packaged food also includes place of production, production date, shelf life, and the like, and the key information of unpackaged dishes (meals) or recipes also includes taste, smell, temperature, color, eating method, cooking method, and the like; the key information of cosmetics includes ingredients, place of production, effects, applicable groups, shelf life, usage, and the like.
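
To make the triple representation concrete, the toy example below stores each piece of object knowledge as <entity, value, relationship> and builds a search index grouped by object category; the triples and category assignments are invented for illustration only.

```python
from collections import defaultdict

triples = [
    ("aspirin_CN", "pain relief", "efficacy"),
    ("aspirin_CN", "300 mg", "dosage"),
    ("A", "blue", "color"),          # the example triple <A, blue, color>
]
object_category = {"aspirin_CN": "drug", "A": "toy"}

# Object search index grouped by preset object category.
index = defaultdict(dict)
for entity, value, relationship in triples:
    category = object_category.get(entity, "unknown")
    index[category].setdefault(entity, {})[relationship] = value

print(index["drug"]["aspirin_CN"])   # -> {'efficacy': 'pain relief', 'dosage': '300 mg'}
```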

2. Multi-object translation

In the multi-object translation scheme provided by the second embodiment of the present invention, the terminal device uploads an image of the source objects, or the camera of the terminal device captures images of a plurality of source objects, and the plurality of source objects are identified from the object images and/or the characters on them. The recognition results are then classified automatically: if they form a combined object of the same predefined object category, all source objects can first be translated as a combined object and then each single object within it can be translated; if they do not belong to the same predefined category, each object is translated directly. Finally, the target end objects corresponding to each source end object are displayed on the terminal, sorted by translation score. If the terminal device captures the user's action on a certain object in the view finder, the translation result is adjusted on the terminal in real time to separately display the target end objects corresponding to that source end object, sorted by translation score.

Specifically, the process of multi-object translation provided by the second embodiment of the present invention can be divided into the following two steps:

s3: obtaining source object information

The terminal device captures images of a plurality of source objects by the camera, and the terminal operates in the same manner as in step S1 in the second embodiment. As shown in fig. 3, the terminal apparatus captures images of a plurality of remote controllers.

S4: displaying translated target object information

If the source-end objects are determined to be a plurality of identical objects, this means that multiple copies of the same single object were input; only this single object is translated and output according to step S103 of the first embodiment, or displayed according to step S2 of the second embodiment of the present invention.

If the source-end objects belong to the same class of combined objects, i.e. they are judged to be a combined object within one of the predefined object categories, all source-end objects are first translated as a combined object and then each single object is translated; when the translation results are displayed, the result for the combined object is shown first, followed by the result for each individual object.

If the source-end objects have no obvious combination relation, each object is translated directly and the translation result of each object is then displayed.

In the latter two cases, the third embodiment of the present invention provides an interactive mode for multi-object input: the terminal device captures the user's operation on a source-end object in the viewfinder and adjusts the translation result in real time at the terminal. The adjustment modes include, but are not limited to, separately displaying the target-end objects that correspond to that source-end object sorted by translation score, or identifying the target-end object sequence of that source-end object by ranking it first, highlighting it, framing it, and the like.

In the latter two cases, the third embodiment of the present invention further provides an interactive mode for multi-object input using augmented-reality display, so that the object the user is looking for can be displayed directly in the source-end object capture area.

In the second embodiment of the present invention, whether multiple source-end objects belong to the same category is determined by searching the name of each object in the object knowledge graph database: 1) if the names of the objects are completely identical, they are processed as a single-object input; 2) if the names share the same brand or category prefix, the objects are processed as the same category, for example "Friso milk powder 1 section" and "Friso milk powder 2 section"; 3) otherwise, the picture features and character features of the input objects are analyzed, each object is recognized and located in the knowledge graph, and if they are child nodes under the same brand, such as a face cream and a lotion of one brand, the objects are combined and processed as the same category. In all other cases the objects are processed as different categories.
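A minimal sketch of these three decision rules follows; the helper names (classify_inputs, brand_of) and the prefix heuristic are illustrative assumptions, not the disclosed algorithm.

```python
def classify_inputs(names, brand_of):
    """Decide how multiple recognized source-end objects are grouped.

    names:    recognized object names as found in the knowledge-graph database
    brand_of: hypothetical mapping from an object name to its brand node (or None)
    Returns one of "single", "same_category", "different_categories".
    """
    # Rule 1: all names identical -> treat as a single-object input.
    if len(set(names)) == 1:
        return "single"

    # Rule 2: names share a brand / category prefix,
    # e.g. "Friso milk powder 1 section" and "Friso milk powder 2 section".
    def shared_prefix_len(a, b):
        n = 0
        for x, y in zip(a.split(), b.split()):
            if x != y:
                break
            n += 1
        return n

    if all(shared_prefix_len(names[0], n) >= 1 for n in names[1:]):
        return "same_category"

    # Rule 3: objects are child nodes under the same brand in the knowledge graph,
    # e.g. a face cream and a lotion of the same brand.
    brands = {brand_of(n) for n in names}
    if len(brands) == 1 and None not in brands:
        return "same_category"

    return "different_categories"
```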

In the technical scheme of the second embodiment of the invention, objects that are unfamiliar to users abroad, such as medicines, foods, cosmetics, and road signs, are translated into corresponding objects familiar to users at home, or, with the translation direction exchanged, domestic objects are translated into corresponding objects of the target country. This expands the range of objects machine translation can handle, enhances the applicability of translation tasks, and meets users' translation needs for objects that lack textual description.

EXAMPLE III

In order to present the translation result to the user more intuitively, that is, to feed back the user's requirement immediately and respond quickly, the third embodiment of the present invention provides an interactive object information translation scheme based on augmented reality.

It should be noted that the augmented-reality interactive translation method described herein is not only applicable to object-to-object machine translation; it may also improve other conventional translation methods, such as direct translation based on characters recognized in the image information of a source-end object.

The augmented reality interactive object translation is divided into the following three steps:

Step one: acquiring source-end object information.

Step two: and displaying the translated target end object information.

Step three: and the terminal makes a corresponding response according to the detected user operation.

In the third embodiment of the present invention, the implementation of the first step and the second step can refer to the steps S1 and S2 of the second embodiment, which are not repeated herein.

When the source object information contains multimedia information corresponding to the source object, the source object information can be obtained in the following manner: and acquiring multimedia information in real time through multimedia acquisition equipment, and taking the multimedia information acquired in real time as source object information.

The user operations detected in step three include, but are not limited to: clicking, long-pressing, or frame-selecting information (e.g., recognized text) in the associated image in the source-end capture area; clicking, long-pressing, or frame-selecting a certain target-end object in the target-end display area; clicking a button that starts the interactive mode, and so on. Responses corresponding to user operations include, but are not limited to: adaptive adjustment based on the detected source-end object information, such as identifying the related source-end object and highlighting the characters recognized in the related image; adjusting the display mode of the source-end object based on an operation on the target-end object; or adjusting the display mode of the target-end object based on an operation on the source-end object, for example by displaying it with augmented reality.

Further, step three can adopt, but is not limited to, at least one of the following three modes: direct translation of the source-end object, locating the target-end object, and locating the source-end object.

Mode one: direct translation of the source-end object

The display of the source end and the target end is adjusted according to the detected operation on the source-end object capture area. Specifically, the terminal first detects an operation in the source-end capture area; if the user is detected clicking, zooming, or long-pressing a partial area of the object, the text recognition result and text translation result of that area are displayed prominently in the source-end viewfinder, for example by enlarging the area, highlighting it, and/or framing it. As shown in fig. 2, when the terminal detects that the user clicks, long-presses, or zooms in on the "ingredients" area in the source-end viewfinder, the "ingredients" area presented in the viewfinder is enlarged and all text translations in that area are displayed.

Mode two: locating the target-end object

The display of the source end and/or the target end is adjusted according to the detected user operation on the target-end object display area. Specifically, the terminal monitors the target-end object display area and, if it detects that the user activates the interactive display mode for a certain target-end object, starts the augmented-reality interactive display mode of the target end. Taking fig. 4 as an example: a Chinese user wants to buy a certain milk powder sold in the Netherlands but only has a picture of the milk powder as sold in China; upon arriving in the Netherlands, the user can quickly find the locally available milk powder product through the augmented-reality interactive object information translation method.

Specifically, the processing first goes through step one and step two above: the terminal device detects that the user has uploaded an image of the source-end milk powder, and the image is then analyzed, recognized, and translated at the server end to obtain the image and related text description of the corresponding milk powder object in the Netherlands (i.e. the target-end object).

Further, in step three, the invention provides the user with a "browse mode" option in the translation-result display area. The "browse mode" can be activated in ways including, but not limited to: a browse button provided beside the corresponding object picture on the screen, which can be triggered by operations such as long-pressing or double-clicking the object picture, or by a preset voice command.

When the terminal device detects that the user has activated the "browse mode", its camera is started and the screen proportion occupied by the viewfinder is adjusted to a suitable ratio. The terminal uploads the image captured by the camera in real time; the server performs multi-target recognition on the image and matches the recognized objects against the target-end object. If an object in the image matches the target-end object, that object is highlighted, framed, and/or otherwise identified in the viewfinder by means of augmented reality.

In the third embodiment of the present invention, the plurality of objects (different or identical; see the second embodiment for the specific determination method) in the image uploaded by the terminal in real time are called candidate objects, and the translated target-end object is called the reference object. The matching method mentioned above is as follows: extract the image features and/or text features of the reference object from the object knowledge graph database; extract the image features of each candidate object, perform character recognition on its image, and extract its text features; then measure the similarity between the features of the candidate objects and the reference object. The similarity can be computed with a cosine-similarity method, a kernel method, or a method based on distributed representations.
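For the cosine-similarity variant only, the matching can be illustrated with the following sketch; the threshold value and the feature vectors are assumed inputs supplied by the recognition models described above.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def match_candidates(reference_feats, candidate_feats, threshold=0.8):
    """Return indices of candidate objects similar enough to the reference (target-end) object.

    reference_feats: image and/or text feature vector of the translated target-end object
    candidate_feats: list of feature vectors, one per object recognized in the live image
    threshold:       illustrative cut-off; the real system may rank instead of thresholding
    """
    scores = [cosine(reference_feats, c) for c in candidate_feats]
    return [i for i, s in enumerate(scores) if s >= threshold], scores
```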

Mode three: locating the source-end object

When a plurality of source-end objects are input, the user's selection operation on the translated target-end object information can be detected, and the source-end object corresponding to the selected target-end object information is located in the multimedia information collected in real time.

Specifically, the display mode of the source object capture area may be adjusted according to the operation of the user on the target object display area detected by the terminal, for example, an object that the user wants to find is directly displayed in the source object capture area, which is described with reference to fig. 3 as an example.

The terminal device captures images of a plurality of source-end objects with the camera and, after recognizing, classifying, and translating the multiple objects according to the multi-object translation scheme provided by the second embodiment, displays the translation results in the target-end object display area. The terminal then detects that the user activates the multi-object interaction mode, where the activation modes include, but are not limited to: long-pressing or double-clicking a certain target-end object image, or inputting the name of a certain target-end object by voice. With the multi-object interaction mode active, the terminal detects the selected target-end object and highlights, frames, and/or otherwise identifies the corresponding source-end object in the viewfinder of the source-end object capture area.

Specifically, as shown in fig. 3, when the terminal detects that the user activates the multi-object interaction mode, and detects a selected target object, the terminal performs highlighting or frame selection on the source object corresponding to the selected target object in a view finder of the source object capture area.

After it is judged that the language of the source-end remote controllers is Korean and that their types (television remote controller, air-conditioner remote controller, microwave-oven remote controller and the like) do not belong to the same category, each source-end remote controller is translated separately into an object of the target language. See the second embodiment for the method of determining whether they belong to the same category.

EXAMPLE IV

Considering that traditional machine translation usually focuses only on the translation itself, in real life users often need not only the translation but also an understanding of the translated content and related knowledge. For example, when the name of a foreign drug is input, traditional machine translation simply translates the input; it does not further determine whether the translated content corresponds to a named entity, nor, if so, whether useful related information such as efficacy, dosage, applicable population, or even purchasing channels should be provided. Translating only what the user inputs, this passive translation mode cannot truly satisfy the user's actual needs.

Therefore, in order to enable the user to better understand the translated content, the fourth embodiment of the present invention provides an object information translation scheme that supplies derivative translation. Specifically, derived information associated with the target-end object and/or derived information associated with the source-end object may be obtained and output.

In practical application, attribute information corresponding to preset attributes of the target-end object can be searched in a preset object knowledge graph database, and the found attribute information is taken as the derived information related to the target-end object; likewise, attribute information corresponding to preset attributes of the source-end object can be searched in the database and taken as the derived information related to the source-end object. The preset attributes are determined according to the category of the target-end object and/or the source-end object.
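A minimal sketch of looking up the attribute information of preset attributes is given below, assuming a per-category attribute table and a knowledge-graph lookup function (both illustrative); the same routine would serve both the target-end and the source-end object.

```python
# Preset attributes ("relationships") per object category; an illustrative subset only.
PRESET_ATTRIBUTES = {
    "medicine":  ["efficacy", "side effects", "dosage", "contraindicated population", "acquisition mode"],
    "food":      ["ingredients", "allergens", "calories", "applicable people", "taste"],
    "cosmetics": ["components", "place of production", "effects", "applicable groups", "shelf life", "usage"],
}

def derived_information(kg_lookup, obj_name, category):
    """Fetch attribute values of the preset attributes for one object.

    kg_lookup: hypothetical function (object, attribute) -> value or None,
               backed by the preset object knowledge-graph database.
    """
    info = {}
    for attribute in PRESET_ATTRIBUTES.get(category, []):
        value = kg_lookup(obj_name, attribute)
        if value is not None:
            info[attribute] = value
    return info

# Derived information for both ends would use the same routine:
#   target_info = derived_information(kg_lookup, target_object, category)
#   source_info = derived_information(kg_lookup, source_object, category)
```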

In the fourth embodiment of the present invention, the preset attribute of the object may be a "relationship" attribute, and the attribute information corresponding to the preset attribute is represented as "key information" of the object. For example, the key information of the medicine includes efficacy, side effects, dosage, contraindicated population, acquisition mode and the like; the key information of the food comprises ingredients, allergens, calories, applicable people, tastes and the like, wherein the key information of the packaged food also comprises the production place, the production date, the shelf life and the like, and the key information of the unpackaged dishes (meals) or recipes also comprises the tastes, smells, temperatures, colors, eating methods, cooking methods and the like; the key information of the cosmetics comprises components, production places, effects, applicable groups, quality guarantee period, use modes and the like.

In the fourth embodiment of the present invention, the key information related to the translated object and the comparison information between the source-end object and the target-end object are used as the derived information when describing the object information translation scheme that provides derivative translation.

The key information and the comparison information of the single object and the multiple objects are described below, respectively.

One, derivative translation of a single object

The process of generating a derivative translation for an input single object can be divided into the following steps:

Step one: obtaining source-end object information

The step of obtaining source-end object information in the fourth embodiment may refer to the corresponding step in the second embodiment: the terminal captures the source-end object picture with the camera, or transmits an existing source-end object picture (for example, obtained from a network or another device) to the translation system on the local machine or the server.

For example, as shown in FIG. 5, an "Upload" button is provided at the lower right corner of the source-end object capture area. A user from the United States traveling in China wants to buy a cold medicine; the terminal detects that he has input a picture of his own cold medicine and set the target language to Chinese. The terminal transmits the picture of the source-end cold medicine to the server and indicates that the target language is Chinese, that is, the American cold medicine is to be translated into a Chinese cold medicine.

Step two: displaying translated target object information

In the fourth embodiment, after the target-end object information corresponding to the source-end object is obtained through translation, derived information associated with the target-end object and/or with the source-end object can be obtained and output.

The derived information of the target-end object can be displayed in the target-end object display area of the screen in manners including, but not limited to, at least one of: displaying the information item by item in a text list; representing certain derived information by pictures; linking to other applications of the device by hyperlinks, such as map navigation; and broadcasting or displaying the derived information as audio/video.

Further, the steps of analyzing, identifying the object and generating the derivative information are as follows:

First, after the object is analyzed and recognized according to the method of the second embodiment, it is translated into the target-end object, and the relevant key information is obtained from the knowledge graph of the target-end object.

The construction method of the knowledge graph comprises the following steps: on the basis of the object knowledge map database constructed in the second embodiment, the key information of each object is extracted and marked. The specific method is to label the key information of the object with derivative information according to the category of each object. The key information described here relates to object categories such as: the key information of the medicine comprises efficacy, side effects, dosage, contraindicated population, acquisition mode and the like; the key information of the food comprises ingredients, allergens, calories, applicable people, tastes and the like, wherein the key information of the packaged food also comprises the production place, the production date, the shelf life and the like, and the key information of the unpackaged dishes (meals) or recipes also comprises the tastes, smells, temperatures, colors, eating methods, cooking methods and the like; the key information of the cosmetics comprises components, production places, effects, applicable groups, quality guarantee period, use modes and the like.

In practical application, the derived information related to the target end object can be determined based on the position information corresponding to the target end object.

Second, the derivative translation is generated according to the target language. After the derived information of the target-end object is obtained, the translation system starts the text translation module: if the language of the derived information is the same as the system language of the terminal device, the derived information is output to the terminal device directly; if it differs from the system language of the terminal device (or from the target output language set by the user), the derived information is first translated into that language and then output. As shown in fig. 5, the derived information of the Chinese cold medicine is in Chinese, which does not match the device system language of the American user and therefore needs to be translated into the user's system language.
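The language check and translation routing of the derived information described above can be sketched as follows; translate_text stands in for the text translation module and is an assumed interface, not a real API.

```python
def localize_derived_info(derived_info, info_language, output_language, translate_text):
    """Output derived information in the device system language (or the user-set target language).

    translate_text: hypothetical text-translation function (text, src_lang, tgt_lang) -> text,
                    standing in for the text translation module mentioned above.
    """
    if info_language == output_language:
        return derived_info          # already in the right language, output as-is
    return {key: translate_text(value, info_language, output_language)
            for key, value in derived_info.items()}
```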

Two, automatic comparison of source-end and target-end objects

In order to help the user understand the differences between the source-end object and the target-end object after translation, the fourth embodiment of the present invention further provides a method for comparing the objects at the two ends, implemented by the following steps:

the method comprises the following steps: start-up contrast mode

After the translation system translates the source-end object into the target-end object and displays it on the terminal device according to steps S1 and S2 of the second embodiment, the terminal device provides a mechanism for activating the comparison mode, which includes, but is not limited to, one of the following: a comparison button placed near the target-end object in the target-end object display area, activation by a certain gesture operation, activation by a preset voice command, or automatic triggering by the system.

If the terminal detects that the user has activated the comparison mode for the source-end object and the target-end object, it sends a comparison command for the current source-end and target-end objects to the server.

Step two: displaying contrast information at a terminal device

The display modes on the terminal device include, but are not limited to, one of the following: single-screen display, where a landscape screen can show the two ends left and right as in fig. 6 and a portrait screen can show them top and bottom; split display across two screens, where one screen shows the source-end key information and the other shows the key information of all candidate target-end objects, with the candidate target objects switched by scrolling up and down, sliding left and right, and the like; or voice broadcast of the key information of the source-end and target-end objects. Further, the differing information may be highlighted or framed; for example, the different dosages, usages, and the like of the medicines on the left and right sides of fig. 6 may be highlighted.

Further, the method for acquiring the comparison information is as follows:

and obtaining the key information of the target end object through the step two in the single-object derivative translation scheme, wherein the key information obtaining method of the source end object is similar to the key information obtaining method of the target end object. Specifically, the following are described: searching the source end object from the object knowledge map database, and extracting corresponding information of the source end object by contrasting key information of the target end object, for example, if the target end object is a medicine, the key information is efficacy, side effect, dosage, prohibited population and acquisition mode, the efficacy, side effect, dosage, prohibited population and acquisition mode of the source end object are searched in the knowledge map.

Three, multi-object derivative translation

When it is detected that a plurality of objects are input, the generation and processing of the derived information are as follows:

the method comprises the following steps: terminal equipment captures a plurality of source end object images

This step is identical to step S3 of the multi-object translation in the second embodiment and is not described again here.

Step two: terminal display derived translation

This step adds the display of the derivative translation on the terminal device to the second embodiment; the derived information is added in the manner described in step two of the automatic comparison scheme for source-end and target-end objects.

The derived information of multiple objects is analyzed, identified, generated, and translated as follows:

First, the server performs multi-object recognition on the images; the recognition method is as described in steps S2 and S4 of the second embodiment. After recognition, the translation system judges the recognized objects: if they are completely identical, multiple copies of the same single object were input and the translation system generates derived information only for the target-end object of that single object; if the objects belong to the same class among the predefined object categories, all source-end objects are treated as a combined object, derived information is generated for the translated combined target-end object, and then derived information is generated for the target-end object of each single object; if the objects do not belong to the same class among the predefined categories, derived information is generated directly for the target-end object of each object.

The name of each object is searched in the object knowledge graph database: 1) if the names are completely identical, the objects are processed as a single-object input; 2) if the names share the same brand or category prefix, the objects are processed as the same category (for example, combined), such as the two source-end objects in fig. 7; 3) otherwise, the picture features and character features of the input objects are analyzed, each object is recognized and located in the knowledge graph, and if they are child nodes under the same brand, such as a face cream and a lotion of one brand, the objects are processed as the same category. In all other cases the objects are processed as different categories.

Second, the derivative translation is generated according to the target language; the specific method is the same as the translation method in step two of the automatic comparison scheme for source-end and target-end objects and is not detailed again here. As shown in fig. 7, the derived information of the cosmetics includes, but is not limited to: manufacturer (or place of origin), efficacy (or function), usage steps (or mode of use), recommended purchase location, and the like.

Four, automatic comparison of source-end and target-end objects for multiple objects

If a plurality of objects are input, the comparison between the source-end multiple objects and the translation result proceeds as follows:

the method comprises the following steps: start-up contrast mode

If the input objects belong to a combined object, the system provides a combined-object comparison mode, activated in the same ways as for single-object input: a comparison button placed near the combined target-end object in the target-end object display area, activation by a certain gesture operation, activation by a preset voice command, or automatic triggering by the system.

If the input objects do not belong to a combined object, a single-object comparison mode is provided for each source-end object and its translated object; the activation modes are as described in the step of the "automatic comparison of source-end and target-end objects" scheme and are not repeated here.

Step two: displaying contrast information at a terminal device

The display at the terminal is similar to step two of the automatic comparison scheme for source-end and target-end objects, and specifically includes, but is not limited to, one of the following: single-screen display, where a landscape screen can show the two ends left and right as in fig. 8 and a portrait screen can show them top and bottom; split display across two screens, where one screen shows the source-end key information and the other shows the key information of all candidate target-end objects, with the candidates switched by scrolling up and down, sliding left and right, and the like; or voice broadcast of the key information of the source-end and target-end objects. Further, the differing information may be highlighted or framed; for example, the different amounts, usages, and the like of the cosmetics on the left and right sides of fig. 8 may be highlighted.

In the combined-object comparison mode, the comparison interface first lists the objects contained at the source end and at the target end respectively, and then compares the derived information of the combined objects. In the comparison mode for multiple single objects, the key derived information of each source-end object and its target-end object is compared on the comparison interface.

The comparison information and the content compared are obtained as follows. If the terminal requests the comparison mode for a combined object, the combined source-end object is searched in the knowledge graph and its corresponding information is extracted against the key information of the combined target-end object; the objects contained in the combined source-end object and the combined target-end object are listed. The specific operation is: first pair the members of the source-end and target-end combined objects, where the pairing can be done by computing pairwise similarity over features such as names, effects, and components. If the source-end or target-end combination contains unpaired objects, they are emphasized in a manner including, but not limited to, one of: bold, italic, highlight, and the like. If the terminal requests the comparison mode for multiple single objects, then, according to the pairing result, each successfully paired object pair has its key information compared and extracted according to step two of the automatic comparison scheme for source-end and target-end objects.
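The pairwise pairing of combined objects and the detection of unpaired members can be sketched as below; the greedy matching strategy and the minimum score are illustrative simplifications of the pairwise similarity computation described above.

```python
def pair_combined_objects(source_items, target_items, similarity):
    """Pair the members of a combined source-end object with a combined target-end object.

    similarity: hypothetical function scoring two objects over name / efficacy / component features.
    Returns (pairs, unpaired_source, unpaired_target); unpaired items are the ones to be
    emphasized (bold, italic, highlight, etc.) on the comparison interface.
    """
    pairs, used_targets = [], set()
    for s in source_items:
        # Greedy best match among the not-yet-paired target items.
        best, best_score = None, 0.5   # illustrative minimum score for a valid pair
        for t in target_items:
            if t in used_targets:
                continue
            score = similarity(s, t)
            if score > best_score:
                best, best_score = t, score
        if best is not None:
            pairs.append((s, best))
            used_targets.add(best)
    paired_sources = {p[0] for p in pairs}
    unpaired_source = [s for s in source_items if s not in paired_sources]
    unpaired_target = [t for t in target_items if t not in used_targets]
    return pairs, unpaired_source, unpaired_target
```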

Specifically: the source-end object is searched in the knowledge graph database and its corresponding information is extracted against the key information of the target-end object. For example, as shown in fig. 8, the target-end object is a cosmetics set whose key information is place of production, effect, applicable group, mode of use, usage steps, and acquisition store; the place of production, effect, applicable group, mode of use, usage steps, and acquisition store of the source-end object are then searched in the knowledge graph. If the language of the source-end object in the knowledge graph differs from the terminal device system language (or the target output language set by the user), it is translated into the system language. The key information of the source-end and target-end objects is then transmitted to the terminal device and displayed in the terminal system language. In the example of fig. 8, the similarity between pairs of source-end and target-end objects is computed and the objects are paired according to the method above; if the combined objects contain unmatched members, such as the "eye cream" or "makeup remover oil" in fig. 8, these are shown in highlight, italics, or another emphasized style when the object types contained in the two sets are displayed. Further, the usage steps of the source-end and target-end sets can be displayed. If the terminal sends a single-object comparison command, the similarity between pairs of source-end and target-end objects is likewise computed, the objects are paired, and the paired objects are then displayed side by side for comparison.

Because passive translation is far from meeting users' needs, the fourth embodiment of the invention proposes the concept of derivative translation: it not only provides the translation result but also analyzes the translated content and provides derived information related to it, which supplements the translation and enhances the user's understanding of the translated content.

EXAMPLE V

One, personalized object information translation

In order to enable the translation result to better meet the personalized requirements of the user, the fifth embodiment of the invention provides an object information translation scheme based on the personal information of the user.

For different users, the object information translation method provided by the fifth embodiment of the present invention supplements personalized derived information in the following two steps.

The method comprises the following steps: obtaining source object information

Further, the acquired information may also include the user's personal information, obtained in manners including, but not limited to: the terminal acquires the user's schedule information by analyzing records in the user's calendar and/or memo software; it acquires information about the user's interests and hobbies by analyzing the user's mailbox, short messages, and/or call records; it acquires the user's environment information through the camera, microphone, sound sensor, light sensor, and/or humidity sensor; it acquires information such as the user's motion state through a speed sensor; and it acquires information such as the user's health state through application software.

Step two: displaying translated target object information

Further, before the translated target-end object information is displayed, the derived information is adjusted according to the user's personal information: after the terminal collects the user's personal information, it combines it with the current derivative translation to grade and label the derivative translation content. The specific grading and labeling method is as follows:

and analyzing the relevance between the content of the derivative translation and the user information, scoring the content of the derivative translation according to the relevance of the user information, and highlighting the derivative translation content with high relevance. The correlation degree analysis can be realized by methods of calculating information entropy, KL (Kullback Leibler divergence, relative entropy) divergence and the like.

In the example of fig. 9, if the terminal detects, using the method of step one, that the user's calendar contains dieting and fat-reduction content and that the health application records an allergy to peanuts and the like, then for this user the system highlights the allergens, calories, and similar items in a prominent position of the object's derived translation to remind the user; as shown in fig. 9, such sensitive content is moved forward and highlighted in the derived-information area on the terminal screen. Display modes include, but are not limited to: reordering the content on the terminal screen, highlighting, and bolding; the user can also be reminded by vibration, sound, and other means of the terminal.

Two, optimizing object information translation based on position

Existing machine translation technology does not fully consider the position information and personalized information of the mobile device user, and the translated text is not adjusted according to the occasion of translation or the user's needs. Users in different regions may understand the same concept very differently: for example, the English word "sunscreen" may correspond either to a sun-protection cream or to a sun (tanning) cream; the sun-protection product dominates the Chinese market, so it is commonly translated in the sun-protection sense in China. However, for Chinese users traveling in European or American countries, translating it into the sun-protection sense without distinction is likely to cause misunderstanding and thus wrong purchasing behavior.

Therefore, in the fifth embodiment of the present invention, the user's location may be collected by the device sensors, and the object alignment model used for object translation is optimized according to the user's different locations so as to improve the accuracy of object information translation. The specific process includes:

the method comprises the following steps: obtaining source object information

Further, the acquired information also includes the user's nationality, ethnicity, and the like. The terminal infers the user's nationality and ethnicity by analyzing the user's frequently visited locations and the languages the user uses.

Step two: displaying translated target object information

When translating object information, the system optimizes the object translation result by combining information such as the user's location and nationality. First, an object ambiguity triple <source-end object, source language, target language> is defined: if the target-end object candidates corresponding to the translation of the source-end object do not all denote the same object, the source-end object is ambiguous at the target-language end; for example, "Sunscreen" may be translated either as a sun-protection cream or a sun (tanning) cream, forming the ambiguous object represented by the triple <"Sunscreen", English, Chinese>.

After the translation directions of the source and target ends are determined through step S2 of the second embodiment, the terminal checks against the object ambiguity library whether the current source-end object is ambiguous; if so, the translation model (i.e., the object alignment model) is optimized according to the ambiguity information about that source-end object in the knowledge graph database. The optimization extracts text features of the source-end object and scores the translation candidates with an object ambiguity classifier. The text features here include the derived information of the source-end object, such as "place of origin: Australia; sun protection factor: 4; efficacy: tanning aid". The object ambiguity classifier is constructed by extracting these text features and training a supervised discriminative model over the translation candidates, yielding scores of the same ambiguous object for different translation candidates under different text features; these scores are added to the translation model as features to help the translation system choose the final translation candidate. As shown in fig. 10, the system automatically decides according to the user's nationality whether to apply the ambiguity classifier: if the user is European, the translation result is provided directly; if the user is Asian, the translation system adds the ambiguity classifier score as a feature, disambiguates the translation, and outputs the translation result "sun cream". Alternatively, the derived information may warn the user that the translation of this object may be ambiguous, by means including, but not limited to: highlighting or bolding the ambiguous content, adding a warning icon, popping up a warning window on the terminal screen, or a voice reminder from the terminal.

The object ambiguity library is constructed, and ambiguous objects are automatically identified, as follows: based on the object alignment model construction method described in step S2 of the second embodiment, category analysis is performed on the target-end object candidates corresponding to each source-end object, using the analysis method of step S4 of the second embodiment. If the target candidates do not all belong to the same category, the triple <source-end object, source language, target language> is added to the object ambiguity library. To query whether a translated object is ambiguous, its triple is looked up in the object ambiguity library: if present, the result is positive; otherwise, negative.
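The construction and querying of the object ambiguity library can be sketched as follows; the input format of the alignment candidates and the category_of function are assumptions for illustration.

```python
def build_ambiguity_library(alignment_candidates, category_of):
    """Build the object ambiguity library from the object alignment model.

    alignment_candidates: {(source_object, source_lang, target_lang): [target candidates]}
    category_of:          hypothetical function mapping a target candidate to its category.
    A triple is ambiguous when its target candidates do not all share one category.
    """
    library = set()
    for triple, candidates in alignment_candidates.items():
        if len({category_of(c) for c in candidates}) > 1:
            library.add(triple)
    return library

def is_ambiguous(library, source_object, source_lang, target_lang):
    """Query whether a translated object is an ambiguous object."""
    return (source_object, source_lang, target_lang) in library

# e.g. is_ambiguous(lib, "Sunscreen", "English", "Chinese") would return True when the
# target candidates include both the sun-protection cream and the sun (tanning) cream.
```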

The knowledge graph with ambiguity prompts is constructed as follows: on the basis of the knowledge graph construction method of the second embodiment, the target-end objects corresponding to ambiguous objects are found through the object ambiguity library and the alignment model, and ambiguity marks are added to the knowledge graph.

Three, translation adaptation in social networks

The fifth embodiment of the present invention further provides a scheme for translating object information in a social network, including the following steps:

the method comprises the following steps: obtaining source object information

The invention also provides an entry for starting a group chat function within the object translation system and/or starting object translation within social software. The starting modes include: starting object translation by long-pressing, sliding, and the like, and then uploading the source-end object picture information to the server or the local object translation system according to step S1 of the second embodiment; or providing a translation button in the social software interface and starting object translation upon user operation, where the social software may be group-chat software/applications or any software/application with a conversation function.

Step two: outputting corresponding translation results for each person

In practical application, two cases can be distinguished according to whether the translation is performed on a single side's terminal:

1) Based on the receiver's information, the sender requests the system to translate the object and then sends the picture.

The sender collects the receivers' position information through the terminal GPS/WIFI/3G and the social network, thereby determining the region and nationality for the target-end objects. According to the position of each receiving terminal, a request is sent to the translation system, the source-end object is translated into the corresponding target-end object, and a derivative translation is generated. The translation results, which may include text, pictures, audio, and video, are then sent to the respective receivers.

2) The object picture is sent, and the receiver requests the system to translate the object.

The receiver collects the user's current position through the terminal GPS/WIFI/3G, thereby determining the region and nationality for the target-end object. According to the position of each terminal, a request is sent to the translation system, the object transmitted from the source end is translated into the corresponding target-end object, and a derivative translation is generated.

As shown in fig. 11, when a user at home translates "Friso" in the group chat software (the second UI diagram in fig. 11), the member in Hong Kong, China and Sophie in the Netherlands receive different translation results suited to their localities even though they share the same chat interface: the chat interface of the member in Hong Kong, China displays the picture and derived information of the corresponding object in Hong Kong, China, while Sophie's chat interface in the Netherlands displays the picture and derived information of the corresponding object in the Netherlands.

Four, slogan derivative translation adaptation

When the input object is a slogan or sign, the present invention proposes a method for automatically generating, based on the current location or region, a derivative translation of the current slogan concerning local laws and regulations. The processing can be divided into the following steps:

the method comprises the following steps: obtaining source object information

This step is the same as step S1 of the second embodiment, and is not described here again.

Step two: displaying translated target object information

Further, the derived information for a slogan is analyzed, identified, and generated as follows:

according to step S2 of the second embodiment, an object is recognized, and if the object is a logo, a logo translation is performed and derivative information related to local laws and regulations is generated.

Whether an object is a slogan is judged as follows: a multilingual database of common signs is established; after character recognition, the input object is matched against the sentences in the sign database; if it matches, the current input is judged to be a sign, otherwise it is translated as an ordinary object.

The derived information related to local laws and regulations is generated as follows: the terminal's current region is obtained through GPS/WIFI/3G and the like, and regulation items containing the translated content are queried from a regulation database. The regulation database is constructed by building, for common signs, a database of regulations and policies of different regions; the relevant items can be extracted automatically from texts on law/regulation websites or established by manual labeling.
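A minimal sketch of querying the regulation database by region and matching the translated slogan content might look as follows; the keyword-overlap matching and the database layout are illustrative assumptions, not the disclosed retrieval method.

```python
def regulation_derived_info(region, slogan_translation, regulation_db):
    """Look up local law / regulation items that mention the translated slogan content.

    region:         current region obtained from GPS / WIFI / cellular positioning
    regulation_db:  {region: [regulation text, ...]}, built from legal websites or manual labeling
    Returns the regulation items to attach to the slogan translation as derived information.
    """
    items = regulation_db.get(region, [])
    keywords = set(slogan_translation.lower().split())
    return [rule for rule in items
            if keywords & set(rule.lower().split())]

# e.g. for a "no phone calls" slogan detected in a region whose database contains a rule
# about fines for making calls, that rule would be returned and shown to the user.
```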

According to the different cultural backgrounds of the regions where slogans appear, slogan translation involves the following three cases:

1) The slogan or sign exists in the language-A environment but not in the language-B environment.

If the source-end sign has no corresponding sign at the target-language end, only the characters in the source-end sign are translated and a derivative translation is given. For example, the "foreright" sign in fig. 18(a) exists in Europe; if there is no such sign in China, its text is translated into Chinese and the local meaning and derived information of the sign are output at the terminal's target end.

2) The signs in languages A and B have different patterns but the same meaning.

and extracting picture features and text features according to the method in the second step for translation, and displaying corresponding signboard (not necessarily identical) and derivative translation of the target end by the terminal.

3) The signs in languages A and B have the same pattern but different meanings.

and in the situation, according to the method of 2), outputting the corresponding signboard and derivative information obtained under the combined action of the picture characteristic and the text characteristic with the signboard A, and if no corresponding signboard exists, outputting the meaning and derivative information of the signboard according to 1).

As shown in fig. 12, the terminal detects the input object and the system recognizes it as a "no phone calls" slogan; the terminal then detects the current position, and according to the position information and the slogan text, the local penalty regulation for violating this rule, "offenders are fined 1000 yuan", is retrieved from the regulation database, translated into the terminal system language (or the target language set by the user), and displayed on the terminal screen.

Further, if the slogan relates to the operation of a specific APP, it is associated with that APP; in fig. 12, "prohibit" and "make a call" relate to the terminal's voice-communication APP, so the terminal automatically associates that APP and pops up the warning message "Turn off call APP".

Five, personalized event reminders based on derivative translation

The fifth embodiment of the invention also provides a personalized event reminding scheme, which can set reminders for the user and further adjust them automatically.

For example, when translating an object such as a medicine, if the translation system detects that the derivative translation contains information such as a time interval or a quantity, a personalized reminder event is set for the user; further, if it is detected that the user has already set a reminder, the reminder can be adaptively changed based on the comparison information in the derived information. The reminding method can be applied to smart terminals such as mobile phones, wearable devices, or other smart devices.

The method comprises the following specific steps:

the method comprises the steps of obtaining source end object information

This step is based on step S1 of the second embodiment; in addition, the detected information includes whether a reminder event has already been set on the terminal device, and whether the content of that reminder event is related to the current translation content is determined from the reminder content.

Whether a translation-related reminder exists among the reminders on the current device is determined from the relevance between the translation content and the device's reminder content, for example by computing the similarity of text keywords between them; the computation can use cosine similarity, edit distance, and the like. For example, in fig. 13(a) the event keyword "dose" is extracted from the derivative translation; if the current device detects existing reminder events with the subjects "appointment", "medicine taking", and "birthday", and semantic similarity computation finds that the "medicine taking" reminder is the most related to the "dose" event, then the "medicine taking" reminder is determined to be the reminder related to the current translation content. Besides keyword similarity, whether the content is related to an existing reminder can also be determined by distributed semantic representations of the text or by text topic similarity.
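The keyword-based relatedness check between the derivative translation and the existing device reminders can be sketched as below; the similarity function is assumed to be any of the measures mentioned above (cosine similarity, edit distance, semantic similarity) and is not specified here.

```python
def most_related_reminder(event_keyword, existing_reminders, similarity):
    """Find the existing reminder most related to a keyword extracted from the derivative translation.

    event_keyword:      e.g. "dose", extracted from the derivative translation of a medicine
    existing_reminders: reminder subjects already on the device,
                        e.g. ["appointment", "medicine taking", "birthday"]
    similarity:         hypothetical similarity function (cosine over embeddings,
                        edit distance, topic similarity, ...)
    Returns (best_reminder, score); the caller decides whether the score is high enough
    to treat that reminder as related to the current translation content.
    """
    scored = [(r, similarity(event_keyword, r)) for r in existing_reminders]
    return max(scored, key=lambda x: x[1]) if scored else (None, 0.0)
```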

Step two: displaying translated target object information

Based on step S2 of the second embodiment, the system performs automatic text analysis on the translated content; if the derivative translation is found to contain a time interval, a reminder event is automatically generated according to the user information detected on the terminal, or an existing reminder event is adaptively changed. The reminder content is automatically extracted from the derivative translation and includes, but is not limited to: name, dose, time interval, or other key information about the object.

Specifically: through step one, if the terminal finds no related reminder among the current device's reminders, it prompts the user whether to create a new reminder, automatically generates different reminder-object options from the text analysis result, and displays the possible options on the terminal. As shown in fig. 13(b), two reminder objects are obtained automatically by text analysis of the derived information, and a reasonable reminder-time suggestion is given according to the time interval in the text and the user's average waking hours. As shown in fig. 13(c), a reasonable reminder time is automatically arranged from the user's waking hours detected by the terminal and the time interval obtained from text analysis. If the terminal finds a related reminder among the current device's reminders, it prompts the user whether the reminder time needs to be changed and likewise arranges a reasonable reminder time from the detected waking hours and the analyzed time interval.

The terminal detects the user's waking hours in manners including, but not limited to: detecting the morning start time and night sleep time of the terminal device and computing their averages; or detecting the user's daily activity time through device sensors, such as a temperature sensor or gyroscope on a watch, and computing an average.
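A minimal sketch of averaging the detected waking hours and placing dose reminders at the time interval extracted from the derivative translation might look as follows; the example times and the rounding are illustrative.

```python
def average_waking_window(wake_times_h, sleep_times_h):
    """Average morning start and night sleep hours detected over several days."""
    return sum(wake_times_h) / len(wake_times_h), sum(sleep_times_h) / len(sleep_times_h)

def schedule_doses(wake_h, sleep_h, interval_hours):
    """Place dose reminders inside the user's waking hours at the interval from the text.

    interval_hours: time interval extracted from the derivative translation, e.g. every 6 hours.
    Returns reminder times (in hours of the day) within the waking window.
    """
    reminders, t = [], wake_h
    while t <= sleep_h:
        reminders.append(round(t, 1))
        t += interval_hours
    return reminders

# e.g. schedule_doses(*average_waking_window([7, 7.5, 8], [23, 22.5, 23]), 6)
#   -> reminders at roughly 7.5, 13.5 and 19.5 o'clock.
```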

Six, multi-modal output of the translation result

The fifth embodiment of the invention also provides a scheme for multi-modal output of the translation result, where the modalities include text description, pictures, audio, video, and the like; further, the output modality can be adaptively adjusted according to user operations, the terminal's environment, the current network condition, the terminal device type, and the like.

For example, in the normal mode the text description and picture of the target-end object are output, possibly with voice, and if the derivative translation of the target-end object contains a video, as shown in fig. 14(a), the video can also be output. If it is detected that the user has set the child mode, as shown in fig. 14(b), the translation result uses a small amount of text plus pictures and videos. If the current environment is detected to be quiet, such as a library or hospital as shown in fig. 14(b), the translation result uses text description plus pictures; audio or video can still be provided, but a warning is given before it is played. If the current network/data signal is poor or the battery is low, as shown in fig. 14(b), the translation result uses text description and compressed pictures.
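The modality-selection policy described above can be sketched, under assumed rule thresholds and flag names, as follows:

```python
def select_output_modalities(mode, environment, network_ok, battery_ok, has_video):
    """Choose output modalities for the translation result from the detected context.

    All rule thresholds and flag names here are illustrative; the real policy may differ.
    """
    if not network_ok or not battery_ok:
        return ["text", "compressed picture"]          # poor signal or low battery
    if environment == "quiet":                          # library, hospital, ...
        mods = ["text", "picture", "audio (warn before playing)"]
        if has_video:
            mods.append("video (warn before playing)")
        return mods
    if mode == "child":
        return ["short text", "picture"] + (["video"] if has_video else [])
    mods = ["text", "picture", "audio"]                  # normal mode
    if has_video:
        mods.append("video")
    return mods
```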

Because different users vary widely in language, culture, and cognition, and the geographic location of the same user changes over time when machine translation is used on a mobile or wearable device, the fifth embodiment of the invention takes the location where the translation occurs and the user's personal information into account when translating an object, so that the translation result and the derivative translation come closer to the user's actual needs.

EXAMPLE VI

Existing machine translation technology generally focuses on application to mobile devices such as mobile phones and pads, and does not fully consider how machine translation can adapt to multi-user, multi-device scenarios such as wearable devices like watches and glasses so as to bring greater convenience. As wearable devices become increasingly capable of processing data and increasingly popular, how to port machine translation to wearable devices, adapt it to the characteristics of multiple users, and mine the data generated by many users to improve translation quality becomes an important issue.

Further, in view of the increasing popularity of wearable smart devices and their portability, the sixth embodiment of the present invention provides a scheme for performing object translation on multiple smart devices, and a scheme for performing translation for multiple users and improving translation quality through large-scale user data.

First, object information translation applied to the smart watch

As a smart device carried by the user at all times, the watch is more portable than terminals such as a mobile phone. The sixth embodiment of the invention therefore provides an object translation scheme applicable to the smart watch, and further provides a translation scheme that works independently of terminals such as a mobile phone.

The method comprises the following steps: obtaining source object information

The detection, upload, and synchronization of the source object are completed through communication among the watch, other terminal devices (such as a mobile phone/PAD), and the server. The interaction modes of the watch, the other terminal devices, and the server include but are not limited to: 1) the smart watch serves only as an input and/or output device; the camera of the smart watch, or another terminal device, detects the source object, which is transmitted to the terminal device or uploaded to the server for translation, and the translation result is transmitted back to the terminal device and/or the watch; further, the terminal device obtaining the translation result adaptively adjusts it according to the size of the watch dial and transmits it to the watch for display, as shown in figures 15(a) and 15(b), and the display mode on the watch is introduced in step two; 2) the smart watch serves as a smart terminal device; after the watch detects the source object, the source object is passed to the object translation system in the watch, or a picture of the source object is uploaded to the server for translation. Further, the translation result may be adaptively adjusted by the server and returned to the watch.

Step two: displaying translated target object information

Further, part of the derived translation may also be displayed.

When the translation result is returned to the watch, adaptive adjustment is performed, including but not limited to: adapting the size of the object picture, automatically summarizing the descriptive text so that as little text as possible is displayed, and omitting video from the derived translation. The adjustment may also be based on user operation, for example detecting whether the user selects the full text or the abstract, whether the user enables the audio playing setting, whether the user selects the video playing setting, and the like. Further, if the text of the derived translation contains keywords such as a time interval, the text is analyzed and a reminder event is automatically generated, with the reminder time arranged according to the user's work and rest times: the average waking time and resting time of the user are automatically detected by the watch, and a reminder event is then established according to the time interval in the text. As shown in fig. 16, the system detects time intervals in the derived translation, finds several possible time intervals through text analysis, and automatically generates different options to display on the dial. If it is detected that the user selects the adult option, the reminder time is automatically arranged according to the user's average work and rest times and the time interval for adults. The method is the same as step two in the personalized event reminder scheme based on the derived translation and is not repeated here.
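A minimal sketch of the watch-side adaptation, assuming hypothetical field names and helper functions (the real adjustment may be performed on the server or the paired terminal):

```python
def summarize(text, max_chars=60):
    """Very rough abstract: keep the first sentence, capped in length."""
    return text.split(". ")[0][:max_chars]

def resize_to_width(picture, width):
    """Placeholder for scaling the picture to the dial width."""
    return picture

def adapt_for_watch(result, dial_width_px=320, user_prefs=None):
    """Shrink a translation result so it fits a small watch dial: resize the
    picture, abstract the description, and drop video from the derived
    translation (field names are illustrative)."""
    prefs = user_prefs or {}
    return {
        "picture": resize_to_width(result.get("picture"), dial_width_px),
        # Show the summary unless the user explicitly asked for the full text.
        "text": result["text"] if prefs.get("full_text") else summarize(result["text"]),
        "audio": result.get("audio") if prefs.get("audio_enabled") else None,
        "video": None,  # the derived translation carries no video to the watch
    }
```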

Second, object information translation applied to smart glasses

For the translation system, smart glasses are the wearable device most closely synchronized with the movement of the human eye; compared with other terminals, they have the great advantages of capturing the source object in real time and being completely free of two-handed operation. Therefore, the sixth embodiment of the present invention provides an object translation scheme applied to smart glasses.

The processing of object information translation on the smart glasses is divided into two steps:

Step one: Obtaining source object information

As shown in fig. 17(a), if the smart glasses are used as a display device, data transmission may be performed through the communication methods between the watch and terminal devices such as a mobile phone or PAD, so as to receive the source object, the target object, and the derived translation; this is the same as the communication methods described in step one of the object information translation scheme applied to the smart watch and is not repeated here. If the smart glasses are used as the smart terminal, the processing is performed according to steps S1 and S2, or steps S3 and S4, of the second embodiment, and an input method based on eye tracking and line-of-sight focusing is further provided: information such as a picture of the source object is captured by tracking the user's eyeballs, the dwell time of the line of sight on an object, or the position of the line-of-sight focus, and is uploaded to the server for translation; the derived translation is then generated according to the derived-translation steps in the single-object scheme.
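A minimal sketch of the dwell-based capture trigger, assuming a hypothetical gaze stream and device/server callbacks (the dwell threshold is an illustrative value, not one fixed by the invention):

```python
DWELL_SECONDS = 1.5  # assumed dwell threshold before a capture is triggered

def watch_gaze(gaze_stream, capture, upload):
    """Trigger a source-object capture when the line of sight stays on the
    same region long enough. `gaze_stream` yields (timestamp, region_id);
    `capture` and `upload` are device/server callbacks (assumed interfaces)."""
    current_region, since = None, None
    for timestamp, region_id in gaze_stream:
        if region_id != current_region:
            current_region, since = region_id, timestamp
            continue
        if timestamp - since >= DWELL_SECONDS:
            picture = capture(region_id)        # crop around the gaze focus
            upload(picture)                     # send to the server for translation
            current_region, since = None, None  # reset to avoid duplicate uploads
```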

Step two: displaying translated target object information

As shown in fig. 17(b), the target object is displayed on the glasses, and a derived translation may further be included. The display modes include: displaying the target object and the derived translation on the glasses in an augmented reality manner, for example displaying the translation result as a semi-transparent overlay. In order to reduce interference with the user's sight, the display of the translation result on the smart glasses is simpler than on a terminal such as a mobile phone screen, including but not limited to one of the following: only the most critical information is displayed, for example when a signboard is translated only its meaning is displayed and no additional derived translation is shown, and the translated text can be broadcast by voice. As shown in figs. 18(a) and 18(b), the smart glasses detect the signboard, translate it into the user's language, display it on the glasses, and broadcast it in real time.

Third, feedback mechanism of user log data

The sixth embodiment of the present invention further provides a scheme for collecting log data from a large number of users as feedback to improve object translation quality, including the following steps:

the method comprises the following steps: collecting and analyzing user logs, and further, collecting location information of source-end objects

The collected user behavior includes: the number of times the user clicks a target object to view it, the number of times the user "likes" a target object, and the like. As shown in fig. 19, after the terminal detects that the user clicks a certain target object, the click count of that target object is increased by 1; after the terminal detects that the user clicks "like", the like count of that target object is increased by 1.

The method for collecting the location of the source object is as follows: when the terminal starts object information translation, the source object and the current location of the terminal are recorded, and whether the current location is a place of origin of the source object is automatically judged according to the location features; if so, the location is added to the location information in the knowledge graph of the object.

Step two: displaying updated translation results

The model parameters are updated through the feedback mechanism, for example by giving a higher error weight to a target object with a low like rate or click rate, and retraining the alignment parameters in the alignment model. The translation result is then updated according to the retrained alignment model.
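A minimal sketch of turning the collected log counts into error weights for retraining, under assumed statistics fields (how the weights enter the alignment objective is left abstract):

```python
def feedback_weights(stats, base_weight=1.0):
    """Turn per-translation-pair view/click/like logs into error weights:
    pairs with a low like or click-through rate get a larger weight so the
    alignment model is penalised more for them when retrained.
    `stats` maps (source_id, target_id) -> {"views", "clicks", "likes"}."""
    weights = {}
    for pair, s in stats.items():
        views = max(s.get("views", 0), 1)
        click_rate = s.get("clicks", 0) / views
        like_rate = s.get("likes", 0) / max(s.get("clicks", 0), 1)
        # Lower approval -> higher error weight, bounded to a reasonable range.
        weights[pair] = base_weight * (2.0 - min(click_rate + like_rate, 1.0))
    return weights

# The weights would then be fed into whatever objective retrains the
# alignment parameters, e.g. a weighted likelihood over aligned object pairs.
```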

Fourth, crowdsourcing translation feedback mechanism

The sixth embodiment of the present invention further provides a scheme for optimizing a translation result through crowdsourcing translation, and a processing flow includes the following steps.

The method comprises the following steps: distributing translation tasks and collecting translation results

When the terminal sends an object translation request, in addition to executing the translation process described in the second embodiment, the translation request is pushed to other users, and the users who respond to the translation request are invited to answer, as shown in fig. 20; the users who answer the translation request are called answering users. The ways in which an answering user may respond include: selecting a suitable answer from the translation results provided by the invention, or, if none of the provided candidates is satisfactory, submitting an answer by inputting a picture, a text description, audio, or other information about the object. As shown in fig. 21, after the answering user inputs the object information in multiple modalities, the system performs character recognition, text normalization, and speech recognition, and then searches the knowledge graph for a matching object.

Step two: establishing a user translation model, and updating a translation result according to the user translation model

The method for establishing the user translation model is specifically described as follows: after the answers of the answering users are collected, the answering users are classified according to their personal data. The answers of each class of users are analyzed, the answer frequencies are counted and sorted, a corresponding weight is given to each answer, and a user translation model from source object to target object carrying crowdsourcing translation feedback weights is obtained. As shown in the example of fig. 20, the answering users are divided into three classes according to the users' characteristics and answers, and the answer situation of each class of users is analyzed and ranked for calculating the translation result for the next similar user.

The specific method for analyzing the user answers is as follows: if the translation result provided by a user is detected to match a certain object in the existing knowledge base, the joint occurrence count of the source object and that target object is increased by 1; if the translation result provided by the user does not match any object in the existing knowledge base, a new object is created and the joint occurrence count of the source object and the target object is initialized to 1.

When the translation system receives an object translation request again, the requesting user is analyzed and classified, the machine translation candidates and the translation candidates with crowdsourcing translation feedback weights are considered simultaneously in the translation result, a comprehensive score is calculated, and the final translation result is given.
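A minimal sketch of this aggregation, assuming hypothetical user-class labels and an interpolation factor (the real scoring function and class definitions are not fixed by the description above):

```python
from collections import Counter, defaultdict

def build_user_translation_model(answers):
    """`answers` is a list of (user_class, target_object) pairs collected from
    answering users. Per user class, count answer frequencies and normalise
    them into crowdsourcing feedback weights."""
    per_class = defaultdict(Counter)
    for user_class, target in answers:
        per_class[user_class][target] += 1
    model = {}
    for user_class, counts in per_class.items():
        total = sum(counts.values())
        model[user_class] = {t: c / total for t, c in counts.items()}
    return model

def final_score(mt_scores, crowd_model, user_class, alpha=0.7):
    """Combine machine-translation scores with the crowdsourcing weights of
    the requesting user's class; `alpha` is an assumed interpolation factor."""
    crowd = crowd_model.get(user_class, {})
    candidates = set(mt_scores) | set(crowd)
    return {t: alpha * mt_scores.get(t, 0.0) + (1 - alpha) * crowd.get(t, 0.0)
            for t in candidates}

# Example: two hypothetical user classes answering a "dirty duck" request.
model = build_user_translation_model(
    [("cn_tourist", "roast duck"), ("cn_tourist", "roast duck"),
     ("cn_tourist", "fried duck"), ("expat", "fried duck")])
print(max(final_score({"roast duck": 0.6, "fried duck": 0.5},
                      model, "cn_tourist").items(), key=lambda kv: kv[1]))
```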

The sixth embodiment of the invention provides a scheme for using object-to-object machine translation on a wearable device. Compared with a mobile device, it changes the way data are processed and presented, thereby adapting to the characteristics of the wearable device. In order to better adapt to the characteristics of multiple users and better utilize the user data they generate, the invention further provides a multi-user-based object translation method and establishes a feedback mechanism based on large-scale user data to improve translation quality.

EXAMPLE seven

Machine translation quality depends heavily on how normalized the input text is and on the scale of the database and the model, so existing commercial machine translation technology generally requires network and cloud-service support as well as text normalization. However, many extreme conditions, such as a limited network, insufficient device power, or incomplete input text/voice/image information, are not considered by current machine translation technology, which results in a poor user experience.

Therefore, the seventh embodiment of the present invention provides an optimization scheme for the object information translation method under extreme conditions, including a scheme for handling incomplete input object information, a scheme for saving data traffic and battery, and an object information translation output scheme for the device under different conditions.

First, normalizing incomplete input

In real scenarios, the following cases of incomplete input information are possible: the package of the article is incomplete, the text or image on the package is blurred, or the uploaded information, such as the information in the source picture, is too sparse, as shown in fig. 22. In order to ensure the accuracy of the object translation result, the invention provides a scheme that allows multi-modal information input and information completion, comprising the following steps:

the method comprises the following steps: multimodal collection of input information

The terminal device collects at least one of the following information: picture information, text information, audio information, etc., which may be acquired directly by the device or input by the user; audio may be converted into text before being input into the object recognition system.

Step two: analyzing and identifying objects

This step is the same as the analysis and recognition process described in step S2 of the second embodiment; however, if the input source object information is incomplete and the system cannot recognize the source object, step three is used.

Step three: initiating searches for object related information

If the source object information collected by the terminal device is not sufficient to identify the object, a web search engine is started, or a database pre-stored on the local machine is used, to search and match the pictures and text and obtain more pictures and text related to the object. The obtained picture and text information is then filtered and normalized, and step two is executed again; if the object still cannot be identified, steps two and three are repeated until the system identifies the object.

Specifically, if the score of the existing source object information in the object identification classifier is lower than a set threshold, the source object information collected by the terminal device is said to be insufficient to identify the object. A web search engine is then started to search and match pictures and text and obtain more pictures and text related to the object; the obtained picture and text information is filtered and normalized, and step two is executed again. If the object still cannot be identified, steps two and three are repeated until the system identifies it or the number of repetitions exceeds an iteration threshold. The set threshold and the iteration threshold here are empirical values. As shown in fig. 22, for the source picture, the translation system extracts the corresponding identification features from the image and classifies and identifies them in the object identification system; if the identification score is lower than the set threshold, a search engine is started to search the network for the features of the image, the top results up to a set number are returned to the translation system, the features are re-extracted, and the above process is repeated until the identification score is higher than the set threshold or the number of repeated searches exceeds the iteration threshold. Finally, the object with the highest score in the last round of recognition is taken as the recognition result.
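A minimal sketch of this search-and-retry loop, assuming hypothetical classifier and search-engine interfaces; the threshold, iteration limit, and result count below are illustrative placeholders for the empirical values mentioned above:

```python
SCORE_THRESHOLD = 0.8   # empirical recognition threshold (assumed value)
MAX_ITERATIONS = 3      # empirical iteration limit (assumed value)
TOP_RESULTS = 5         # how many search results to pull back per round

def filter_noise(items):
    """Placeholder: drop duplicates and low-quality search hits."""
    return list(dict.fromkeys(items))

def normalize(items):
    """Placeholder: resize pictures, regularise text, etc."""
    return items

def recognize_with_search(source_info, classifier, search_engine):
    """Repeat classification -> web search -> filtering until the classifier
    score clears the threshold or the iteration limit is hit. `classifier`
    returns (label, score); `search_engine` returns extra pictures/text for a
    query (both are assumed interfaces); `source_info` is a list of items."""
    best_label, best_score = classifier(source_info)
    for _ in range(MAX_ITERATIONS):
        if best_score >= SCORE_THRESHOLD:
            break
        extra = search_engine(source_info, limit=TOP_RESULTS)
        source_info = normalize(filter_noise(source_info + extra))
        label, score = classifier(source_info)
        if score > best_score:
            best_label, best_score = label, score
    return best_label  # highest-scoring object of the final round
```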

Second, translation of dishes

When dining, a user may encounter situations in which foreign dishes and/or menus need to be translated. In such cases, if only the translation of the dish name is given, the user's needs cannot be fully met; the corresponding domestic dish should further be given, so that information such as the ingredients, taste, and way of eating the dish can be provided intuitively.

specifically, the dish (meal) translation discussed in the seventh embodiment relates to unpackaged dishes and/or menus, and because information such as ingredients, allergens, calories, applicable people and the like cannot be directly obtained, the dish translation generally belongs to the situation that input of source-end information is incomplete, the source-end dishes are firstly matched and identified in a knowledge graph according to the steps in a normative incomplete input scheme, then the translated target-end dishes are displayed based on the characteristics such as ingredient similarity, taste/smell/temperature and the like of the dishes, and further sorting can be performed based on the matching degree.

In addition, for dishes whose name, ingredients, allergens, calories, applicable groups, and other information the terminal can obtain directly, for example packaged dishes (belonging to packaged foods) and dishes with detailed descriptions, the translation method follows the schemes provided in the second to sixth embodiments. The complete translation process is illustrated here by the example of translating a locally famous Indonesian dish, the "dirty duck" meal.

For an existing dish such as a cooked dish, the source end acquires a picture of the object and uploads it to the server; through steps two and three, the source object is detected to be the Indonesian "dirty duck". Picture features, personal information of the user, and the like are acquired through step S1 of the second embodiment, and the dish is translated into dishes familiar to the user. As shown in fig. 23(a), the system obtains that the user's target language is Chinese; the translation system of step S2 of the second embodiment extracts the picture features and text features of the dirty duck (such as ingredient: whole duck; cooking method: roasting, frying, etc.), and under the translation model, candidate translations of several Chinese dishes, such as roast duck and fried duck (whose picture and text features are similar to those of the source), are obtained.

Further, the terminal can also display derived translation information, and the related key information may be at least one of the following: taste, smell, temperature, color, ingredients, allergens, cooking method, and the like, wherein taste includes but is not limited to at least one of: sweet, salty, spicy, sour, bitter, etc.; temperature includes but is not limited to at least one of: iced, cold, warm, hot; and allergens include but are not limited to at least one of: peanut, soybean, seafood, etc.

Further, according to different individual requirements, the system gives personalized translation results and derived information through the steps of the personalized object information translation method. For example, if the system detects that the user is allergic to a certain ingredient or does not like a certain taste, sensitive content such as the allergen in the derived translation is highlighted according to the personalized object information translation scheme. For example, if it is detected that the user is allergic to peanuts, the peanuts in the ingredients of the derived information are highlighted to remind the user of the allergy; if it is detected that the user does not eat pork, the pork in the ingredients of the derived information is highlighted; and if it is detected that the user does not eat spicy food, the content related to spiciness in the taste column of the derived information is highlighted. The method by which the system obtains sensitive content such as whether the user is allergic or avoids spicy food is described in the personalized object information translation scheme.

Further, key information contained in the dirty duck and the roast duck, such as a comparison of taste, smell, temperature, color, ingredients, allergens, way of eating, cooking method, and the like, may be provided based on user operation, as shown in figs. 23(b) and 23(c). The figures compare the differences in the derived information of the dirty duck and the roast duck, such as taste, ingredients, color, serving temperature, and recommended way of eating. For example, in terms of taste, the dirty duck is slightly spicy and its meat is very dry and not greasy, while the roast duck has fat and slightly oily meat; in terms of ingredients, the dirty duck uses Indonesian spices while the roast duck uses ingredients common in China; and the color, serving temperature, and way of eating also differ. The comparison display helps the user obtain useful information at a glance and provides a convenient and fast way to compare.

Further, if the source input is plain text, for example as shown in fig. 24(a), the input consists of several incomplete object descriptions. First, character recognition is performed on each dish name; then the dish names are matched to the determined objects in the knowledge graph through steps two and three, and the pictures and derived information are output in the terminal's source capture area in the augmented reality manner described in the first embodiment, as shown in figs. 24(a) and 24(b). Meanwhile, the target objects and derived information corresponding to the dish names are generated through the derived-translation schemes of the first to sixth embodiments and output to the target display area of the terminal. Further, the scheme of automatically comparing source and target objects for multiple objects in the fourth embodiment may also provide a comparison of the derived information of the source and target dishes, and the augmented reality interaction with translation results described in the third embodiment may be provided, including but not limited to: after a certain dish name in the source capture area is selected, the target display area displays only the corresponding translation result; or after a certain dish in the target display area is selected, the corresponding dish name is automatically highlighted in the source capture area, and so on. As shown in fig. 24(a), the source inputs multiple dish names, and the source capture area displays the translation result of the source dish using AR as described in the first embodiment. Meanwhile, in the target display area, a translation result is given according to step S4 of the multi-object translation scheme in the second embodiment and the multi-object derived translation scheme in the fourth embodiment. If it is detected that the user selects a certain translation result, as shown in fig. 24(b), for example that the user selects "dirty duck" (where selection includes but is not limited to long-pressing or clicking a translation in the source dish-name translation area, or long-pressing or clicking a translation result in the target display area), the detailed information of the corresponding source dish, including pictures and derived information, is automatically displayed in the source capture area in the AR manner, see the detailed information of the "dirty duck" given in fig. 24(c); further, the information of the corresponding target dish is output in the target display area according to the aforementioned method, see fig. 24(d).

Further, for the case where no matching dish can be found in the target language country, the system gives the closest translation result after considering all features, and further gives the similarities and differences. For example, when translating "Sichuan hot pot" into Japanese cuisine, because Japanese cuisine has no exactly corresponding dish, the system combines features such as color, cooking method, and way of eating and gives sukiyaki, beef hot pot, and the like as the closest translation results; further, the similarities and differences between the translation results and the source dish are given in the comparison mode, where the similarities include: the cooking method, i.e. the ingredients are placed in a pot with broth and cooked; and the differences include: taste, ingredients, and so on.

Third, method for saving data traffic and battery

Further, the seventh embodiment of the present invention provides a scheme for saving data traffic and battery, to handle scenarios in which the terminal has no wifi or data traffic is limited and the user wishes to save data traffic and battery; for example, wifi coverage may be incomplete at an overseas travel destination and international data traffic is expensive. The process includes the following steps, which are performed before the basic steps of the seventh embodiment of the present invention.

The method comprises the following steps: predicting user intent, pre-downloading in advance

The terminal predicts the user's intent by detecting and analyzing at least one of the user's schedule, hobbies, environment, motion state, and other information.

Specifically, for example, the terminal obtains the user's schedule information by analyzing the information recorded in the user's calendar and/or memo software; obtains information related to the user's interests and hobbies by analyzing the information recorded in the user's mailbox, short messages, and/or call records; obtains the user's environmental information through a camera, a microphone, a sound sensor, a light sensor, and/or a humidity sensor; and obtains information such as the user's motion state through a speed sensor and the like. Possible action tracks and possible translation targets of the user are then predicted from the user's schedule, hobbies, environment, motion state, and other information. The prediction model may be learned by supervised methods.

If the terminal detects that it is currently under a wifi connection, it prompts the user to download the offline translation model and translation database related to the destination and/or the translation target.

Further, the steps also include classification, compression and filtering steps of the model and the database:

The database is classified by translation scenario into food, medicine, cosmetics, sporting goods, signboards, and the like.

The translation model is filtered by methods such as entropy-based selection or significance testing; the knowledge graph database is filtered by redundant-information detection and pruning; the audio and video databases are compressed with codecs such as AMR/EVS and H.265; and the picture database is compressed in JPEG format.

Therefore, if the terminal detects that it is currently under a wifi connection, the user is prompted to download the offline translation model and database related to the destination and translation target, which have gone through the above refining, filtering, and compression steps.

As shown in fig. 25, all databases are classified, filtered, and compressed by category and an index is established; when the terminal detects user information, the server classifies the user's intent and prompts the user to download the corresponding database under wifi conditions.
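A minimal sketch of this preparation and pre-download step, assuming a hypothetical entry layout and generic compression; the real filtering (entropy/significance tests, graph pruning) and codecs are abstracted into placeholders:

```python
import gzip
import json
import pickle

CATEGORIES = ("food", "medicine", "cosmetics", "sporting_goods", "signboard")

def deduplicate(records):
    """Placeholder for redundant-information detection and pruning."""
    seen, kept = set(), []
    for r in records:
        key = json.dumps(r, sort_keys=True)
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

def prepare_offline_packages(entries):
    """Split knowledge-graph entries by translation scenario, drop redundant
    records, compress each package, and build an index.
    `entries` is a list of dicts with a 'category' field (assumed layout)."""
    packages, index = {}, {}
    for category in CATEGORIES:
        subset = deduplicate([e for e in entries if e.get("category") == category])
        blob = gzip.compress(pickle.dumps(subset))
        packages[category] = blob
        index[category] = {"items": len(subset), "bytes": len(blob)}
    return packages, index

def packages_to_prompt(predicted_intents, index):
    """Given the predicted user intents (e.g. ['food', 'medicine']), return
    the offline packages worth prompting for download over wifi."""
    return {c: index[c] for c in predicted_intents if c in index}
```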

Step two: the database is used offline, and further, the simplified database can be used offline.

When the terminal device detects that the network currently in use is cellular data, searching and translating are preferentially performed with the downloaded model and database, which reduces communication between the terminal and the cloud service; at the same time, because the scale of the database is greatly reduced, the search space in the translation process shrinks and a large amount of computation is avoided, thereby saving power.

Fourth, output strategies under different device states

When the device is in different network environments and different power conditions, the seventh embodiment of the present invention proposes an optimal adaptive strategy, covering the platform on which the system runs, the scale of the model used, and the form of the multi-modal output, for example as shown in table 1 below.

If the terminal device has a wifi connection and sufficient power, the translation system runs on the remote server, the full database is used, and the translation result may be presented as text, image, audio, and/or video; if the terminal device has no wifi connection but sufficient power, the translation system runs on the terminal device, the filtered database is used, and the translation result can only be presented as text, image, and/or audio; if the terminal device has a wifi connection but insufficient power, the translation system runs on the remote server, the full database is used, and only text and/or images are output in the translation result; if the terminal device has no wifi and low power, the translation system runs on the terminal device, and only text and/or images are output using the filtered model.

TABLE 1 Optimal adaptive strategy

Device state                        | Running platform | Model scale       | Translation result output
wifi connection, sufficient power   | remote server    | full database     | text, image, audio and/or video
no wifi, sufficient power           | terminal device  | filtered database | text, image and/or audio
wifi connection, insufficient power | remote server    | full database     | text and/or image
no wifi, low power                  | terminal device  | filtered database | text and/or image
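A minimal sketch of how a device might pick the row of Table 1 at run time; the boolean flags are assumed inputs from the system's network and battery monitors:

```python
def choose_strategy(wifi: bool, battery_ok: bool):
    """Map the device state to a row of Table 1: where the system runs,
    which model/database scale is used, and which output modalities are allowed."""
    platform = "remote server" if wifi else "terminal device"
    model = "full database" if wifi else "filtered database"
    if wifi and battery_ok:
        outputs = ("text", "image", "audio", "video")
    elif not wifi and battery_ok:
        outputs = ("text", "image", "audio")
    else:  # insufficient power, with or without wifi
        outputs = ("text", "image")
    return {"platform": platform, "model": model, "outputs": outputs}

# Example: no wifi and low battery -> run locally with the filtered model,
# presenting text and images only.
print(choose_strategy(wifi=False, battery_ok=False))
```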

The seventh embodiment of the invention provides optimization schemes for various conditions, including a filtering and compression scheme for the model and the database, a prediction and pre-download scheme, an approximate matching scheme for incomplete input information, and a translation result presentation scheme under low power, thereby improving the robustness of the translation system.

Example eight

Based on the object information translation method provided by the first embodiment, an eighth embodiment of the present invention provides an object information translation apparatus, as shown in fig. 26, the apparatus including: an object information translation unit 201 and an information output unit 202.

The object information translation unit 201 is configured to translate, based on the obtained source object information, the target object information corresponding to the source object; it is also configured to identify, based on the obtained source object information, one or more source objects to be translated.

The information output unit 202 is configured to output the target object information translated by the object information translation unit 201.

In the solution of the present invention, the specific functions of each unit in the object information translation apparatus provided in the eighth embodiment may refer to the specific steps of the object information translation method provided in the first embodiment, and are not described in detail here.

Example nine

The ninth embodiment of the present invention provides a method for acquiring derived information; as shown in fig. 27, the specific process includes:

s301: based on the obtained object information, derived information associated with the object is determined.

Specifically, the corresponding object may be identified based on the acquired object information. Wherein the acquired object information may include at least one of the following modalities: text, pictures, audio, video, etc.

In practical application, the object information can be directly captured by the terminal, for example through a camera. Alternatively, transmitted object information may be acquired from a network or another device. The acquired object information may contain at least one of: multimedia information corresponding to the object, text information recognized from the multimedia information corresponding to the object, location information corresponding to the object, searched object-related information, and object-related information input by the user. For example, the acquired object information may be a picture captured in real time, and the derived information corresponding to the picture is generated in real time by the scheme provided in the ninth embodiment of the present invention.

For how to identify the corresponding object based on the obtained object information, reference may be made to embodiments one to two of the present invention, which are not described herein again.

After the object is identified, attribute information corresponding to the preset attributes of the identified object can be searched from a preset object knowledge graph database, and the found attribute information is confirmed as the derived information related to the object. The preset attributes are determined according to the object class of the object.

Alternatively, the derived information related to the object may be determined based on the position information corresponding to the object.

In practical application, the number of the identified objects is one or more. When the identified objects are a plurality of objects and the plurality of objects belong to a combined object of the same category, the derived information related to the objects can be determined by at least one of the following items:

determining derivative information associated with the combined object aiming at the combined object corresponding to the plurality of objects;

for a plurality of objects, derived information associated with each object is acquired.

S302: and outputting the determined derivative information.

Preferably, in the ninth embodiment of the present invention, the language environment corresponding to the obtained derived information may be determined according to the personal information of the user; the derived information is displayed based on the determined language context.

Preferably, the derived information needing to be highlighted can be positioned according to the personal information of the user; the located derivative information is highlighted.

Preferably, the related reminding event can be generated or changed according to the personal information of the user and/or the acquired derivative information.

As for the determination scheme of the derived information of the object in the ninth embodiment, reference may be made to the determination schemes of the derived information in the first embodiment, the fourth embodiment, and the fifth embodiment, which are not described herein again.

Based on the information obtaining method, a ninth embodiment of the present invention further provides a derivative information obtaining apparatus, as shown in fig. 28, including: a derivative information acquisition unit 401 and an information output unit 402.

The derived information acquiring unit 401 is configured to determine derived information associated with the object based on the acquired object information.

The information output unit 402 is configured to output the derivative information determined by the derivative information acquisition unit 401.

In the solution of the present invention, the specific functions of each unit in the derived information acquiring apparatus provided in the ninth embodiment may refer to the object information translating method provided in the first embodiment and the specific steps of the information acquiring method provided in the ninth embodiment, and are not described in detail herein.

The technical scheme provided by the invention can translate objects, not merely text, and thus effectively addresses the problem that text translation alone cannot cover all translation needs. By means of the pre-constructed object alignment model, foreign objects unfamiliar to the user can be translated into the corresponding familiar domestic objects; or, with the translation direction exchanged, a domestic object can be translated into the corresponding object of the target country. Therefore, compared with existing translation whose object is only text, the scheme provided by the invention can meet the user's need to translate objects, expand the range of machine translation objects, and enhance the applicability of translation.

Further, the scheme of the invention introduces the concept of derived translation on the basis of object-to-object translation. Compared with existing passive translation, the scheme of the invention not only provides the direct translation information of the source object, but also analyzes the target object corresponding to the source object and provides the derived information of that target object, thereby supplementing the translation content and enhancing the user's understanding of it.

Furthermore, the scheme of the invention can adjust the translation content according to the information such as the position information, the user information, the type of the terminal equipment and the like of the translation, so as to meet different actual translation requirements of the user.

Further, the invention provides a method for using object-to-object machine translation on a wearable device. Compared with the mobile device, the method changes the data processing mode and the data presentation mode, thereby adapting to the characteristics of the wearable device.

Furthermore, in order to better adapt to the characteristics of multiple users and better utilize user data generated by multiple users, the invention provides an object translation method based on multiple users, and establishes a feedback mechanism based on a large amount of user data for improving the translation quality.

Furthermore, the invention also provides an optimization scheme aiming at various conditions, which comprises a filtering and compressing scheme of a model and a database, a pre-judging and pre-downloading scheme, an approximate matching scheme of incomplete input information and a translation result presentation scheme under low power so as to improve the robustness of the translation system.

Those skilled in the art will appreciate that the present invention includes apparatus directed to performing one or more of the operations described in the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magnetic-optical disks, ROMs (Read-Only memories), RAMs (Random Access memories), EPROMs (Erasable Programmable Read-Only memories), EEPROMs (Electrically Erasable Programmable Read-Only memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).

It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. Those skilled in the art will appreciate that the computer program instructions may be implemented by a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the features specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.

Those of skill in the art will appreciate that various operations, methods, steps in the processes, acts, or solutions discussed in the present application may be alternated, modified, combined, or deleted. Further, various operations, methods, steps in the flows, which have been discussed in the present application, may be interchanged, modified, rearranged, decomposed, combined, or eliminated. Further, steps, measures, schemes in the various operations, methods, procedures disclosed in the prior art and the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted.

The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be construed as the protection scope of the present invention.
