Method for training language model and label setting method

1. A method of training a language model, comprising:

acquiring at least one standard word and a spoken word having the same meaning as the at least one standard word as sample words; and

training a language model using the sample words and sentences containing the sample words.

2. A label setting method, comprising:

recognizing a call record file by using a language model to obtain a target text;

determining at least one target word in the target text;

converting the at least one target word into at least one standard word; and

responding to a selection operation of a user for a target standard word among the at least one standard word, and setting a label for the call record file according to the target standard word;

wherein the language model is trained according to the method of claim 1.

3. The method of claim 2, wherein the determining at least one target word in the target text comprises:

determining a standard word and/or a spoken word contained in the target text as the target word according to a set of standard words and spoken words, wherein the set of standard words and spoken words comprises at least one standard word and at least one spoken word.

4. The method according to claim 3, wherein the determining of the standard words and/or spoken words contained in the target text according to the set of standard words and spoken words comprises:

performing word matching on the target text using each word in the set of standard words and spoken words, respectively, so as to determine the standard words and/or the spoken words in the target text.

5. The method of claim 3, wherein the converting the at least one target word into at least one standard word comprises:

for each target word in the at least one target word, in a case that the target word is a spoken word, converting the target word into a corresponding standard word according to a word correspondence, wherein the word correspondence represents the correspondence between standard words and spoken words having the same meaning.

6. The method of claim 2, further comprising:

displaying the at least one standard word to the user.

7. The method of claim 2, further comprising:

creating an audio and video call; and

recording the audio and video call to obtain the call record file.

8. An apparatus for training a language model, comprising:

an acquisition module configured to acquire at least one standard word and a spoken word having the same meaning as the at least one standard word as sample words; and

a training module configured to train a language model using the sample words and sentences containing the sample words.

9. A label setting device comprising:

a recognition module configured to recognize a call record file by using a language model to obtain a target text;

a determining module configured to determine at least one target word in the target text;

a conversion module configured to convert the at least one target word into at least one standard word; and

a setting module configured to, in response to a selection operation of a user for a target standard word among the at least one standard word, set a label for the call record file according to the target standard word;

wherein the language model is trained according to the method of claim 1.

10. The apparatus of claim 9, wherein the means for determining comprises:

a determining sub-module configured to determine a standard word and/or a spoken word contained in the target text as the target word according to a set of standard words and spoken words, wherein the set of standard words and spoken words comprises at least one standard word and at least one spoken word.

11. The apparatus of claim 10, wherein the determination submodule is specifically configured to:

perform word matching on the target text using each word in the set of standard words and spoken words, respectively, so as to determine the standard words and/or the spoken words in the target text.

12. The apparatus of claim 10, wherein the conversion module comprises:

a conversion sub-module configured to, for each target word in the at least one target word, convert the target word into a corresponding standard word according to a word correspondence in a case that the target word is a spoken word, wherein the word correspondence represents the correspondence between standard words and spoken words having the same meaning.

13. The apparatus of claim 9, further comprising:

a display module configured to display the at least one standard word to the user.

14. The apparatus of claim 9, further comprising:

a creating module configured to create an audio and video call; and

a recording module configured to record the audio and video call to obtain the call record file.

15. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.

16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.

17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

Background

A label is a keyword strongly related to a piece of content. It can help a human or a computer describe and classify the content so as to facilitate retrieval.

Disclosure of Invention

The present disclosure provides a method of training a language model, a label setting method, an apparatus, a device, a storage medium, and a program product.

According to an aspect of the present disclosure, there is provided a method of training a language model, including: acquiring at least one standard word and a spoken word having the same meaning as the at least one standard word as sample words; and training a language model using the sample words and the sentences containing the sample words.

According to another aspect of the present disclosure, there is provided a label setting method, including: recognizing a call record file by using a language model to obtain a target text; determining at least one target word in the target text; converting the at least one target word into at least one standard word; and in response to a selection operation of a user for a target standard word among the at least one standard word, setting a label for the call record file according to the target standard word; wherein the language model is trained according to the method of the embodiments of the present disclosure.

According to another aspect of the present disclosure, there is provided an apparatus for training a language model, including: an acquisition module configured to acquire at least one standard word and a spoken word having the same meaning as the at least one standard word as sample words; and a training module configured to train a language model using the sample words and sentences containing the sample words.

According to another aspect of the present disclosure, there is provided a label setting apparatus, including: a recognition module configured to recognize a call record file by using a language model to obtain a target text; a determining module configured to determine at least one target word in the target text; a conversion module configured to convert the at least one target word into at least one standard word; and a setting module configured to, in response to a selection operation of a user for a target standard word among the at least one standard word, set a label for the call record file according to the target standard word; wherein the language model is trained according to the method of the embodiments of the present disclosure.

Another aspect of the present disclosure provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the embodiments of the present disclosure.

According to another aspect of the disclosed embodiments, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method shown in the disclosed embodiments.

According to another aspect of the embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method shown in the embodiments of the present disclosure.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram of an application scenario of a method for training a language model, a label setting method and an apparatus according to an embodiment of the present disclosure;

FIG. 2 schematically illustrates a flow diagram of a method of training a language model according to an embodiment of the present disclosure;

FIG. 3 schematically illustrates a flow chart of a label setting method according to an embodiment of the present disclosure;

FIG. 4 schematically illustrates a flow chart of a label setting method according to another embodiment of the present disclosure;

FIG. 5 schematically illustrates a flow chart of a label setting method according to another embodiment of the present disclosure;

FIG. 6 schematically illustrates a block diagram of an apparatus for training a language model according to an embodiment of the present disclosure;

FIG. 7 schematically illustrates a block diagram of a label setting apparatus according to an embodiment of the present disclosure; and

FIG. 8 shows a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

An application scenario of the method for training a language model, the label setting method and the apparatuses provided by the present disclosure will be described below with reference to fig. 1.

Fig. 1 is a schematic view of an application scenario of a method for training a language model, a label setting method and an apparatus according to an embodiment of the present disclosure.

As shown in fig. 1, the application scenario 100 includes an assistance-seeking party 110 and an assistance-providing party 120. Taking an industrial scenario as an example, the assistance-seeking party 110 may be a worker and the assistance-providing party 120 may be a technical expert.

When the assistance-seeking party 110 encounters a problem it cannot solve on the work site, it may establish an audio/video call with the assistance-providing party 120 through a program product such as an AR (Augmented Reality) remote assistance tool, and communicate in the form of an audio/video call with the assistance-providing party 120, which is not on the work site. The AR remote assistance tool is based on AR and RTC (Real-Time Communication) technologies and can be applied as a tool program product in after-sales service scenarios. During the audio/video call, the assistance-providing party 120 may provide the assistance-seeking party 110 with auxiliary information through an AR tagging function provided by the AR remote assistance tool, so as to assist the assistance-seeking party 110 in solving the problem.

In this embodiment, the content of the audio/video call may be recorded to obtain a call record file 130 that records the call process. The call record file 130 has value as a case because it contains information such as the problem description and how the problem was solved. Based on this, the call record file 130 may be stored in a background knowledge base 140. When storing the call record file 130, a label needs to be set for the call record file 130 for easy indexing.

For example, in this embodiment, the knowledge base 140 may be deployed on a background server. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of traditional physical hosts and VPS (Virtual Private Server) services, namely high management difficulty and weak service scalability. The server may also be a server of a distributed system, or a server incorporating a blockchain.

The method of training the language model will be described in detail below with reference to fig. 2.

It should be noted that the language model in this embodiment is not a language model for a specific user, and cannot reflect personal information of a specific user.

FIG. 2 schematically shows a flow diagram of a method of training a language model according to an embodiment of the present disclosure.

As shown in FIG. 2, the method 200 of training a language model may include operations S210-S220.

In operation S210, at least one standard word and a spoken word having the same meaning as the at least one standard word are acquired as sample words.

In operation S220, a language model is trained using the sample words and the sentences containing the sample words.

According to an embodiment of the present disclosure, the language model may be used to identify semantic information contained in an audio/video file. For example, in this embodiment, a sentence containing a sample word may be input into the language model to obtain semantic information corresponding to the sentence; the recognition effect is then determined according to whether the semantic information includes the sample word, the parameters of the language model are adjusted according to the recognition effect, and training continues until the recognition effect of the language model reaches an expected target. A standard word may be a word that meets the language specification. A spoken word may be a colloquial expression that conveys the same meaning as a standard word. Illustratively, in the present embodiment, each standard word may correspond to zero, one, or more spoken words.
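
As a minimal sketch of the training loop described above (the LanguageModel interface with recognize() and adjust_parameters() methods, the accuracy criterion and the round limit are illustrative assumptions, not part of the disclosure), the procedure can be written as:

    def train_language_model(model, sample_words, sentences,
                             target_accuracy=0.95, max_rounds=100):
        # Train until the model reliably recovers the sample words from the
        # sentences containing them, or until max_rounds is exhausted.
        for _ in range(max_rounds):
            hits = 0
            for word, sentence in zip(sample_words, sentences):
                semantics = model.recognize(sentence)        # semantic information of the sentence
                if word in semantics:                        # recognition effect: sample word recovered?
                    hits += 1
            if hits / len(sample_words) >= target_accuracy:  # expected target reached
                break
            model.adjust_parameters(sample_words, sentences)  # adjust parameters, keep training
        return model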

The standard words, the sentences containing the standard words, the spoken words and the sentences containing the spoken words in this embodiment may come from a public data set, or may be obtained with the authorization of the corresponding users.

For example, in an industrial equipment maintenance scenario, the standard words may include, for example, an equipment name, a part name corresponding to the equipment name, and a fault type corresponding to the part name.

In a practical application scenario, the words that the language model needs to recognize are usually related to a specific domain. For example, in an industrial equipment maintenance scenario, the words to be identified may include words highly related to industry or enterprise, such as an equipment name, a part name corresponding to the equipment name, and a fault type corresponding to the part name. Conventional language models fail to efficiently recognize words in these particular areas.

In addition, in the actual communication process, the two parties often use spoken expressions rather than strictly following expressions that conform to the language specification. For example, when intending to express a certain meaning, a speaker may not use the standard word for that meaning, but may instead use a spoken word that is shorter than the standard word, relying on context information. For example, the meaning of "swing motor" may be referred to simply as "swing" or "motor", and the like.

Based on this, according to the embodiments of the present disclosure, standard words related to the field of actual application may be collected, and the standard words may be associated with one another to determine the association relationships between them. For example, the standard words may be device names, part names, failure types, etc., and it may be determined that one device is associated with a group of parts and one part is associated with a group of failure types. Further, spoken descriptions for each standard word may be collected, resulting in spoken words corresponding to the standard words.
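
Using the examples mentioned elsewhere in this description ("swing motor", "rotation motor", "motor"), the collected vocabulary and its association relationships might be organized as follows; the concrete layout and the fault-type entries are illustrative assumptions only:

    # Association relationships between standard words: a device is associated
    # with a group of parts, and a part with a group of fault types
    # (entries are hypothetical examples).
    STANDARD_WORD_ASSOCIATIONS = {
        "rotation motor": {              # equipment name
            "bearing": ["overheating"],  # part name -> fault types
        },
    }

    # Spoken descriptions collected for each standard word; a standard word
    # may correspond to zero, one, or more spoken words.
    SPOKEN_WORDS_BY_STANDARD_WORD = {
        "swing motor": ["swing", "motor"],
        "rotation motor": ["motor"],
    }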

Then, the language model is trained using the collected standard words and spoken words, so that the accuracy of recognizing spoken words can be emphasized when the language model is used for recognition.

It should be noted that the language model obtained in this step involves semantic information contained in audio/video files, but the language model is constructed only after being authorized by the user, and the construction process complies with relevant laws and regulations.

The label setting method will be described in detail below with reference to fig. 3.

Fig. 3 schematically shows a flow chart of a label setting method according to an embodiment of the present disclosure.

As shown in fig. 3, the label setting method 300 may include operations S310 to S340.

In operation S310, the call record file is recognized using the language model to obtain a target text.

According to an embodiment of the present disclosure, the language model may be trained, for example, by the method of training a language model shown above. The call record file may be used to record the course of an audio/video call, which may be an audio call or a video call. The target text may be used to represent semantic information contained in the call record file.

In this embodiment, the executing entity of the label setting method may obtain the call record file in various public and legally compliant manners; for example, the call record file may be obtained from a public data set or from the user after the user's authorization.
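
Operation S310 can be sketched as follows; the speech_recognizer object and its recognize_speech call are hypothetical stand-ins for whatever recognition front end carries the trained language model, which the disclosure does not specify:

    def recognize_call_record(speech_recognizer, call_record_path):
        # Read the recorded call and return the recognized target text.
        with open(call_record_path, "rb") as record_file:
            audio_bytes = record_file.read()
        return speech_recognizer.recognize_speech(audio_bytes)  # hypothetical ASR interface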

Then, in operation S320, at least one target word in the target text is determined.

According to the embodiment of the present disclosure, standard words related to the field of practical application, as well as spoken words having the same meaning as those standard words, may be collected in advance, and the collected standard words and corresponding spoken words may be used as a set of standard words and spoken words. It may be understood that the set of standard words and spoken words includes at least one standard word and at least one spoken word. Then, the target words can be determined according to the set of standard words and spoken words, wherein the target words are the standard words and/or the spoken words contained in the target text.

According to the embodiment of the disclosure, word matching can be performed on the target text by using each word in the set of standard words and spoken words, respectively, so as to determine the standard words and/or spoken words in the target text.

For example, if the target text is "motor failure" and the set of standard words and spoken words includes the standard word "rotation motor" and the corresponding spoken word "motor", then when word matching is performed on the target text, the word "motor" in the target text may be matched as the target word.
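
A minimal word-matching sketch consistent with the example above, assuming plain substring matching (the disclosure does not prescribe a particular matching strategy):

    def match_target_words(target_text, standard_and_spoken_words):
        # Return every word in the set that occurs in the target text.
        return [word for word in standard_and_spoken_words if word in target_text]

    # Example from the description: only "motor" occurs in "motor failure".
    print(match_target_words("motor failure", ["rotation motor", "motor"]))  # ['motor']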

In operation S330, the at least one target word is converted into at least one standard word.

According to the embodiment of the disclosure, for each target word in the at least one target word, in the case that the target word is a spoken word, the target word is converted into a corresponding standard word according to the word correspondence, where the word correspondence is used to indicate the correspondence between standard words and spoken words having the same meaning. It is to be understood that, in the case where the target word is already a standard word, the above conversion processing is not performed.
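
The conversion of operation S330 can be sketched as follows, assuming the word correspondence is held as a simple spoken-word-to-standard-word mapping with illustrative entries:

    # Spoken word -> standard word with the same meaning (illustrative entries).
    WORD_CORRESPONDENCE = {
        "motor": "rotation motor",
        "swing": "swing motor",
    }

    def to_standard_words(target_words, correspondence=WORD_CORRESPONDENCE):
        # A spoken word is replaced by its standard word; a word that is
        # already a standard word passes through unchanged.
        return [correspondence.get(word, word) for word in target_words]

    print(to_standard_words(["motor"]))  # ['rotation motor']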

In operation S340, in response to a selection operation of the user for a target standard word among the at least one standard word, a label is set for the call record file according to the target standard word.

According to an embodiment of the present disclosure, a user may be, for example, a call participant to which a call log file corresponds. According to other embodiments of the present disclosure, the user may be other than the call participant, which is not specifically limited by the present disclosure.

According to an embodiment of the present disclosure, for example, the target standard word selected by the user may be set as the label of the call record file. Alternatively, a corresponding label may be configured in advance for each standard word, and the label corresponding to the target standard word may be set for the call record file.
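
Operation S340 can be sketched as below; representing the call record file as a plain dictionary and supporting an optional pre-configured label map are illustrative choices, and the file reference is a placeholder:

    def set_label(call_record, selected_standard_word, preconfigured_labels=None):
        # Use the label pre-configured for the selected standard word if one
        # exists; otherwise the standard word itself becomes the label.
        label = (preconfigured_labels or {}).get(selected_standard_word,
                                                 selected_standard_word)
        call_record.setdefault("labels", []).append(label)
        return call_record

    record = {"file": "call_record_130"}          # placeholder file reference
    set_label(record, "rotation motor")
    print(record["labels"])                       # ['rotation motor']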

According to the embodiment of the disclosure, speech recognition is performed on the call record file by using the language model, the standard words corresponding to the call record file are determined from the recognized text for the user to select, and a label is then set for the call record file according to the standard word selected by the user, so that the efficiency of label setting and the quality of the labels can be improved.

According to other embodiments of the present disclosure, after the standard words corresponding to the call record file are determined, the at least one standard word may be presented to the user to facilitate the user's selection from the at least one standard word.

According to other embodiments of the present disclosure, after the label is set for the call record file, the call record file and the corresponding label may be stored for subsequent use.

Fig. 4 schematically shows a flow chart of a label setting method according to another embodiment of the present disclosure.

As shown in fig. 4, the label setting method 400 may include operations S410 to S460.

In operation S410, an audio and video call is created.

According to the embodiment of the present disclosure, the audio/video call may have two participants or may have a plurality of participants, which is not specifically limited by the present disclosure.

In operation S420, an audio/video call is recorded to obtain a call record file.

According to the embodiment of the disclosure, recording can be performed in the audio and video call process, and a call record file is generated. The call record file may include all the contents of the audio/video call, or may include only a part of the contents thereof, which is not specifically limited in this disclosure.

Then, the following operations S430 to S460 are performed with respect to the obtained call record file. It should be noted that operations S430 to S460 may be executed after the audio/video call is ended, or may be executed during the audio/video call.

In operation S430, the call record file is recognized using the language model to obtain a target text.

In operation S440, at least one target word in the target text is determined.

In operation S450, the at least one target word is converted into at least one standard word.

In operation S460, in response to a selection operation of the user for a target standard word among the at least one standard word, a label is set for the call record file according to the target standard word.

According to an embodiment of the present disclosure, operations S430 to S460 may be performed with reference to the description above, and are not repeated here.

According to the embodiment of the disclosure, standard words are automatically generated for the audio/video call so that the user can select one with a quick click, thereby setting a label for the call record file of the audio/video call and improving the efficiency of label setting and the quality of the labels.

The label setting method is further described with reference to fig. 5 in conjunction with specific embodiments. Those skilled in the art will appreciate that the following example embodiments are only for the understanding of the present disclosure, and the present disclosure is not limited thereto.

Fig. 5 schematically shows a flow chart of a label setting method according to another embodiment of the present disclosure.

As shown in fig. 5, the label setting method 500 includes, in operation S510, initiating, by a first terminal corresponding to the assistance-seeking party, an audio/video call request to a second terminal corresponding to the assistance-providing party, so as to seek AR remote assistance from the assistance-providing party.

In operation S520, the first terminal corresponding to the assistance-seeking party and the second terminal corresponding to the assistance-providing party establish an audio/video call, so that the assistance-providing party can provide AR remote assistance to the assistance-seeking party.

In operation S530, an audio/video call process between the first terminal and the second terminal is recorded, so as to obtain a call record file.

In operation S540, the call record file is input into the language model to obtain a target text.

In operation S550, the standard words and/or spoken words contained in the target text are determined by matching the target text against the words in the set of standard words and spoken words.

Illustratively, in the present embodiment, the target text includes both the standard word and the spoken word.

In operation S560, each spoken word in the target text is converted into the corresponding standard word. Standard words in the target text are not subjected to conversion processing.

In operation S570, the converted standard words are presented to the assistance-providing party through the second terminal.

In operation S580, the assistance-providing party may select one of the standard words presented on the second terminal. In response to the selection operation of the assistance-providing party, the standard word selected by the assistance-providing party is set as the label of the call record file.

Then, in operation S590, the call record file and the corresponding label are stored in the knowledge base.
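
As a toy illustration of operation S590 (an in-memory stand-in; the disclosure does not specify the knowledge base backend), the call record file can be indexed by its labels for later retrieval:

    from collections import defaultdict

    class KnowledgeBase:
        # Minimal in-memory knowledge base: call record files indexed by label.
        def __init__(self):
            self._records_by_label = defaultdict(list)

        def store(self, call_record_file, labels):
            for label in labels:
                self._records_by_label[label].append(call_record_file)

        def lookup(self, label):
            # Retrieve every call record file carrying the given label.
            return list(self._records_by_label[label])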

According to the label setting method of the embodiments of the present disclosure, the intelligence level of AR remote assistance can be improved, and the value and utilization efficiency of the call record file can be improved.

FIG. 6 schematically shows a block diagram of an apparatus for training a language model according to an embodiment of the present disclosure.

As shown in fig. 6, the apparatus 600 for training a language model may include an obtaining module 610 and a training module 620.

The obtaining module 610 may be configured to obtain at least one standard word and a spoken word having the same meaning as the at least one standard word as a sample word.

The training module 620 may be configured to train the language model using the sample words and the sentences containing the sample words.

Fig. 7 schematically illustrates a block diagram of a label setting apparatus according to an embodiment of the present disclosure.

As shown in fig. 7, the label setting apparatus 700 may include a recognition module 710, a determining module 720, a conversion module 730, and a setting module 740.

The recognition module 710 may be configured to recognize the call record file by using a language model to obtain a target text.

The determining module 720 may be configured to determine at least one target word in the target text.

The conversion module 730 may be configured to convert the at least one target word into at least one standard word.

The setting module 740 may be configured to, in response to a selection operation of a user for a target standard word among the at least one standard word, set a label for the call record file according to the target standard word.

Wherein the language model may be trained, for example, according to the method shown in the embodiments of the present disclosure.

According to an embodiment of the present disclosure, the determining module may include a determining sub-module, configured to determine, as the target word, a standard word and/or a spoken word included in the target text according to a set of standard words and spoken words, where the set of standard words and spoken words includes at least one standard word and at least one spoken word.

According to an embodiment of the present disclosure, the determining sub-module may be specifically configured to perform word matching on the target text by using each word in the standard word and spoken word set, respectively, to determine a standard word and/or a spoken word in the target text.

According to an embodiment of the present disclosure, the conversion module may include a conversion sub-module, which may be configured to, for each target word of the at least one target word, convert the target word into a corresponding standard word according to a word correspondence in the case that the target word is a spoken word, where the word correspondence is used to indicate the correspondence between standard words and spoken words having the same meaning.

According to an embodiment of the present disclosure, the apparatus shown above may further include a presentation module, which may be configured to present the at least one standard word to the user.

According to an embodiment of the present disclosure, the apparatus shown above may further include a creating module and a recording module. The creating module is configured to create an audio and video call, and the recording module is configured to record the audio and video call to obtain the call record file.

It should be noted that, in the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.

According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.

FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.

A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.

The computing unit 801 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 performs the respective methods and processes described above, such as the method of training a language model and/or the label setting method. For example, in some embodiments, the method of training a language model and/or the label setting method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method of training a language model and/or the label setting method described above can be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method of training the language model and/or the label setting method by any other suitable means (e.g., by means of firmware).

Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of traditional physical hosts and VPS (Virtual Private Server) services, namely high management difficulty and weak service scalability. The server may also be a server of a distributed system, or a server incorporating a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
