Drivable three-dimensional character generation method and device, electronic equipment and storage medium

Document No.: 9432, published 2021-09-17

1. A drivable three-dimensional character generation method, comprising:

acquiring a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed;

performing bone embedding on the three-dimensional human body mesh model; and

performing skin binding on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

2. The method of claim 1, further comprising:

performing down-sampling on the three-dimensional human body mesh model; and

performing bone embedding on the down-sampled three-dimensional human body mesh model.

3. The method according to claim 1 or 2, wherein performing bone embedding on the three-dimensional human body mesh model comprises:

performing bone embedding on the three-dimensional human body mesh model using a pre-constructed skeleton tree with N vertices, wherein N is a positive integer greater than one.

4. The method of claim 3, further comprising:

acquiring a motion sequence; and

generating a three-dimensional human body animation according to the motion sequence and the drivable three-dimensional human body mesh model.

5. The method of claim 4, wherein the motion sequence comprises: a Skinned Multi-Person Linear model (SMPL) motion sequence.

6. The method of claim 5, wherein generating the three-dimensional human body animation according to the motion sequence and the drivable three-dimensional human body mesh model comprises:

migrating the SMPL motion sequence to obtain a motion sequence of N key points, wherein the N key points are the N vertices of the skeleton tree; and

driving the drivable three-dimensional human body mesh model using the motion sequence of the N key points to obtain the three-dimensional human body animation.

7. A drivable three-dimensional character generation apparatus, comprising: a first processing module, a second processing module, and a third processing module;

the first processing module is configured to acquire a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed;

the second processing module is configured to perform bone embedding on the three-dimensional human body mesh model; and

the third processing module is configured to perform skin binding on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

8. The apparatus according to claim 7, wherein

the second processing module is further configured to perform downsampling on the three-dimensional human body mesh model, and perform bone embedding on the downsampled three-dimensional human body mesh model.

9. The apparatus according to claim 7 or 8, wherein

the second processing module is configured to perform bone embedding on the three-dimensional human body mesh model using a pre-constructed skeleton tree with N vertices, wherein N is a positive integer greater than one.

10. The apparatus according to claim 9, wherein

the third processing module is further configured to acquire a motion sequence and to generate a three-dimensional human body animation according to the motion sequence and the drivable three-dimensional human body mesh model.

11. The apparatus according to claim 10, wherein the motion sequence comprises: a Skinned Multi-Person Linear model (SMPL) motion sequence.

12. The apparatus according to claim 11, wherein

the third processing module is configured to migrate the SMPL motion sequence to obtain a motion sequence of N key points, wherein the N key points are the N vertices of the skeleton tree, and to drive the drivable three-dimensional human body mesh model using the motion sequence of the N key points to obtain the three-dimensional human body animation.

13. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.

14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.

15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.

Background

Currently, a drivable three-dimensional (3D) character can be generated from a single two-dimensional (2D) picture; that is, image-based 3D animation driven by a two-dimensional picture can be realized.

To obtain a drivable three-dimensional character, the following implementation is generally adopted: based on end-to-end training, for any two-dimensional picture, a pre-trained network model is used to directly generate a drivable three-dimensional human body mesh model; that is, the drivable model can be generated through a pre-trained semantic space, semantic deformation field, surface implicit function, and the like. However, training such a model is complicated and requires a large amount of training resources.

Disclosure of Invention

The disclosure provides a method and an apparatus for generating a drivable three-dimensional character, an electronic device, and a storage medium.

A drivable three-dimensional character generation method, comprising:

acquiring a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed;

performing bone embedding on the three-dimensional human body mesh model; and

performing skin binding on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

A drivable three-dimensional character generation apparatus, comprising: a first processing module, a second processing module, and a third processing module;

the first processing module is configured to acquire a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed;

the second processing module is configured to perform bone embedding on the three-dimensional human body mesh model; and

the third processing module is configured to perform skin binding on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method as described above.

A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method as described above.

A computer program product comprising a computer program which, when executed by a processor, implements a method as described above.

One embodiment of the above disclosure has the following advantage or benefit: the three-dimensional human body mesh model corresponding to the two-dimensional picture to be processed is obtained first, and bone embedding and skin binding are then performed on it in sequence to obtain a drivable three-dimensional human body mesh model, rather than generating the drivable three-dimensional human body mesh model directly with a pre-trained network model as in the existing approach, thereby reducing resource consumption.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a flowchart illustrating a first exemplary embodiment of a method for generating a drivable three-dimensional character according to the present disclosure;

FIG. 2 is a flowchart illustrating a method for generating a drivable three-dimensional character according to a second embodiment of the present disclosure;

FIG. 3 is a schematic diagram of a three-dimensional human animation according to the present disclosure;

FIG. 4 is a schematic diagram illustrating an exemplary embodiment 400 of a drivable three-dimensional character generation apparatus according to the present disclosure;

FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

In addition, it should be understood that the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.

Fig. 1 is a flowchart of a first embodiment of the drivable three-dimensional character generation method according to the present disclosure. As shown in Fig. 1, the method includes the following steps.

In step 101, a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed is obtained.

In step 102, bone embedding is performed on the three-dimensional human mesh model.

In step 103, skin binding is performed on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

In the solution of the above method embodiment, the three-dimensional human body mesh model corresponding to the two-dimensional picture to be processed is obtained first, and bone embedding and skin binding are then performed on it in sequence to obtain a drivable three-dimensional human body mesh model, rather than generating the drivable three-dimensional human body mesh model directly with a pre-trained network model as in the existing approach, thereby reducing resource consumption.

How the three-dimensional human body mesh model corresponding to the two-dimensional picture to be processed is obtained is not limited. For example, it may be obtained using an algorithm such as the Pixel-Aligned Implicit Function (PIFu) or its multi-level variant for high-resolution 3D human digitization (PIFuHD); such a model may contain approximately 200,000 vertices and 400,000 faces.
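For illustration only, the sketch below loads such a reconstruction result and checks its size; the file name is a hypothetical placeholder, and the mesh is presumed to have already been exported (for example as an .obj file) by whichever reconstruction method is used.

# Illustrative sketch: inspect a reconstructed human mesh. The file name
# "reconstructed_person.obj" is a hypothetical placeholder.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("reconstructed_person.obj")
print(f"{len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")
# A PIFuHD-style result is typically on the order of 2e5 vertices and
# 4e5 triangles, which motivates the down-sampling step described below.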

The obtained three-dimensional human body mesh model can be processed directly in the subsequent steps, such as bone embedding. Preferably, down-sampling may first be performed on the obtained three-dimensional human body mesh model, and bone embedding may then be performed on the down-sampled model.

Through the down-sampling, a three-dimensional human body mesh model with fewer vertices and faces is obtained, which reduces the time consumed by subsequent processing and improves processing efficiency.

The specific down-sampling ratio can be determined according to actual needs, such as resource requirements, and the down-sampling algorithm is not limited either. For example, the obtained three-dimensional human body mesh model may be down-sampled using an algorithm such as edge collapse, quadric error simplification, or isotropic remeshing.
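As one possible realization of this step, the sketch below applies quadric-error decimation with the Open3D library; the target triangle count is an illustrative assumption and would in practice be chosen according to the actual resource budget.

# Illustrative sketch: down-sample the reconstructed mesh with quadric
# error decimation (edge collapse and isotropic remeshing are possible
# alternatives). The target triangle count is an assumption.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("reconstructed_person.obj")  # hypothetical file
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=40_000)
simplified.remove_unreferenced_vertices()
print(f"{len(simplified.vertices)} vertices, {len(simplified.triangles)} triangles")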

Bone embedding and skin binding are then performed in sequence on the down-sampled three-dimensional human body mesh model.

Bone embedding may be performed on the three-dimensional human body mesh model using a pre-constructed skeleton tree with N vertices, where N is a positive integer greater than one whose specific value can be determined according to actual needs.

In essence, the skeleton tree is a set of xyz coordinates, and how to define a skeleton tree with N vertices is prior art. How the skeleton tree is used to embed bones into the three-dimensional human body mesh model is not limited either; for example, bone embedding may be realized with a pre-trained network model that takes the pre-constructed skeleton tree with N vertices and the three-dimensional human body mesh model as input and outputs the bone-embedded three-dimensional human body mesh model.
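Purely for illustration, a skeleton tree of this kind can be represented as a list of joints, each with a parent index and a rest-pose xyz position; the joint names and coordinates below are invented for the example and are not the specific tree used by the disclosure.

# Illustrative sketch of a skeleton tree with N vertices (joints): each
# joint stores a name, a parent index (-1 for the root), and a rest-pose
# xyz position. All concrete names and coordinates are assumptions.
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    parent: int            # index of the parent joint, -1 for the root
    rest_xyz: tuple        # (x, y, z) position in the rest pose

skeleton_tree = [
    Joint("pelvis",      -1, (0.0, 0.95, 0.0)),
    Joint("spine",        0, (0.0, 1.15, 0.0)),
    Joint("head",         1, (0.0, 1.60, 0.0)),
    Joint("left_thigh",   0, (-0.1, 0.90, 0.0)),
    Joint("right_thigh",  0, ( 0.1, 0.90, 0.0)),
    # ... further joints, N vertices in total
]
N = len(skeleton_tree)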

In this way, the bone-embedded three-dimensional human body mesh model can be obtained accurately and efficiently by means of the constructed skeleton tree, which lays a good foundation for subsequent processing.

Skin binding is then performed on the bone-embedded three-dimensional human body mesh model, that is, weights corresponding to the positions of the N bones are respectively assigned to the vertices, so as to obtain a drivable three-dimensional human body mesh model.

If the weights are assigned accurately, the skin will not tear or deform severely when the bones subsequently move, and the result looks more natural.
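The disclosure does not prescribe a particular skinning formulation; as a hedged illustration, the sketch below uses linear blend skinning, one common way per-vertex bone weights are applied when the bones move.

# Illustrative sketch: linear blend skinning (LBS), one common skinning
# formulation (the disclosure does not name a specific one). Each mesh
# vertex is deformed by a weighted blend of the bone transforms.
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """vertices: (V, 3) rest-pose positions
    weights: (V, N) skinning weights, each row summing to 1
    bone_transforms: (N, 4, 4) homogeneous transform per bone
    returns: (V, 3) deformed vertex positions
    """
    V = vertices.shape[0]
    homog = np.concatenate([vertices, np.ones((V, 1))], axis=1)   # (V, 4)
    per_bone = np.einsum("nij,vj->nvi", bone_transforms, homog)   # (N, V, 4)
    blended = np.einsum("vn,nvi->vi", weights, per_bone)          # (V, 4)
    return blended[:, :3]

With accurate weights, neighboring vertices receive similar blends of bone transforms, which is why the skin does not tear badly when the bones move.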

How skin binding is performed on the three-dimensional human body mesh model is also not limited; for example, it may be realized using a pre-trained network model.

After the series of processing, the required drivable three-dimensional human body mesh model can be obtained. Preferably, based on the obtained drivable three-dimensional human body mesh model, a three-dimensional human body animation can be further generated.

Accordingly, Fig. 2 is a flowchart of a second embodiment of the drivable three-dimensional character generation method according to the present disclosure. As shown in Fig. 2, the method includes the following steps.

In step 201, a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed is obtained.

For example, algorithms such as PIFu or PIFuHD may be adopted to generate a corresponding three-dimensional human mesh model for a two-dimensional picture to be processed.

In step 202, bone embedding is performed on the three-dimensional human mesh model.

The subsequent processing, such as bone embedding, may be performed directly on the three-dimensional human body mesh model obtained in step 201.

Alternatively, the three-dimensional human body mesh model obtained in step 201 may first be down-sampled, and bone embedding may then be performed on the down-sampled model.

Bone embedding may be performed on the three-dimensional human body mesh model using a pre-constructed skeleton tree with N vertices, where N is a positive integer greater than one.

In step 203, skin binding is performed on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

After bone embedding and skin binding are completed in sequence, a drivable three-dimensional human body mesh model is obtained, based on which a three-dimensional human body animation can be further generated.

In step 204, a motion sequence is acquired.

Preferably, the motion sequence may be a Skinned Multi-Person Linear model (SMPL) motion sequence.

How SMPL motion sequences are generated is prior art.
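For orientation only, an SMPL motion sequence is commonly stored as per-frame pose parameters (24 joints with axis-angle rotations) plus a root translation, as in the sketch below; the variable names and frame count are assumptions.

# Illustrative sketch of a typical in-memory layout for an SMPL motion
# sequence: T frames, 24 joints, 3 axis-angle components per joint,
# plus a per-frame root translation. Names and shapes are assumptions.
import numpy as np

T = 120                                    # number of frames (assumed)
poses = np.zeros((T, 24, 3))               # per-joint axis-angle rotations
trans = np.zeros((T, 3))                   # per-frame root translation
smpl_motion_sequence = {"poses": poses, "trans": trans}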

In step 205, a three-dimensional human body animation is generated according to the motion sequence and the drivable three-dimensional human body mesh model.

Specifically, the SMPL motion sequence may first be migrated to obtain a motion sequence of N key points, where the N key points are the N vertices of the skeleton tree; the motion sequence of the N key points may then be used to drive the drivable three-dimensional human body mesh model, yielding the desired three-dimensional human body animation.

A standardized SMPL motion sequence usually corresponds to 24 key points, whereas N is usually not 24 (for example, N may be 17), so the SMPL motion sequence needs to be migrated to the skeleton tree of N vertices (key points) to obtain a motion sequence of N key points.

Note that, if the value of N is 24, the migration process need not be performed.

How the motion sequence of the N key points is obtained is not limited; for example, various existing motion retargeting methods may be adopted, or a pre-trained network model may be used that takes the SMPL motion sequence as input and outputs the motion sequence of the N key points.

When this network model is trained, the loss function may be defined as the Euclidean distance between corresponding key points in three-dimensional space, where corresponding key points are matched key points. For example, if 17 (i.e. N) of the 24 key points of the SMPL motion sequence are matched with the N key points of the skeleton tree, the remaining 7 key points are unmatched; for the unmatched key points, the weight of the position error may be reduced or directly set to 0.
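A minimal sketch of such a matched-keypoint loss is given below; the mapping from skeleton-tree key points to SMPL key points is an illustrative assumption, and unmatched SMPL key points simply receive zero weight by not appearing in the mapping.

# Illustrative sketch: Euclidean-distance loss between predicted
# skeleton-tree keypoints and their matched SMPL keypoints; SMPL
# keypoints without a match contribute nothing (weight 0). The index
# mapping below is an assumption.
import numpy as np

def retarget_loss(pred_kpts, smpl_kpts, match):
    """pred_kpts: (N, 3) predicted keypoints of the skeleton tree
    smpl_kpts: (24, 3) keypoints of one SMPL motion frame
    match: length-N index list; match[i] is the SMPL keypoint paired
           with skeleton-tree keypoint i
    """
    target = smpl_kpts[np.asarray(match)]                 # (N, 3)
    return float(np.mean(np.linalg.norm(pred_kpts - target, axis=1)))

# Example with N = 17: only 17 of the 24 SMPL keypoints appear in match.
rng = np.random.default_rng(0)
loss = retarget_loss(rng.normal(size=(17, 3)),
                     rng.normal(size=(24, 3)),
                     match=list(range(17)))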

After the motion sequence of the N key points is obtained, it can be used to drive the previously obtained drivable three-dimensional human body mesh model, thereby producing the three-dimensional human body animation. Fig. 3 is a schematic diagram of a three-dimensional human body animation according to the present disclosure.
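Putting the pieces together, a per-frame driving loop could look like the sketch below, reusing the linear_blend_skinning helper from the earlier sketch; keypoints_to_transforms is a hypothetical stand-in that applies only translations, since the disclosure does not specify how key-point motions are converted into bone transforms.

# Illustrative sketch: drive the skin-bound mesh with the N-keypoint
# motion sequence, one frame at a time. keypoints_to_transforms is a
# hypothetical helper that here applies only per-bone translations; a
# real system would also solve per-bone rotations.
import numpy as np

def keypoints_to_transforms(frame_keypoints, rest_keypoints):
    N = frame_keypoints.shape[0]
    transforms = np.tile(np.eye(4), (N, 1, 1))
    transforms[:, :3, 3] = frame_keypoints - rest_keypoints
    return transforms                                      # (N, 4, 4)

def animate(vertices, weights, rest_keypoints, motion_sequence):
    """Yield one deformed copy of the mesh per motion frame."""
    for frame_keypoints in motion_sequence:                # (N, 3) each
        T = keypoints_to_transforms(frame_keypoints, rest_keypoints)
        yield linear_blend_skinning(vertices, weights, T)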

It can be seen from the above description that the drivable three-dimensional human body mesh model of the present disclosure is compatible with standardized SMPL motion sequences, and the corresponding three-dimensional human body animation can be generated accurately and efficiently from the drivable model and an SMPL motion sequence.

In summary, the method of the present disclosure constructs a pipeline that can generate a drivable three-dimensional human body mesh model and a three-dimensional human body animation for any input two-dimensional picture and SMPL motion sequence. Although some network models may be used, they are relatively simple; compared with the existing approach of generating a drivable three-dimensional human body mesh model directly with a trained network model, resource consumption is reduced. Moreover, the method is applicable to any clothed person and any motion sequence, and therefore has wide applicability.

It is noted that while for simplicity of explanation, the foregoing method embodiments are described as a series of acts, those skilled in the art will appreciate that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required for the disclosure. In addition, for parts which are not described in detail in a certain embodiment, reference may be made to relevant descriptions in other embodiments.

The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.

FIG. 4 is a schematic diagram of an embodiment 400 of the drivable three-dimensional character generation apparatus according to the present disclosure. As shown in Fig. 4, the apparatus includes: a first processing module 401, a second processing module 402, and a third processing module 403.

The first processing module 401 is configured to obtain a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed.

The second processing module 402 is configured to perform bone embedding on the acquired three-dimensional human body mesh model.

The third processing module 403 is configured to perform skin binding on the bone-embedded three-dimensional human body mesh model to obtain a drivable three-dimensional human body mesh model.

How the first processing module 401 obtains the three-dimensional human body mesh model corresponding to the two-dimensional picture to be processed is not limited. For example, a three-dimensional human body mesh model corresponding to a two-dimensional picture to be processed can be obtained by using an algorithm such as PIFu or PIFuHD.

The second processing module 402 may perform subsequent processing, such as bone embedding, directly on the obtained three-dimensional human body mesh model. Preferably, the second processing module 402 may first down-sample the obtained model and then perform bone embedding on the down-sampled three-dimensional human body mesh model.

How the down-sampling is performed is likewise not limited. For example, the obtained three-dimensional human body mesh model may be down-sampled using an algorithm such as edge collapse, quadric error simplification, or isotropic remeshing.

In addition, the second processing module 402 may perform bone embedding on the three-dimensional human body mesh model using a pre-constructed skeleton tree with N vertices, where N is a positive integer greater than one.

How the skeleton tree is used to perform bone embedding on the three-dimensional human body mesh model is not limited; for example, bone embedding may be realized with a pre-trained network model that takes the pre-constructed skeleton tree with N vertices and the three-dimensional human body mesh model as input and outputs the bone-embedded three-dimensional human body mesh model.

For the bone-embedded three-dimensional human body mesh model, the third processing module 403 may further perform skin binding, that is, assign to the vertices weights corresponding to the positions of the N bones, so as to obtain a drivable three-dimensional human body mesh model. How skin binding is performed is also not limited; for example, it may be realized using a pre-trained network model.

After the series of processing, the required drivable three-dimensional human body mesh model can be obtained. Preferably, based on the obtained drivable three-dimensional human body mesh model, a three-dimensional human body animation can be further generated.

Accordingly, the third processing module 403 may be further configured to acquire a motion sequence and to generate a three-dimensional human body animation according to the acquired motion sequence and the drivable three-dimensional human body mesh model.

The motion sequence may be an SMPL motion sequence.

For the SMPL motion sequence, the third processing module 403 may first migrate it to obtain a motion sequence of N key points, where the N key points are the N vertices of the skeleton tree, and may then drive the drivable three-dimensional human body mesh model with the motion sequence of the N key points to obtain the required three-dimensional human body animation.

A standardized SMPL motion sequence usually corresponds to 24 key points, whereas N is usually not 24 (for example, N may be 17), so the SMPL motion sequence needs to be migrated to the skeleton tree of N vertices (key points) to obtain a motion sequence of N key points.

After the motion sequence of the N key points is obtained, it can be used to drive the previously obtained drivable three-dimensional human body mesh model, thereby obtaining the final three-dimensional human body animation.

For a specific work flow of the apparatus embodiment shown in fig. 4, reference is made to the related description in the foregoing method embodiment, and details are not repeated.

In short, with the apparatus of the embodiments of the present disclosure, a drivable three-dimensional human body mesh model and a three-dimensional human body animation can be generated for any input two-dimensional picture and SMPL motion sequence. Although some network models may be used, they are relatively simple; compared with the existing approach of generating a drivable three-dimensional human body mesh model directly with a trained network model, resource consumption is reduced. The apparatus is applicable to any clothed person and any motion sequence, and therefore has wide applicability.

The solution of the present disclosure can be applied to the field of artificial intelligence, in particular to fields such as computer vision and deep learning.

Artificial intelligence is the discipline of studying how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and involves both hardware and software technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in Fig. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 may also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the various methods and processes described above, such as the methods described in this disclosure. For example, in some embodiments, the methods described in this disclosure may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the methods described in the present disclosure may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the methods described in the present disclosure by any other suitable means (for example, by means of firmware).

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain. Cloud computing refers to accessing an elastically scalable, shared pool of physical or virtual resources through a network, where the resources may include servers, operating systems, networks, software, applications, and storage devices, and where the resources can be deployed and managed on demand in a self-service manner; cloud computing can provide efficient and powerful data processing capabilities for technical applications and model training in artificial intelligence, blockchain, and other fields.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
