VR interaction method, system, mobile terminal and computer readable storage medium


1. A VR interaction method, the method comprising:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features;

and generating VR interaction information according to the somatosensory posture and the posture force, and performing a VR interaction operation on the VR interaction scene according to the VR interaction information.

2. The VR interaction method of claim 1, wherein said acquiring the action features of the user within the VR interaction scene to obtain the action acquisition features comprises:

performing image acquisition on the user in the VR interaction scene to obtain acquired images, and performing feature extraction on the acquired images to obtain image features;

performing pose estimation according to the image features to obtain the skeleton position features, wherein the skeleton position features comprise correspondences between different skeleton key points and their position coordinates;

and performing haptic acquisition on the user in the VR interaction scene to obtain a haptic acquisition signal, and determining action force values of the user according to the haptic acquisition signal to obtain the action force features, wherein the action force features comprise correspondences between different action force acquisition points on the user and their action force values.

3. The VR interaction method of claim 2, wherein said performing the time-series synchronization on the action acquisition features to obtain the action time-series features comprises:

respectively determining the acquisition frequencies of the image acquisition and the haptic acquisition, and determining a target time series according to the determined acquisition frequencies, wherein the target time series comprises the acquisition time points shared by the acquired images and the haptic acquisition signal;

and performing feature screening on the skeleton position features and the action force features respectively according to the target time series, and sorting the screened skeleton position features and action force features respectively by acquisition time point to obtain the skeletal time-series features and the force time-series features.

4. The VR interaction method of claim 2, wherein said determining the somatosensory posture of the user according to the skeletal time-series features comprises:

determining a movement trajectory of each skeleton key point according to the skeletal time-series features, and combining the movement trajectories of the different skeleton key points to obtain a combined trajectory;

and matching the combined trajectory against a pre-stored somatosensory posture lookup table to obtain the somatosensory posture corresponding to the combined trajectory, wherein the somatosensory posture lookup table stores correspondences between different combined trajectories and somatosensory postures.

5. The VR interaction method of claim 1, wherein said generating the VR interaction information according to the somatosensory posture and the posture force comprises:

determining a posture duration of the somatosensory posture, and matching the posture duration, posture identifier and posture force of the somatosensory posture against a pre-stored interaction information lookup table to obtain the VR interaction information, wherein the interaction information lookup table stores correspondences between different posture durations, posture identifiers and posture forces and the corresponding VR interaction information.

6. The VR interaction method of claim 5, wherein said performing the VR interaction operation on the VR interaction scene according to the VR interaction information comprises:

acquiring a scene identifier of the VR interaction scene, and determining scene transition information according to the scene identifier and the VR interaction information, wherein the scene transition information comprises a VR transition image and an interactive response corresponding to the scene identifier and the VR interaction information;

and performing an image transition operation on the VR interaction scene according to the VR transition image in the scene transition information, and issuing an information response to the user according to the interactive response in the scene transition information.

7. The VR interaction method of claim 2, further comprising, after said performing the image acquisition on the user within the VR interaction scene to obtain the acquired images:

performing convolution processing on the acquired images according to a preset erosion operator, and determining, in each convolved acquired image, a mapping region corresponding to the erosion operator;

and acquiring the minimum pixel value within the mapping region, and replacing designated pixels in the acquired image with that minimum pixel value.

8. A VR interaction system, the system comprising:

an image projection module, configured to determine a scene to be interacted with according to a VR interaction instruction issued by a user, and perform three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

a feature acquisition module, configured to acquire action features of the user in the VR interaction scene to obtain action acquisition features, and perform time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

a somatosensory posture determination module, configured to determine a somatosensory posture of the user according to the skeletal time-series features, and determine a posture force corresponding to the somatosensory posture according to the force time-series features;

and a VR interaction module, configured to generate VR interaction information according to the somatosensory posture and the posture force, and perform a VR interaction operation on the VR interaction scene according to the VR interaction information.

9. A mobile terminal comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

and determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features.

10. A computer-readable storage medium for VR interaction, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

and determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features.

Background

Virtual Reality (VR) technology is an information technology that constructs an immersive human-computer interaction environment from computable information. A computer is used to create an artificial virtual environment: a comprehensive, multi-sensory artificial environment that is primarily visual but also includes auditory and tactile perception. People can sense this virtual world through sensory channels such as vision, hearing, touch and acceleration, and can interact with it in the most natural ways, such as movement, voice, expression, gesture and gaze, creating an immersive, first-person experience.

In the existing VR interaction process, a user interacts with a VR interaction scene through an operation handle. When the interaction the user needs to input involves complex steps, operating the handle becomes cumbersome, which degrades the user experience.

Disclosure of Invention

Embodiments of the present invention provide a VR interaction method, system, mobile terminal and computer-readable storage medium, and aim to solve the problem of poor user experience caused by cumbersome handle operation in the existing VR interaction process.

An embodiment of the invention is realized as a VR interaction method comprising the following steps:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features;

and generating VR interaction information according to the somatosensory posture and the posture force, and performing a VR interaction operation on the VR interaction scene according to the VR interaction information.

Preferably, the acquiring the action features of the user in the VR interaction scene to obtain the action acquisition features includes:

performing image acquisition on the user in the VR interaction scene to obtain acquired images, and performing feature extraction on the acquired images to obtain image features;

performing pose estimation according to the image features to obtain the skeleton position features, wherein the skeleton position features comprise correspondences between different skeleton key points and their position coordinates;

and performing haptic acquisition on the user in the VR interaction scene to obtain a haptic acquisition signal, and determining action force values of the user according to the haptic acquisition signal to obtain the action force features, wherein the action force features comprise correspondences between different action force acquisition points on the user and their action force values.

Preferably, the performing the time-series synchronization on the action acquisition features to obtain the action time-series features includes:

respectively determining the acquisition frequencies of the image acquisition and the haptic acquisition, and determining a target time series according to the determined acquisition frequencies, wherein the target time series comprises the acquisition time points shared by the acquired images and the haptic acquisition signal;

and performing feature screening on the skeleton position features and the action force features respectively according to the target time series, and sorting the screened skeleton position features and action force features respectively by acquisition time point to obtain the skeletal time-series features and the force time-series features.

Preferably, the determining the somatosensory posture of the user according to the skeletal time-series features includes:

determining a movement trajectory of each skeleton key point according to the skeletal time-series features, and combining the movement trajectories of the different skeleton key points to obtain a combined trajectory;

and matching the combined trajectory against a pre-stored somatosensory posture lookup table to obtain the somatosensory posture corresponding to the combined trajectory, wherein the somatosensory posture lookup table stores correspondences between different combined trajectories and somatosensory postures.

Preferably, the generating the VR interaction information according to the somatosensory posture and the posture force includes:

determining a posture duration of the somatosensory posture, and matching the posture duration, posture identifier and posture force of the somatosensory posture against a pre-stored interaction information lookup table to obtain the VR interaction information, wherein the interaction information lookup table stores correspondences between different posture durations, posture identifiers and posture forces and the corresponding VR interaction information.

Preferably, the performing the VR interaction operation on the VR interaction scene according to the VR interaction information includes:

acquiring a scene identifier of the VR interaction scene, and determining scene transition information according to the scene identifier and the VR interaction information, wherein the scene transition information comprises a VR transition image and an interactive response corresponding to the scene identifier and the VR interaction information;

and performing an image transition operation on the VR interaction scene according to the VR transition image in the scene transition information, and issuing an information response to the user according to the interactive response in the scene transition information.

Preferably, after the performing the image acquisition on the user in the VR interaction scene to obtain the acquired images, the method further includes:

performing convolution processing on the acquired images according to a preset erosion operator, and determining, in each convolved acquired image, a mapping region corresponding to the erosion operator;

and acquiring the minimum pixel value within the mapping region, and replacing designated pixels in the acquired image with that minimum pixel value.

It is another object of an embodiment of the present invention to provide a VR interaction system, including:

an image projection module, configured to determine a scene to be interacted with according to a VR interaction instruction issued by a user, and perform three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

a feature acquisition module, configured to acquire action features of the user in the VR interaction scene to obtain action acquisition features, and perform time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

a somatosensory posture determination module, configured to determine a somatosensory posture of the user according to the skeletal time-series features, and determine a posture force corresponding to the somatosensory posture according to the force time-series features;

and a VR interaction module, configured to generate VR interaction information according to the somatosensory posture and the posture force, and perform a VR interaction operation on the VR interaction scene according to the VR interaction information.

Another object of an embodiment of the present invention is to provide a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the following steps:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

and determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features.

It is another object of an embodiment of the present invention to provide a computer-readable storage medium for VR interaction, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

and determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features.

According to embodiments of the invention, the scene to be interacted with is determined from the VR interaction instruction issued by the user, and the VR interaction scene can be generated effectively by projection from that scene. Acquiring the user's action features in the VR interaction scene collects the user's skeleton position features and action force features; the skeleton position features characterize the user's posture, and the action force features characterize the force of that posture. Time-series synchronization of the action acquisition features adjusts the skeleton position features and the action force features to the same acquisition frequency, improving the temporal consistency between the skeletal time-series features and the force time-series features. The somatosensory posture of the user can then be determined from the skeletal time-series features, and the posture force corresponding to each somatosensory posture from the force time-series features. From the somatosensory posture and the posture force, VR interaction information corresponding to the user's posture is generated, and the VR interaction operation is performed on the VR interaction scene according to that information. The user can thus interact with the VR scene directly through operation postures, without an operation handle, which improves the user experience.

Drawings

Fig. 1 is a flowchart of a VR interaction method according to a first embodiment of the present invention;

Fig. 2 is a flowchart of a VR interaction method according to a second embodiment of the present invention;

Fig. 3 is a schematic structural diagram of a VR interaction system according to a third embodiment of the present invention;

Fig. 4 is a schematic structural diagram of a mobile terminal according to a fourth embodiment of the present invention.

Detailed Description

The advantages of the invention are further illustrated below through the description of specific embodiments in conjunction with the accompanying drawings.

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.

The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may, depending on the context, be interpreted as "upon", "when" or "in response to determining".

In the following description, suffixes such as "module", "component" or "unit" are used to denote elements only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.

Example one

Referring to fig. 1, a flowchart of a VR interaction method according to a first embodiment of the present invention is shown. The VR interaction method can be applied to any mobile terminal, including a mobile phone, a tablet or a wearable smart device, and includes the steps of:

Step S10, determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

In this step, the scene to be interacted with that corresponds to the VR interaction instruction is obtained by extracting an instruction identifier from the VR interaction instruction and matching it against a pre-stored scene lookup table. The instruction identifier may be carried in the VR interaction instruction as digits or letters. For example, when the VR interaction instruction is delivered as a voice instruction, the voice instruction is transcribed to obtain a voice text, and the instruction identifier is extracted from the voice text.
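A minimal Python sketch of this identifier-to-scene lookup is given below; the table contents, the identifier format and the function name are illustrative assumptions, not taken from the patent:

    import re

    # Assumed identifier format: one letter followed by two digits, e.g. "S02".
    SCENE_TABLE = {
        "S01": "museum_hall",
        "S02": "racing_track",
    }

    def scene_from_instruction(voice_text):
        """Extract the instruction identifier from a voice text and map it to a scene."""
        match = re.search(r"\b[A-Z]\d{2}\b", voice_text)
        return SCENE_TABLE.get(match.group()) if match else None

    print(scene_from_instruction("open scene S02 please"))  # -> "racing_track"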

In this step, the scene to be interacted with is a scene image, and the determined scene is rendered as a three-dimensional image, achieving the effect of three-dimensional image projection and yielding the VR interaction scene corresponding to the VR interaction instruction.

Step S20, acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features;

The action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features. The skeletal time-series features record the correspondence between skeleton position features and their acquisition time points; the force time-series features record the correspondence between action force features and their acquisition time points. In this step, by acquiring the user's action features in the VR interaction scene, the skeleton position features and action force features of the user can be collected effectively; the skeleton position features characterize the user's posture, and the action force features characterize the force of that posture.

In this step, time-series synchronization of the action acquisition features adjusts the skeleton position features and the action force features to the same acquisition frequency, improving the temporal consistency between the skeletal time-series features and the force time-series features. Optionally, the higher-frame-rate stream can be downsampled so that the skeleton position features and the action force features share the same acquisition frame rate, further ensuring their temporal consistency.
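As a hedged illustration of the downsampling option, the sketch below assumes a 60 Hz skeleton stream and a 30 Hz force stream; the shapes and rates are invented for the example:

    import numpy as np

    skeleton_60hz = np.random.rand(60, 17, 3)  # 1 s of 17 key points, (x, y, z)
    force_30hz = np.random.rand(30, 4)         # 1 s of 4 force acquisition points

    # Keep every 2nd skeleton frame so both streams share a 30 Hz frame rate.
    ratio = len(skeleton_60hz) // len(force_30hz)
    skeleton_synced = skeleton_60hz[::ratio]

    assert len(skeleton_synced) == len(force_30hz)  # streams now aligned in rate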

Step S30, determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features;

The skeleton position features comprise correspondences between different skeleton key points and their position coordinates. The skeletal time-series features therefore give the position coordinates of the different skeleton key points at each acquisition time point, from which the movement trajectory of each key point is obtained. The somatosensory posture of the user, i.e. the user's operation posture, can then be determined from these movement trajectories. The skeleton key points can be chosen as required; for example, they may include the palms, soles or fingers.

In this step, because the skeleton position features and the action force features have been synchronized, the skeletal time-series features and the force time-series features share the same acquisition time points. The action force features at the acquisition time point of the detected somatosensory posture are therefore looked up in the force time-series features, yielding the posture force of that somatosensory posture.

Step S40, generating VR interaction information according to the somatosensory posture and the posture force, and performing a VR interaction operation on the VR interaction scene according to the VR interaction information;

Optionally, in this step, the generating the VR interaction information according to the somatosensory posture and the posture force includes:

determining the posture duration of the somatosensory posture, and matching the posture duration, posture identifier and posture force of the somatosensory posture against a pre-stored interaction information lookup table to obtain the VR interaction information;

In this step, the posture duration of the somatosensory posture is determined by looking up the acquisition time points corresponding to the somatosensory posture in the skeletal time-series features. Matching the posture duration, posture identifier and posture force against the pre-stored interaction information lookup table effectively determines the VR interaction information corresponding to that combination of somatosensory posture and posture force.
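One way to realize such a lookup table is sketched below; the key layout (posture identifier, duration bucket, force bucket), the cutoffs and the table entries are assumptions for illustration only:

    # Assumed cutoffs: 1.0 s for duration, 10.0 sensor units for force.
    INTERACTION_TABLE = {
        ("push", "short", "light"): "select_object",
        ("push", "short", "heavy"): "throw_object",
        ("push", "long", "heavy"): "open_door",
    }

    def lookup_interaction(posture_id, duration_s, force):
        duration = "long" if duration_s >= 1.0 else "short"
        weight = "heavy" if force >= 10.0 else "light"
        return INTERACTION_TABLE.get((posture_id, duration, weight))  # None if unmatched

    print(lookup_interaction("push", 0.4, 15.0))  # -> "throw_object"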

Further, in this step, the performing the VR interaction operation on the VR interaction scene according to the VR interaction information includes:

acquiring the scene identifier of the VR interaction scene, and determining scene transition information according to the scene identifier and the VR interaction information;

Different VR interaction scenes have different scene identifiers, and the scene transition information comprises the VR transition image and the interactive response corresponding to the scene identifier and the VR interaction information.

performing an image transition operation on the VR interaction scene according to the VR transition image in the scene transition information, and issuing an information response to the user according to the interactive response in the scene transition information;

in the step, information response is carried out on the user through interactive response in the scene gradient information, interactive feedback can be effectively carried out on operation instructions under the combination of different somatosensory postures and posture dynamics, and the interactive response comprises response modes such as voice response, vibration response or rotation response.

In this embodiment, the scene to be interacted with is determined from the VR interaction instruction issued by the user, and the VR interaction scene is generated effectively by projection from that scene. Acquiring the user's action features in the VR interaction scene collects the skeleton position features and action force features of the user; the former characterize the user's posture and the latter the force of that posture. Time-series synchronization adjusts the two feature sets to the same acquisition frequency, improving the temporal consistency between the skeletal time-series features and the force time-series features, from which the somatosensory posture of the user and the posture force corresponding to each somatosensory posture are determined effectively. VR interaction information corresponding to the user's posture is then generated from the somatosensory posture and the posture force, and the VR interaction operation is performed on the VR interaction scene according to that information. The user can thus interact with the VR scene directly through operation postures, without an operation handle, which improves the user experience.

Example two

Please refer to fig. 2, a flowchart of a VR interaction method according to a second embodiment of the present invention. This embodiment further details step S20 and includes the steps of:

Step S21, performing image acquisition on the user in the VR interaction scene to obtain acquired images, and performing feature extraction on the acquired images to obtain image features;

The acquired images can be captured by filming the user in the VR interaction scene in real time with any image acquisition device, for example any camera-equipped capture device.

In this step, the acquired image is fed into a pre-trained convolutional network for feature extraction to obtain the image features. The convolutional network can be chosen as required; for example, a VGG (Visual Geometry Group) network may be used, and feeding the acquired image into the pre-trained network extracts the image features corresponding to the user in that image.
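A minimal feature-extraction sketch with torchvision's VGG16 follows (torchvision >= 0.13 assumed); the patent names VGG only as one possible backbone, so the model choice and preprocessing are illustrative:

    import torch
    from torchvision import models, transforms

    # Pre-trained VGG16 convolutional layers only, in inference mode.
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def extract_features(pil_image):
        """Return the convolutional feature map of one acquired image."""
        x = preprocess(pil_image).unsqueeze(0)  # (1, 3, 224, 224)
        with torch.no_grad():
            return vgg(x)                       # (1, 512, 7, 7) feature map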

Optionally, in this step, after the image acquisition of the user in the VR interaction scene yields the acquired images, the method further includes:

performing convolution processing on the acquired images according to a preset erosion operator, and determining, in each convolved acquired image, the mapping region corresponding to the erosion operator;

acquiring the minimum pixel value within the mapping region, and replacing designated pixels in the acquired image with that minimum pixel value.

The size of the preset erosion operator can be set as required. In this step, the minimum pixel value within the mapping region is obtained by reading the value of each pixel in the mapping region and taking the minimum of those values.

In this step, replacing the designated pixels in the acquired image with the minimum pixel value removes boundary points and meaningless pixels and shrinks object boundaries inward, improving the image quality of the acquired image.
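The erosion step can be sketched in a few lines of NumPy; the 3x3 flat structuring element is an assumed default, since the patent leaves the operator size open:

    import numpy as np

    def erode(image, k=3):
        """Replace each pixel by the minimum value inside the k x k mapping region."""
        pad = k // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.empty_like(image)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] = padded[i:i + k, j:j + k].min()  # min over region
        return out

    img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    eroded = erode(img)  # boundaries shrink inward; isolated bright pixels vanish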

Step S22, performing pose estimation according to the image features to obtain the skeleton position features;

In this step, the skeleton position features are obtained by feeding the image features into a pre-trained pose estimation network for posture analysis; the pose estimation network may be a lightweight pose estimation network (SNN network).

Step S23, performing haptic acquisition on the user in the VR interaction scene to obtain a haptic acquisition signal, and determining the action force values of the user according to the haptic acquisition signal to obtain the action force features;

In this step, the haptic acquisition signal can be collected by force/tactile sensors, and the action force features are obtained by determining the change in the haptic acquisition signal and deriving the user's action force value from that change.
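Since the patent only states that the force value is derived from the change of the signal, the linear sensor model in the sketch below is a loud assumption:

    def force_value(signal, newtons_per_unit=0.5):
        """Map the change of a haptic sensor signal over a window to a force value."""
        delta = signal[-1] - signal[0]       # signal change over the sampling window
        return max(0.0, delta * newtons_per_unit)

    print(force_value([0.2, 0.5, 1.4]))  # change of 1.2 units -> 0.6 N (assumed scale)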

Optionally, in this step, both the positions and the number of the action force acquisition points can be set as required; for example, acquisition points may be placed at the user's palms, fingers, head or lower legs.

Step S24, respectively determining the acquisition frequencies of the image acquisition and the haptic acquisition, and determining a target time series according to the determined acquisition frequencies;

The target time series comprises the acquisition time points shared by the acquired images and the haptic acquisition signal. In this step, the common frequency of the image acquisition and the haptic acquisition is determined, the acquisition time points common to both are determined from that frequency, and the target time series is constructed from those common time points.

Step S25, performing feature screening on the skeleton position features and the action force features respectively according to the target time series, and sorting the screened skeleton position features and action force features respectively by acquisition time point to obtain the skeletal time-series features and the force time-series features;

the method comprises the steps of obtaining a skeleton position characteristic and an action strength characteristic of a target time sequence, respectively extracting the characteristics of corresponding acquisition points in the skeleton position characteristic and the action strength characteristic according to public acquisition time points in the target time sequence so as to achieve a characteristic screening effect on the skeleton position characteristic and the action strength characteristic, and respectively sequencing the skeleton position characteristic and the action strength characteristic according to the public acquisition time points corresponding to the skeleton position characteristic and the action strength characteristic after the characteristics are screened so as to obtain the skeleton time sequence characteristic and the action strength time sequence characteristic.

Optionally, in this step, the determining the somatosensory posture of the user according to the skeletal time-series features includes:

determining the movement trajectory of each skeleton key point according to the skeletal time-series features, and combining the movement trajectories of different skeleton key points to obtain a combined trajectory;

matching the combined trajectory against the pre-stored somatosensory posture lookup table to obtain the somatosensory posture corresponding to the combined trajectory;

In this step, if the combined trajectory does not match any entry in the pre-stored somatosensory posture lookup table, it is determined that the combined trajectory does not correspond to a somatosensory posture.
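A toy version of the trajectory lookup, including the no-match case, is sketched below; the direction-code encoding of the combined trajectory is invented for the example, since the patent does not specify one:

    POSTURE_TABLE = {
        (("palm", "up"), ("finger", "up")): "raise_hand",
        (("palm", "forward"), ("finger", "forward")): "push",
    }

    def match_posture(combined_trajectory):
        """Return the somatosensory posture, or None when no table entry matches."""
        return POSTURE_TABLE.get(combined_trajectory)

    print(match_posture((("palm", "forward"), ("finger", "forward"))))  # -> "push"
    print(match_posture((("palm", "down"), ("finger", "down"))))        # -> None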

In this embodiment, image acquisition of the user in the VR interaction scene yields the acquired images, from which the image features corresponding to the user can be extracted effectively. Pose estimation on the image features determines the position coordinates of the different skeleton key points on the user. Haptic acquisition of the user in the VR interaction scene yields the haptic acquisition signal, from which the action force values of the user's operations can be determined. By determining the acquisition frequencies of the image acquisition and the haptic acquisition, the target time series can be constructed effectively, and feature screening of the skeleton position features and the action force features against that time series improves the temporal consistency between them.

Example three

Please refer to fig. 3, a schematic structural diagram of a VR interaction system 100 according to a third embodiment of the present invention, comprising an image projection module 10, a feature acquisition module 11, a somatosensory posture determination module 12 and a VR interaction module 13, wherein:

The image projection module 10 is configured to determine a scene to be interacted with according to a VR interaction instruction issued by a user, and perform three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene.

The feature acquisition module 11 is configured to acquire action features of the user in the VR interaction scene to obtain action acquisition features, and perform time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features.

The feature acquisition module 11 is further configured to: perform image acquisition on the user in the VR interaction scene to obtain acquired images, and perform feature extraction on the acquired images to obtain image features;

perform pose estimation according to the image features to obtain the skeleton position features, wherein the skeleton position features comprise correspondences between different skeleton key points and their position coordinates;

and perform haptic acquisition on the user in the VR interaction scene to obtain a haptic acquisition signal, and determine action force values of the user according to the haptic acquisition signal to obtain the action force features, wherein the action force features comprise correspondences between different action force acquisition points on the user and their action force values.

Optionally, the feature acquisition module 11 is further configured to: respectively determine the acquisition frequencies of the image acquisition and the haptic acquisition, and determine a target time series according to the determined acquisition frequencies, wherein the target time series comprises the acquisition time points shared by the acquired images and the haptic acquisition signal;

and perform feature screening on the skeleton position features and the action force features respectively according to the target time series, and sort the screened skeleton position features and action force features respectively by acquisition time point to obtain the skeletal time-series features and the force time-series features.

Further, the feature acquisition module 11 is further configured to: perform convolution processing on the acquired images according to a preset erosion operator, and determine, in each convolved acquired image, the mapping region corresponding to the erosion operator;

and acquire the minimum pixel value within the mapping region, and replace designated pixels in the acquired image with that minimum pixel value.

The somatosensory posture determination module 12 is configured to determine a somatosensory posture of the user according to the skeletal time-series features, and determine a posture force corresponding to the somatosensory posture according to the force time-series features.

The somatosensory posture determination module 12 is further configured to: determine the movement trajectory of each skeleton key point according to the skeletal time-series features, and combine the movement trajectories of different skeleton key points to obtain a combined trajectory;

and match the combined trajectory against a pre-stored somatosensory posture lookup table to obtain the somatosensory posture corresponding to the combined trajectory, wherein the somatosensory posture lookup table stores correspondences between different combined trajectories and somatosensory postures.

The VR interaction module 13 is configured to generate VR interaction information according to the somatosensory posture and the posture force, and perform a VR interaction operation on the VR interaction scene according to the VR interaction information.

The VR interaction module 13 is further configured to: determine the posture duration of the somatosensory posture, and match the posture duration, posture identifier and posture force of the somatosensory posture against a pre-stored interaction information lookup table to obtain the VR interaction information, wherein the interaction information lookup table stores correspondences between different posture durations, posture identifiers and posture forces and the corresponding VR interaction information.

Optionally, the VR interaction module 13 is further configured to: acquire a scene identifier of the VR interaction scene, and determine scene transition information according to the scene identifier and the VR interaction information, wherein the scene transition information comprises a VR transition image and an interactive response corresponding to the scene identifier and the VR interaction information;

and perform an image transition operation on the VR interaction scene according to the VR transition image in the scene transition information, and issue an information response to the user according to the interactive response in the scene transition information.

In this embodiment, the scene to be interacted with is determined from the VR interaction instruction issued by the user, and the VR interaction scene is generated effectively by projection from that scene. Acquiring the user's action features in the VR interaction scene collects the skeleton position features and action force features of the user; the former characterize the user's posture and the latter the force of that posture. Time-series synchronization adjusts the two feature sets to the same acquisition frequency, improving the temporal consistency between the skeletal time-series features and the force time-series features, from which the somatosensory posture of the user and the posture force corresponding to each somatosensory posture are determined effectively. VR interaction information corresponding to the user's posture is then generated from the somatosensory posture and the posture force, and the VR interaction operation is performed on the VR interaction scene according to that information. The user can thus interact with the VR scene directly through operation postures, without an operation handle, which improves the user experience.

Example four

Fig. 4 is a block diagram of a mobile terminal 2 according to a fourth embodiment of the present application. As shown in fig. 4, the mobile terminal 2 of this embodiment includes a processor 20, a memory 21 and a computer program 22, such as a program implementing the VR interaction method, stored in the memory 21 and executable on the processor 20. The processor 20, when executing the computer program 22, implements:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

and determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features.

These steps are, for example, steps S10 to S40 shown in fig. 1, or steps S21 to S25 shown in fig. 2. Alternatively, when the processor 20 executes the computer program 22, the functions of the units in the embodiment corresponding to fig. 3 are implemented, for example the functions of units 10 to 13 shown in fig. 3; see the relevant description of that embodiment, which is not repeated here.

Illustratively, the computer program 22 may be divided into one or more units, which are stored in the memory 21 and executed by the processor 20 to carry out the present application. The one or more units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 22 in the mobile terminal 2. For example, the computer program 22 may be divided into the image projection module 10, the feature acquisition module 11, the somatosensory posture determination module 12 and the VR interaction module 13, each with the functions described above.

The mobile terminal may include, but is not limited to, the processor 20 and the memory 21. Those skilled in the art will appreciate that fig. 4 is merely an example of the mobile terminal 2 and does not limit it; the terminal may include more or fewer components than shown, combine certain components, or use different components. For example, the mobile terminal may also include input/output devices, network access devices, buses, and so on.

The processor 20 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.

The memory 21 may be an internal storage unit of the mobile terminal 2, such as a hard disk or memory of the mobile terminal 2. The memory 21 may also be an external storage device of the mobile terminal 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the mobile terminal 2. Further, the memory 21 may include both an internal storage unit and an external storage device of the mobile terminal 2. The memory 21 is used to store the computer program and the other programs and data required by the mobile terminal, and may also be used to temporarily store data that has been output or is to be output.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

If the integrated module is implemented as a software functional unit and sold or used as a separate product, it may be stored in a computer-readable storage medium, which may be non-volatile or volatile. Based on this understanding, all or part of the flow of the methods of the embodiments described above can be realized by a computer program that is stored in a computer-readable storage medium and, when executed by a processor, implements the steps of those method embodiments. The computer program comprises computer program code, which may take the form of source code, object code, an executable file or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. Note that what a computer-readable storage medium may contain can be expanded or restricted by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable storage media exclude electrical carrier signals and telecommunications signals.

It is another object of an embodiment of the present invention to provide a computer-readable storage medium for VR interaction, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:

determining a scene to be interacted with according to a VR interaction instruction issued by a user, and performing three-dimensional image projection on the scene to be interacted with to obtain a VR interaction scene;

acquiring action features of the user in the VR interaction scene to obtain action acquisition features, and performing time-series synchronization on the action acquisition features to obtain action time-series features, wherein the action acquisition features comprise skeleton position features and action force features, and the action time-series features comprise skeletal time-series features and force time-series features;

and determining a somatosensory posture of the user according to the skeletal time-series features, and determining a posture force corresponding to the somatosensory posture according to the force time-series features.

The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in those embodiments may still be modified, and some technical features may be replaced by equivalents; such modifications and substitutions do not depart in substance from the spirit and scope of the embodiments of the present application and are intended to fall within its scope.
