Reality fusion interaction system and method


1. A fused reality interaction system, comprising:

a naked-eye 3D display device configured to acquire and display a fused reality image;

an interactive device configured to acquire a physical image of a physical object and/or interactive input data of the physical object; and

a processing device configured to respectively acquire a virtual image, the physical image and/or the interactive input data, to map the adjusted virtual image and physical image into a fusion coordinate system according to the interactive input data to form the fused reality image, and to provide the fused reality image to the naked-eye 3D display device.

2. The system according to claim 1, wherein the naked-eye 3D display device is further configured to render and display the acquired fused reality image according to a rendering control instruction of the processing device.

3. The system of claim 1, wherein the interactive device comprises a non-contact acquisition device, the non-contact acquisition device comprising a camera and at least one of: an infrared sensor, a microphone, an ultrasonic sensor, and a 3D sensor.

4. The system of claim 1, wherein the interactive input data comprises at least one of: an overall movement trajectory of the physical object, a movement trajectory of a set part of the physical object, a gesture, posture or expression of the physical object, audio of the physical object, and/or an input event generated by the physical object operating an input device.

5. The system of claim 1, wherein the interactive device further comprises:

a human body tracking sensor configured to track the position of a human body within the viewing range of the naked-eye 3D display device;

correspondingly, the processing device interacts with the human body tracking sensor and is further configured to adjust the fused reality image according to a human body tracking position, so that the fused reality image presents a 3D display effect at the human body tracking position.

6. The system according to claim 1, wherein the number of naked-eye 3D display devices is one or more; the processing device interacts with each naked-eye 3D display device, and is further configured to determine identical or different split-screen reality images from the fused reality image and provide them respectively to each naked-eye 3D display device for display.

7. The system according to claim 1 or 2, wherein the processing device comprises:

a virtual image conversion unit configured to acquire a virtual image and convert the virtual image from a virtual coordinate system to the fusion coordinate system according to a virtual fusion conversion relation;

a physical image conversion unit configured to acquire a physical image and convert the physical image from a physical coordinate system to the fusion coordinate system according to a physical fusion conversion relation;

an image fusion unit configured to superimpose the converted virtual image and physical image in the fusion coordinate system to form the fused reality image;

and an interaction adjustment unit configured to determine an adjustment strategy for the virtual image according to the interactive input data acquired by the interactive device and a preset interaction strategy, adjust the virtual image according to the adjustment strategy, and provide the adjusted virtual image to the virtual image conversion unit.

8. The system of claim 1, wherein the processing device further comprises:

a remote receiving unit configured to receive remotely transmitted interactive input data or a remotely transmitted fused reality image;

and a remote pushing unit configured to provide the interactive input data acquired by the interactive device, or the locally generated fused reality image, to the processing device of another fused reality interaction system via remote transmission.

9. A fused reality interaction method, performed by a processing device of the fused reality interaction system of any one of claims 1-8, the method comprising:

respectively acquiring a virtual image, a physical image and interactive input data, and mapping the adjusted virtual image and physical image into a fusion coordinate system according to the interactive input data to form a fused reality image;

and providing the fused reality image to the naked-eye 3D display device.

10. The method of claim 9, further comprising:

generating a rendering control instruction according to the fused reality image, and sending the rendering control instruction to the naked-eye 3D display device so as to control the naked-eye 3D display device to render and display the acquired fused reality image.

11. The method of claim 9, wherein respectively acquiring a virtual image, a physical image and interactive input data, and mapping the adjusted virtual image and physical image into the fusion coordinate system according to the interactive input data to form the fused reality image comprises:

acquiring a virtual image, determining an adjustment strategy for the virtual image according to the interactive input data collected by the interactive device and a preset interaction strategy, and adjusting the virtual image according to the adjustment strategy;

converting the adjusted virtual image from a virtual coordinate system to the fusion coordinate system according to a virtual fusion conversion relation;

acquiring a physical image, and converting the physical image from a physical coordinate system to the fusion coordinate system according to a physical fusion conversion relation;

and superimposing the converted virtual image and physical image in the fusion coordinate system to form the fused reality image.

12. The method of claim 9, wherein before providing the fused reality image to the naked-eye 3D display device, the method further comprises:

adjusting the fused reality image according to an acquired human body tracking position, so that the fused reality image presents a 3D display effect at the human body tracking position; or

generating fused reality images for a plurality of positions according to the fused reality image, so as to present a 3D display effect respectively at a plurality of positions within the viewing range of the naked-eye 3D display device.

13. The method of claim 9, further comprising:

receiving remotely transmitted interactive input data or a remotely transmitted fused reality image; or

providing the interactive input data acquired by the interactive device, or the locally generated fused reality image, to the processing device of another fused reality interaction system via remote transmission.

Background

With the iterative development of science and technology, virtual reality technology has become increasingly embedded in people's lives. There is now a desire to build an information loop that allows interactive feedback between users in the real world and the virtual world, so as to give users a more realistic experience.

In existing virtual reality, augmented reality and fused reality display interaction systems, a user is required to wear dedicated equipment to experience the virtual world or to interact with a world in which the virtual and the real are interleaved.

However, during the experience the wearable device must remain in contact with the user's body. The user's range of movement and field of view may therefore be limited, and the user may be unable to directly view real-world objects.

Disclosure of Invention

Embodiments of the invention provide a fused reality interaction system and method that overcome the defects of the prior art, so that the user's movement and field of view are not limited during the experience and the user can directly view real targets.

In a first aspect, an embodiment of the present invention provides a fused reality interaction system, comprising:

a naked-eye 3D display device configured to acquire and display a fused reality image;

an interactive device configured to acquire a physical image of a physical object and/or interactive input data of the physical object; and

a processing device configured to respectively acquire a virtual image, the physical image and/or the interactive input data, to map the adjusted virtual image and physical image into a fusion coordinate system according to the interactive input data to form the fused reality image, and to provide the fused reality image to the naked-eye 3D display device.

In a second aspect, an embodiment of the present invention further provides a fused reality interaction method, comprising:

respectively acquiring a virtual image, a physical image and interactive input data, and mapping the adjusted virtual image and physical image into a fusion coordinate system according to the interactive input data to form a fused reality image;

and providing the fused reality image to the naked-eye 3D display device.

By adopting an interaction system in which the fused reality image is displayed by a naked-eye 3D display device, the embodiments of the invention solve the problem that the user's movement and field of view are limited because equipment must be worn during the experience, and achieve the effect that the user can directly view real targets during the experience.

Drawings

Fig. 1 is a schematic structural diagram of a fused reality interaction system according to a first embodiment of the present invention;

fig. 2 is a schematic structural diagram of a fused reality interaction system according to a second embodiment of the present invention;

fig. 3A is a flowchart of a fused reality interaction method according to a third embodiment of the present invention;

fig. 3B is a schematic diagram of a fused reality interaction method according to the third embodiment of the present invention;

fig. 3C is a schematic diagram of multi-person remote interaction in the fused reality interaction method according to the third embodiment of the present invention.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.

Example One

Fig. 1 is a schematic structural diagram of a fused reality interaction system according to the first embodiment of the present invention. As shown in fig. 1, the fused reality interaction system includes: a naked-eye 3D display device 110, an interactive device 120 and a processing device 130.

The naked-eye 3D display device 110 is configured to acquire and display the fused reality image.

The interactive device 120 is configured to acquire a physical image of a physical object and/or interactive input data of the physical object.

The processing device 130 is configured to acquire a virtual image, the physical image and/or the interactive input data, map the adjusted virtual image and physical image into a fusion coordinate system according to the interactive input data to form the fused reality image, and provide the fused reality image to the naked-eye 3D display device.

The physical object may be the user, a limb of the user, or any real-world object, and may be captured by an acquisition device such as a camera in the interactive device 120. The virtual image may be model image data acquired from any software platform or software engine. The fused reality image may be a new world-model image generated by combining, according to a mapping relation, the real-world model image acquired through the interactive device with an existing virtual-world model image.

The naked-eye 3D display device 110 is a display that, through the light-splitting characteristic of its screen, projects parallax views to the left and right eyes respectively to form a stereoscopic effect. It may have a single screen or multiple screens, and the number of naked-eye 3D display devices may be one or more. The light-splitting element includes, but is not limited to, a lenticular lens, a grating, a directional point light source, and the like. The naked-eye 3D display device 110 can present a realistic stereoscopic image with space and depth without any auxiliary device (e.g., 3D glasses or a helmet).

Optionally, the naked-eye 3D display device is further configured to render and display the acquired fused reality image according to a rendering control instruction of the processing device. The rendering control instruction may, for example, direct hardware embedded in the naked-eye 3D display to complete 3D content generation, interleaving processing, and the like.

Optionally, the interactive device 120 may be a contact acquisition device or a non-contact acquisition device. Preferably, the interactive device 120 is a non-contact acquisition device, which may be a camera, an infrared sensor, a microphone, an ultrasonic sensor, a 3D sensor, or the like.

The interactive device 120 may be configured to acquire a physical image of the physical object, where the physical image may be acquired by an image acquisition device in the interactive device 120, and the image acquisition device may be a camera, a 3D laser scanner or a 3D visual image sensor, which is not limited in this embodiment of the present invention. The interactive device 120 is further configured to collect interactive input data of the physical object.

Optionally, the interactive input data includes at least one of: an overall movement trajectory of the physical object, a movement trajectory of a set part of the physical object, a gesture, posture or expression of the physical object, audio of the physical object, an input event generated by the physical object operating an input device, and the like. The movement trajectory of a set part is, for example, an eyeball movement trajectory or an arm movement trajectory.

The overall movement trajectory of the physical object may be obtained through a human body tracking sensor in the interactive device 120; the human body tracking sensor can track the position of a human body within the viewing range of the naked-eye 3D display device 110 and may be a human infrared sensor or any other sensor capable of tracking a human body. The eyeball movement trajectory of the physical object can be obtained by a device capable of tracking human eyes, such as an intelligent visual tracker. Gestures, postures and expressions of the physical object may be acquired by devices such as a posture sensor. Audio of the physical object may be obtained through a voice acquisition device such as a microphone, and input events may be obtained through an input device such as a handle, a bracelet or a glove, which is not limited in this embodiment of the present invention.
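
These channels can be carried together in a single record so that the processing device 130 can see, frame by frame, which inputs are present. A minimal Python sketch under that assumption; every field name here is illustrative rather than prescribed by the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in the physical coordinate system

@dataclass
class InteractionInput:
    """One sample of interactive input data collected by the interactive device.

    All field names are illustrative; an implementation may carry only the
    subset of channels its sensors actually provide.
    """
    body_track: List[Point3D] = field(default_factory=list)   # overall movement trajectory
    eye_track: List[Point3D] = field(default_factory=list)    # eyeball movement trajectory
    part_track: List[Point3D] = field(default_factory=list)   # trajectory of a set part (e.g. an arm)
    gesture: Optional[str] = None                              # recognized gesture label
    posture: Optional[str] = None                              # recognized posture label
    expression: Optional[str] = None                           # recognized facial expression
    audio_chunk: Optional[bytes] = None                        # raw audio from the microphone
    input_event: Optional[str] = None                          # event from a handle/bracelet/glove
```

Carrying all channels in one record makes it straightforward for the processing device to decide, per frame, which of them should drive an adjustment.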

Specifically, if the real-time performance of the interactive device 120 cannot meet the requirement of acquiring a 3D model of the physical object while it changes (for example as the object moves, a set part of it moves, or its gestures and expressions change), the image of the physical object may be captured and entered into the system in advance, and only the position of the physical object and the like is tracked in real time.

The processing device 130 may be a client host, connected to the interactive device 120 and the naked-eye 3D display device 110 through a transmission medium; the transmission mode may be wired or wireless communication. The processing device 130 may receive the interactive input data of the physical object collected by the interactive device 120 and process it.

Optionally, the processing device 130 is further configured to adjust the fused reality image according to the interactive input data. The processing device 130 may process the real position coordinates according to the overall movement trajectory of the physical object, the movement trajectory of its set parts, and its gestures, postures and expressions in the interactive input data, and adjust the coordinates of the fused world to generate a new fused reality image.

Optionally, the processing device 130 may interact with the human body tracking sensor, and is configured to adjust the fused reality image according to a human body tracking position, so that the fused reality image presents a 3D display effect at the human body tracking position.

The processing device 130 may further adjust the corresponding view of the fused world according to the eyeball movement trajectory of the physical object in the interactive input data, and output the corresponding view through the naked-eye 3D display device 110. The corresponding view may be a single view or multiple views; the multi-view case serves multiple viewers at once, that is, at least three views are displayed at fixed display positions.

The processing device 130 may be connected to the naked-eye 3D display device 110 and output the corresponding view of the fused world. One or more naked-eye 3D display devices 110 may be used, and the number of screens may be increased as required to enhance the display effect or increase the displayed content.

Optionally, the processing device 130 interacts with each naked-eye 3D display device 110, determines identical or different split-screen reality images from the fused reality image, and provides them to each naked-eye 3D display device 110 for display.

According to the technical scheme of this embodiment, an interaction system in which the fused reality image is displayed by a naked-eye 3D display device solves the problem that the user's movement and field of view are limited because equipment must be worn during the experience, and achieves the effect that the user can directly view real targets during the experience.

Example Two

Fig. 2 is a schematic structural diagram of a fused reality interaction system according to a second embodiment of the present invention, which further details the first embodiment. As shown in fig. 2, the fused reality interaction system includes: a naked-eye 3D display device 110, an interactive device 120 and a processing device 130.

Optionally, the processing device 130 includes:

a virtual image conversion unit 131, configured to acquire a virtual image and convert the virtual image from a virtual coordinate system to the fusion coordinate system according to a virtual fusion conversion relation;

a physical image conversion unit 132, configured to acquire a physical image and convert the physical image from a physical coordinate system to the fusion coordinate system according to a physical fusion conversion relation;

an image fusion unit 133, configured to superimpose the converted virtual image and physical image in the fusion coordinate system to form the fused reality image;

and an interaction adjustment unit 134, configured to determine an adjustment strategy for the virtual image according to the interactive input data acquired by the interactive device and a preset interaction strategy, adjust the virtual image according to the adjustment strategy, and provide the adjusted virtual image to the virtual image conversion unit.

The processing device 130 may further include:

a remote receiving unit 135, configured to receive remotely transmitted interactive input data or a remotely transmitted fused reality image;

and a remote pushing unit 136, configured to provide the interactive input data acquired by the interactive device, or the locally generated fused reality image, to the processing device of another fused reality interaction system via remote transmission.

The virtual image is an image of a 3D model of the virtual world, and the virtual coordinate system is the coordinate system in which the 3D model is located in the virtual world. The virtual image conversion unit 131 establishes a mapping relationship between the virtual world coordinate system and the fused world coordinate system, and converts the virtual image from the virtual coordinate system into the fusion coordinate system according to that mapping relationship.

The physical image is an image of a real-world 3D model obtained through the interactive device 120, and the physical coordinate system is the coordinate system in which the actual target object in the real world is located, usually with the display device or an artificially chosen point as the origin. The physical image conversion unit 132 establishes a mapping relationship between the real world coordinate system and the fused world coordinate system, and converts the physical image from the physical coordinate system into the fusion coordinate system according to that mapping relationship.

The fusion coordinate system is the world coordinate system in which the physical coordinate system of the real-world target object and the virtual coordinate system of the virtual-world 3D model are fused together, i.e., the fused world coordinate system. The fused world coordinate system changes constantly as the position of the target object acquired by the interactive device 120 changes. The fused reality image likewise changes continuously in real time with changes of the target object image acquired by the interactive device 120, and is finally displayed by the naked-eye 3D display device 110.

The interaction strategy may be implemented by encapsulating scripts on a virtual object model of the virtual world through any software platform or engine, giving the virtual object model corresponding functions according to specific logic, and issuing corresponding action instructions according to the interaction input data acquired through the interactive device 120. The adjustment strategy is the corresponding dynamic adjustment of the virtual image, such as changes in its position, size and orientation, made according to the interaction strategy.

In a specific example, the interactive device 120 first collects the interactive input data of the physical object in real time, including the movement trajectory of the object and its gestures, postures and expression changes. The interaction adjustment unit 134 then dynamically adjusts the virtual image according to the data collected by the interactive device 120 (such as gesture, posture and expression changes) and the preset interaction strategy (such as a series of action instructions predefined for the virtual-world object in response to those changes), including dynamic changes of the position, size and orientation of the target image. Finally, the adjusted virtual image is provided to the virtual image conversion unit.
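
The behaviour described in this example can be pictured as a lookup from recognized interaction events to dynamic adjustments of the virtual image, followed by a hand-off to the conversion unit. A minimal Python sketch under that assumption; the policy entries, the VirtualImage fields and the function names are illustrative, not the actual implementation of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class VirtualImage:
    """Illustrative stand-in for a virtual-world 3D model instance."""
    position: tuple = (0.0, 0.0, 0.0)   # location in the virtual coordinate system
    scale: float = 1.0                   # uniform size
    yaw_deg: float = 0.0                 # orientation about the vertical axis

# Preset interaction strategy: which dynamic adjustment each recognized event triggers.
INTERACTION_POLICY = {
    "pinch_out":  lambda img: VirtualImage(img.position, img.scale * 1.2, img.yaw_deg),
    "pinch_in":   lambda img: VirtualImage(img.position, img.scale / 1.2, img.yaw_deg),
    "swipe_left": lambda img: VirtualImage(img.position, img.scale, img.yaw_deg - 15.0),
}

def interaction_adjustment_unit(virtual_image, gesture):
    """Unit 134 (sketch): apply the adjustment the preset policy prescribes for this event."""
    adjust = INTERACTION_POLICY.get(gesture)
    return adjust(virtual_image) if adjust else virtual_image

# The adjusted image is then handed to the virtual image conversion unit 131,
# which maps it into the fusion coordinate system (see the mapping formulas in Example Three).
adjusted = interaction_adjustment_unit(VirtualImage(), "pinch_out")
```

Because the policy is plain data, new gestures or virtual objects could be supported by extending the table rather than changing the processing pipeline.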

The remote transmission mode may be any wireless or wired transmission mode, such as optical fiber transmission, twisted-pair transmission, microwave transmission or GPRS wireless data transmission, which is not limited in this embodiment of the present invention.

In a specific example, the remote pushing unit 136 enters the 3D model of a real local object into the system after it is collected by the interactive device 120; the capture mode may be scanning, shooting or the like. After capture is finished, the data is pushed to the remote receiving unit 135, projected into the fused world, and the projected image is displayed through the naked-eye 3D display device 110. Meanwhile, either end, whether on the side of the remote receiving unit 135 or the remote pushing unit 136, may perform interactive operations through its interactive device 120. If the real-time performance of the interactive device 120 cannot meet the requirement of 3D model acquisition of the local object, the 3D model of the local object can be recorded into the system in advance, and only the position of the local object is tracked in real time during remote interaction. The remote interaction scheme involves a coordinate-system transformation between the two remote ends; specifically, the coordinate systems of the screens at the two ends are fused into the same coordinate system according to a predetermined relative relationship.
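
The push/receive exchange between the two systems can run over any ordinary transport. A minimal sketch using Python's standard socket, json and struct modules, assuming a simple length-prefixed JSON message; the message fields, host name and port are illustrative, since the disclosure does not prescribe a protocol:

```python
import json
import socket
import struct

def push_to_remote(host, port, payload):
    """Remote pushing unit 136 (sketch): send one length-prefixed JSON message."""
    data = json.dumps(payload).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(struct.pack("!I", len(data)) + data)

def receive_from_remote(port):
    """Remote receiving unit 135 (sketch): accept one message and return the payload."""
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn:
            size = struct.unpack("!I", _recv_exact(conn, 4))[0]
            return json.loads(_recv_exact(conn, size).decode("utf-8"))

def _recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection early")
        buf += chunk
    return buf

# Example payload: a pre-captured 3D model (as vertices) plus the latest tracked position.
payload = {"model_vertices": [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]],
           "tracked_position": [0.3, 0.0, 1.2]}
# push_to_remote("viewer-host.example", 9470, payload)   # hypothetical host and port
```

A length-prefixed message keeps the sketch transport-agnostic; an actual system might instead stream the pre-captured model once and then push only the tracked position each frame, as the paragraph above suggests.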

The technical scheme of this embodiment can be applied to multi-view scenes watched by multiple people and enables multi-person remote interaction. It solves the problem that the remote end can only watch but not interact, and provides convenience for users who want to interact remotely.

Example Three

Fig. 3A is a flowchart of a fused reality interaction method according to a third embodiment of the present invention. The embodiment is applicable to scenes in which interaction is performed in a virtual reality, augmented reality or fused reality system. The method may be performed by the fused reality interaction system in the embodiments of the present invention, and the system may be implemented by software and/or hardware. As shown in fig. 3A, the method specifically includes the following steps:

Step 310: respectively acquiring a virtual image, a physical image and interactive input data, and mapping the adjusted virtual image and physical image into the fusion coordinate system according to the interactive input data to form the fused reality image.

The physical image can be obtained through the interactive device, the virtual image can be obtained as model image data from any software platform or software engine, and the fused reality image is generated by the processing device.

Optionally, a virtual image is acquired, an adjustment strategy for the virtual image is determined according to the interactive input data collected by the interactive device and a preset interaction strategy, and the virtual image is adjusted according to the adjustment strategy. The adjusted virtual image is converted from the virtual coordinate system to the fusion coordinate system according to a virtual fusion conversion relation. A physical image is acquired and converted from the physical coordinate system to the fusion coordinate system according to a physical fusion conversion relation. The converted virtual image and physical image are then superimposed in the fusion coordinate system to form the fused reality image.

Specifically, the processing device obtains the virtual world coordinate system, i.e., the virtual coordinate system, and determines the mapping relation between the virtual coordinate system and the fusion coordinate system as M_vs; the fusion coordinate system is the coordinate system in which the real-world physical target object and the virtual-world 3D model are fused in the same world. The interactive device acquires the real world coordinate system, i.e., the physical coordinate system, and the mapping relation between the physical coordinate system and the fusion coordinate system is determined as M_rs. M_vs is the mapping matrix from the virtual coordinate system to the fusion coordinate system, and M_rs is the mapping matrix from the physical coordinate system to the fusion coordinate system. The processing system obtains the required 3D model of the virtual world and converts all virtual-world models into the fusion coordinate system according to the mapping relation; that is, for any point P_v in the virtual world, the corresponding point P_sv in the fused world is calculated by:

P_sv[X_sv, Y_sv, Z_sv] = [M_vs] × P_v[X_v, Y_v, Z_v]

where X_sv, Y_sv, Z_sv are the components of the fused-world coordinates along the three spatial directions X, Y, Z, and X_v, Y_v, Z_v are the components of the virtual-world coordinates along X, Y, Z.

The processing system also obtains the 3D model of the corresponding physical target in the real world from the interactive device and converts all such models into the fusion coordinate system according to the mapping relation; that is, for any point P_r in the real world, the corresponding point P_sr in the fused world is calculated by:

P_sr[X_sr, Y_sr, Z_sr] = [M_rs] × P_r[X_r, Y_r, Z_r]

where X_sr, Y_sr, Z_sr are the components of the fused-world coordinates along X, Y, Z, and X_r, Y_r, Z_r are the components of the real-world coordinates along X, Y, Z.
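
Expressed in code, the two mappings above are matrix products applied to every model point. A minimal NumPy sketch, using 4×4 homogeneous matrices so that a translation between coordinate origins can also be represented; the concrete matrix values are purely illustrative:

```python
import numpy as np

def to_fusion(points_xyz, M):
    """Apply a fusion mapping matrix M (4x4, homogeneous) to an Nx3 array of points."""
    homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homogeneous @ M.T)[:, :3]

# Illustrative mappings: the virtual world is scaled by 0.5 and shifted 1 m along Z,
# while the real world is only shifted so that the screen centre becomes the fusion origin.
M_vs = np.diag([0.5, 0.5, 0.5, 1.0]); M_vs[:3, 3] = [0.0, 0.0, 1.0]
M_rs = np.eye(4);                     M_rs[:3, 3] = [0.0, -0.2, 0.0]

P_v = np.array([[1.0, 2.0, 0.0]])   # a point of the virtual 3D model
P_r = np.array([[0.3, 0.2, 0.8]])   # a point of the captured physical model

P_sv = to_fusion(P_v, M_vs)         # P_sv = M_vs x P_v  -> [[0.5, 1.0, 1.0]]
P_sr = to_fusion(P_r, M_rs)         # P_sr = M_rs x P_r  -> [[0.3, 0.0, 0.8]]
fused_model = np.vstack([P_sv, P_sr])  # both models now live in the fusion coordinate system
```

When the tracked position of the target object changes, M_rs (and therefore the fused model) is recomputed, which matches the statement above that the fused world coordinate system changes continuously.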

The interactive input data includes at least one of: the overall movement trajectory of the physical object, the eyeball movement trajectory of the physical object, the movement trajectory of a set part of the physical object, gestures, postures and expressions of the physical object, audio of the physical object, and the like. The device that acquires the interactive input data may be a contact or non-contact acquisition device; preferably, a non-contact acquisition device is adopted. Audio of the physical object can be collected through a microphone, image data of the physical object can be acquired by a 3D vision sensor, the movement trajectory can be obtained by a human infrared sensor, and gestures, expressions and the like can be acquired by a posture sensor; this embodiment does not limit the devices used to obtain the interactive input data. The processing device obtains the interactive input data collected by the interactive device, makes the corresponding adjustments, and generates the fused reality image.

Optionally, obtaining the interactive input data acquired by the interactive device and adjusting the fused reality image according to the interactive input data includes: determining an adjustment strategy for the virtual image according to the interactive input data collected by the interactive device and a preset interaction strategy, and adjusting the virtual image according to the adjustment strategy.

The interaction strategy may describe an action by which a real-world physical object interacts with a virtual-world virtual object, for example a person's hand touching a virtual object. The adjustment strategy is the series of behaviors the virtual object performs in response according to the interaction strategy, for example position changes, zooming the object in or out, and rotating it in response to changes of a person's gestures, postures and expressions. The processing device adjusts the virtual object image of the virtual world according to the adjustment strategy and applies it to the fused world, so that the corresponding target image in the fused world changes.

Step 320: providing the fused reality image to the naked-eye 3D display device.

There may be one or more naked-eye 3D display devices, and the number of screens may be increased as required to enhance the display effect or increase the displayed content.

Optionally, before providing the fused reality image to the naked-eye 3D display device, the method further includes: adjusting the fused reality image according to the acquired human body tracking position so that the fused reality image presents a 3D display effect at the human body tracking position; or generating fused reality images for a plurality of positions according to the fused reality image, so as to present a 3D display effect respectively at a plurality of positions within the viewing range of the naked-eye 3D display device.

Optionally, a rendering control instruction is generated according to the fused reality image and sent to the naked-eye 3D display device, so as to control the naked-eye 3D display device to render and display the acquired fused reality image.

The human body tracking position can be obtained through the interactive device; a depth sensor or a human eye tracking device may be used to obtain three-dimensional spatial information, which may include the position of the viewer, the position of the naked-eye 3D display screen, and the relative position of the viewer and the display screen. The processing device processes the information acquired by the interactive device, determines the positions of the viewer's two eyes, adjusts the left-eye image and the right-eye image according to the change of those positions as the viewer moves in three-dimensional space, interleaves the left-eye and right-eye images into an output image as required by the naked-eye 3D display device, and finally outputs the image to the naked-eye 3D display device for display.

Specifically, the processing device may interleave the output image as required by the naked-eye 3D display, whose light-splitting element may be a lenticular lens that can be regarded as being composed of similarly shaped prisms. Using the refractive light-splitting of the prisms as the 3D display grating, the left-eye and right-eye parallax images on the screen can be separated. Specific implementations include, for example, Prism Mask structure 3D display technology or Lucius Prism array 3D display technology.
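
In the simplest two-view case, interleaving amounts to assigning alternating pixel columns to the left-eye and right-eye images according to the lens layout. A minimal NumPy sketch of that idea; real panels interleave per sub-pixel with a slanted, per-panel calibrated lens, so this column-wise version is only an illustration:

```python
import numpy as np

def interleave_two_views(left_img, right_img, phase=0):
    """Sketch: assign even/odd pixel columns to the left/right eye views.

    `phase` shifts which columns go to which eye; translating it lets the
    optimal viewing position follow a tracked viewer without re-rendering.
    """
    assert left_img.shape == right_img.shape
    out = np.empty_like(left_img)
    cols = np.arange(left_img.shape[1])
    left_cols = (cols + phase) % 2 == 0
    out[:, left_cols] = left_img[:, left_cols]
    out[:, ~left_cols] = right_img[:, ~left_cols]
    return out

# Example: small grayscale frames rendered for the viewer's current eye positions.
left = np.full((4, 6), 10, dtype=np.uint8)
right = np.full((4, 6), 200, dtype=np.uint8)
frame = interleave_two_views(left, right)                    # columns 0,2,4 from left
frame_shifted = interleave_two_views(left, right, phase=1)   # follows a viewer who moved
```

Shifting the `phase` parameter corresponds to the alternative implementation described below, in which the view content stays fixed and only the interleaved output is translated to follow the viewer.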

In this implementation, the processing device can adaptively adjust the left-eye and right-eye images according to the positions of the viewer's two eyes, so that the position of the fused 3D image changes with the viewer's position and viewing angle. As the viewer moves, the content of the corresponding view displayed by the naked-eye 3D display device changes adaptively so as to correspond to the image of the object seen from the viewer's current angle. An alternative implementation is that, as the viewer moves, the content of the corresponding view does not change; only the position of the interleaved output image is translated so that the optimal viewing position follows the viewer's latest position.

As shown in fig. 3B, in a specific example, the interactive device acquires a 3D model of the physical object in real time. When the user interacts with a virtual-world object, the interactive device acquires the interactive input data generated by the interaction event, and the processing device adjusts the displayed virtual image according to the interaction strategy and the adjustment strategy, including dynamic changes in the displayed position and size of the virtual image. The fusion coordinate system changes continuously with changes of the physical and virtual coordinate systems; the processing device converts the virtual coordinate system and the physical coordinate system into the fusion coordinate system and outputs the fused image to the naked-eye 3D display screen according to the corresponding relationship.

In a multi-view scene watched by multiple people, fused reality images are generated for a plurality of positions, so that a 3D display effect is presented at each of a plurality of positions within the viewing range of the naked-eye 3D display device. In this scenario, the processing device handles coordinate transformation and interaction events in the same way as in the scheme above, except that it projects all objects of the fused world into the corresponding views and performs interleaved output. Specifically, the interleaved output can be performed in a multi-view mode without human eye tracking.
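
For the multi-view case without eye tracking, the two-view sketch above generalizes to cycling the pixel columns over N fixed views. Again this is purely illustrative; real lenticular layouts are slanted and calibrated per panel:

```python
import numpy as np

def interleave_multiview(views):
    """Sketch: cycle pixel columns over N fixed views (views: list of HxW arrays)."""
    n = len(views)
    out = np.empty_like(views[0])
    for col in range(out.shape[1]):
        out[:, col] = views[col % n][:, col]
    return out

# Example: five fixed views of the fused world, so viewers at several fixed positions
# each see a consistent stereo pair without any tracking.
views = [np.full((4, 10), 40 * i, dtype=np.uint8) for i in range(5)]
multiview_frame = interleave_multiview(views)
```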

Optionally, the method further includes: receiving remotely transmitted interactive input data or a remotely transmitted fused reality image; or providing the interactive input data acquired by the interactive device, or the locally generated fused reality image, to the processing device of another fused reality interaction system via remote transmission.

In a specific example, as shown in fig. 3C, a presenter and a viewer interact remotely through the system. The presenter collects a 3D model of a local physical object through the interactive device, enters it into the system and pushes it into the viewer's system, where it is projected into the viewer's fused world and can be operated by either the presenter or the viewer through an interactive device. If the real-time performance of the interactive device cannot meet the requirement of 3D model acquisition of the local object, the 3D model of the local object can be recorded into the system in advance, and only the position of the local object is tracked in real time during remote interaction. The remote interaction scheme involves a coordinate-system transformation between the two remote ends; specifically, the coordinate systems of the screens at the two ends are fused into the same coordinate system according to a predetermined relative relationship.

According to the technical scheme of this embodiment, an interaction system in which fused reality images are displayed by a naked-eye 3D display device solves the problems that the user's movement and field of view are limited because equipment must be worn during the experience and that remote interaction is restricted. It achieves the effect that the user can directly view real targets during the experience and can interact remotely, and it is suitable for multi-view display scenes watched by multiple people.

It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
