AR presentation method and system for shared interactive information in space based on a real object
1. An AR presentation method of shared interactive information in space based on a real object is characterized by comprising the following steps:
S1, carrying out multi-directional scanning and shooting on the real object, establishing a real object 3D model after feature analysis, and storing the model in a platform database;
S2, after scanning the corresponding real object through the platform, the first terminal user determines the spatial position data of the interactive information relative to the real object, the spatial position data being calculated based on a spatial simulation algorithm; the interactive information is then published at that spatial position, and finally the spatial position data and the interactive information are stored in the platform database;
S3, after a second terminal user scans the corresponding real object through the platform, the real object 3D model data, the corresponding spatial position data and the interactive information stored in the platform database are automatically matched and queried; the interactive information is then rendered at the spatial position corresponding to the real object 3D model data and integrated to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the periphery of the real object 3D model, thereby realizing AR presentation of the shared interactive information in space based on the real object.
2. The AR presentation method of shared interactive information in space based on a real object according to claim 1, wherein in step S1, the multi-directional scanning and shooting is implemented based on a 3D depth camera.
3. The AR presentation method of shared interactive information in space based on a real object according to claim 1, wherein in step S2, the spatial simulation algorithm specifically includes the following steps:
S21, after scanning the corresponding real object through the platform, the terminal user A pre-stores the image feature information of the real object and decomposes it into N feature node pictures, which are recorded in hash-table form as [k, value]: for each feature node, the feature node picture is recorded as k and its corresponding plane coordinate position (l1, l2) is recorded as value; from the matrix data set of the current real object picture, the hash value corresponding to the current object feature picture, namely the plane coordinate position (l1, l2) at which the information data is presented, can be obtained;
S22, after scanning the corresponding real object, the size of the real object is calculated and recorded as r, while the actual size of the real object 3D model is R; the value d is obtained according to a formula in which D is a constant parameter for the proportional calculation of the object model;
S23, finally, the position coordinate (l1, l2, d) at which the interactive information is presented in space is obtained.
4. The AR presentation method of shared interactive information in space based on a real object according to claim 1, wherein in step S2, the first terminal user is the same terminal user or one of different terminal users, interactive information published by the same terminal user or by different terminal users is mutually independent, and the interactive information remains fixed at the corresponding spatial position of the real object 3D model.
5. The AR presentation method of shared interactive information in space based on a real object according to claim 1, wherein in step S3, the second terminal user is the same terminal user or one of different terminal users, and after the second 3D model is formed, the second terminal user can continue to publish interactive information based on the second 3D model, thereby realizing interactive information sharing among multiple users.
6. An AR presentation system for shared interactive information in space based on a real object, comprising:
the real object image acquisition module 101, used for carrying out multi-directional scanning and shooting on the real object, establishing a real object 3D model after feature analysis, and storing the model in a platform database;
the interactive information uploading module 102 is configured to fix the interactive information issued by the first terminal user at a corresponding spatial position of the real object 3D model;
and the interactive information presentation module 103, configured to integrate the real object 3D model data, the corresponding spatial position data and the interactive information to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the real object 3D model, thereby realizing AR presentation of the shared interactive information in space based on the real object.
7. The AR presentation system of shared interactive information in space based on a real object according to claim 6, wherein the real object image acquisition module 101 performs the multi-directional scanning and shooting with a 3D depth camera.
8. The AR presentation system of shared interactive information in space based on a real object according to claim 6, wherein the interactive information in the interactive information uploading module 102 is positioned based on a spatial simulation algorithm.
9. The AR presentation system of shared interactive information in space based on a real object according to claim 8, wherein the spatial simulation algorithm specifically comprises the following steps:
S21, after scanning the corresponding real object through the platform, the terminal user A pre-stores the image feature information of the real object and decomposes it into N feature node pictures, which are recorded in hash-table form as [k, value]: for each feature node, the feature node picture is recorded as k and its corresponding plane coordinate position (l1, l2) is recorded as value; from the matrix data set of the current real object picture, the hash value corresponding to the current object feature picture, namely the plane coordinate position (l1, l2) at which the information data is presented, can be obtained;
S22, after scanning the corresponding real object, the size of the real object is calculated and recorded as r, while the actual size of the real object 3D model is R; the value d is obtained according to a formula in which D is a constant parameter for the proportional calculation of the object model;
S23, finally, the position coordinate (l1, l2, d) at which the interactive information is presented in space is obtained.
10. The AR presentation system of shared interactive information in space based on a real object according to claim 6, wherein the first terminal user is the same terminal user or one of different terminal users, interactive information published by the same terminal user or by different terminal users is mutually independent, and the interactive information remains fixed at the corresponding spatial position of the real object 3D model.
Background
Augmented reality (AR) enhances the user's perception of the real world by superimposing auxiliary information presented by a computer system onto the real scene. A virtual object is constructed using a number of basic computer technologies, such as display technology, interaction technology and computer graphics, and the constructed virtual object together with related auxiliary information is superimposed accurately and in real time on the real scene, presenting a more realistic visual effect with richer scene information. Virtual-real combination is a key technology of augmented reality: it extends the real scene on a display device, so that what the user sees through the device is not a fully virtual world but a combined space in which virtual objects are seamlessly integrated into the real situation.
Interaction technology refers to real-time human-computer interaction. Interactivity is important in AR research because all such research targets applications that emphasize user experience, in which interactivity plays an extremely important role. With the development of AR technology, display devices have become more varied, smaller and more portable; in addition, the rapid development of smart mobile handheld devices has made human-computer interaction technology increasingly important to AR systems. Interaction in an AR system emphasizes three-dimensionality and real-time responsiveness. Three-dimensionality is embodied in both the interaction mode and the interaction environment: in augmented-reality-based interaction, the virtual object and the real situation are the two necessary elements that determine true three-dimensionality, and the user perceives a three-dimensional world formed by superimposing virtual objects on the real situation, which provides a brand-new interactive experience. Interaction between the user and the device is also three-dimensional, for example through gesture control and motion control. Real-time responsiveness is mainly embodied in the requirement that, after the user interacts with the device, the device's input and output be timely in order to guarantee a realistic user experience. As research into AR technology deepens, static fusion alone is far from sufficient; dynamic complementation must also be realized, which is the true meaning of virtual-real fusion.
Most existing AR interaction technologies only support a mask-based AR presentation technology; compared with stand-alone games, they provide a poor interactive experience and weak offline integration. At present, when a user appreciates a three-dimensional real object, the user can only comment on it individually, and different users cannot exchange interactive information about the three-dimensional real object with one another.
Disclosure of Invention
In view of the above deficiencies in the prior art, the present invention aims to provide an AR presentation method and system for shared interactive information in space based on a real object, which solves the problem that, at present, a user appreciating a three-dimensional real object can only comment on it individually, and different users cannot exchange interactive information about the three-dimensional real object with one another.
In order to solve the problems in the prior art, the invention is realized by the following technical scheme:
an AR presentation method of shared interactive information in space based on real objects comprises the following steps:
S1, carrying out multi-directional scanning and shooting on the real object, establishing a real object 3D model after feature analysis, and storing the model in a platform database;
S2, after scanning the corresponding real object through the platform, the first terminal user determines the spatial position data of the interactive information relative to the real object, the spatial position data being calculated based on a spatial simulation algorithm; the interactive information is then published at that spatial position, and finally the spatial position data and the interactive information are stored in the platform database;
S3, after a second terminal user scans the corresponding real object through the platform, the real object 3D model data, the corresponding spatial position data and the interactive information stored in the platform database are automatically matched and queried; the interactive information is then rendered at the spatial position corresponding to the real object 3D model data and integrated to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the periphery of the real object 3D model, thereby realizing AR presentation of the shared interactive information in space based on the real object.
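As a rough illustration only, the three-step flow above can be sketched as follows. All names here (PlatformDB, publish, view) are hypothetical and not part of the invention's actual implementation; the platform database is stood in for by an in-memory object.

```python
# Hypothetical sketch of the S1-S3 flow; names are illustrative only.

class PlatformDB:
    """In-memory stand-in for the platform database."""
    def __init__(self):
        self.models = {}       # object id -> real object 3D model data (S1)
        self.annotations = {}  # object id -> list of (spatial position, interactive info) (S2)

    def store_model(self, object_id, model):
        self.models[object_id] = model
        self.annotations.setdefault(object_id, [])

def publish(db, object_id, position, info):
    # S2: a first terminal user fixes interactive information at a
    # spatial position relative to the real object
    db.annotations[object_id].append((position, info))

def view(db, object_id):
    # S3: match the stored model and integrate the annotations around it,
    # yielding a "second 3D model" centered on the real object model
    return {"center": db.models[object_id],
            "around": list(db.annotations[object_id])}
```

The sketch also illustrates why the information is shared: every user who scans the same object queries the same database entry, so annotations published independently by different users all appear around the same model.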
Further, in step S1, the multi-directional scanning and shooting is implemented based on a 3D depth camera.
Further, in step S2, the spatial simulation algorithm specifically includes the following steps:
S21, after scanning the corresponding real object through the platform, the terminal user A pre-stores the image feature information of the real object and decomposes it into N feature node pictures, which are recorded in hash-table form as [k, value]: for each feature node, the feature node picture is recorded as k and its corresponding plane coordinate position (l1, l2) is recorded as value; from the matrix data set of the current real object picture, the hash value corresponding to the current object feature picture, namely the plane coordinate position (l1, l2) at which the information data is presented, can be obtained;
S22, after scanning the corresponding real object, the size of the real object is calculated and recorded as r, while the actual size of the real object 3D model is R; the value d is obtained according to a formula in which D is a constant parameter for the proportional calculation of the object model;
S23, finally, the position coordinate (l1, l2, d) at which the interactive information is presented in space is obtained.
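A minimal sketch of steps S21-S23 follows, under stated assumptions: each feature node picture is represented by a hashable key, and the depth is taken as d = r/R × D. That proportional form is an assumption made here for illustration, since the source does not reproduce the exact formula; all function names are hypothetical.

```python
# Illustrative sketch of steps S21-S23; the depth formula d = r/R * D
# is an assumed proportional form, not confirmed by the source.

def build_feature_table(feature_nodes):
    # S21: [k, value] hash table mapping each feature-node picture k
    # to its plane coordinate position (l1, l2)
    return {k: (l1, l2) for k, (l1, l2) in feature_nodes}

def depth(r, R, D):
    # S22: r = measured size of the real object, R = actual size of the
    # real object 3D model, D = constant scaling parameter (formula assumed)
    return r / R * D

def presentation_position(table, k, r, R, D):
    # S23: spatial coordinate (l1, l2, d) at which the interactive
    # information is presented
    l1, l2 = table[k]
    return (l1, l2, depth(r, R, D))
```

Looking up a feature key yields its plane position, and combining it with the scale-derived depth gives the full spatial coordinate.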
Further, in step S2, the first terminal user is the same terminal user or one of different terminal users, interactive information published by the same terminal user or by different terminal users is mutually independent, and the interactive information remains fixed at the corresponding spatial position of the real object 3D model.
Further, in step S3, the second terminal user is the same terminal user or one of different terminal users, and after the second 3D model is formed, the second terminal user can continue to publish interactive information based on the second 3D model, thereby realizing interactive information sharing among multiple users.
Another object of the present invention is to provide an AR presentation system in space based on shared interactive information of real objects.
The AR presentation system of the real object-based shared interactive information in the space comprises:
the real object image acquisition module, used for carrying out multi-directional scanning and shooting on the real object, establishing a real object 3D model after feature analysis, and storing the model in a platform database;
the interactive information uploading module is used for fixing the interactive information issued by the first terminal user on the corresponding spatial position of the real object 3D model;
and the interactive information presentation module, used for integrating the real object 3D model data, the corresponding spatial position data and the interactive information to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the real object 3D model, thereby realizing AR presentation of the shared interactive information in space based on the real object.
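The division of labor among the three modules above can be sketched as follows. All class and method names are hypothetical stand-ins introduced here for illustration; the feature analysis and rendering steps are reduced to trivial placeholders.

```python
# Hypothetical sketch of the three-module system; names and internal
# representations are illustrative, not taken from the source.

class ImageAcquisitionModule:
    """Scans a real object and stores its 3D model in the database."""
    def __init__(self, db):
        self.db = db
    def acquire(self, object_id, scans):
        # placeholder for feature analysis over the multi-directional scans
        model = {"object_id": object_id, "views": len(scans)}
        self.db[object_id] = {"model": model, "annotations": []}
        return model

class UploadModule:
    """Fixes interactive information at a spatial position on the model."""
    def __init__(self, db):
        self.db = db
    def upload(self, object_id, position, info):
        self.db[object_id]["annotations"].append(
            {"position": position, "info": info})

class PresentationModule:
    """Integrates model and annotations into a second 3D model."""
    def __init__(self, db):
        self.db = db
    def present(self, object_id):
        entry = self.db[object_id]
        return {"center": entry["model"], "around": entry["annotations"]}
```

Keeping the modules separate mirrors the text: acquisition writes the model once, uploading appends annotations independently, and presentation only reads and integrates.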
Further, the real object image acquisition module performs the multi-directional scanning and shooting through a 3D depth camera.
Further, the interactive information in the interactive information uploading module is positioned based on a spatial simulation algorithm.
Further, the spatial simulation algorithm specifically includes the following steps:
S21, after scanning the corresponding real object through the platform, the terminal user A pre-stores the image feature information of the real object and decomposes it into N feature node pictures, which are recorded in hash-table form as [k, value]: for each feature node, the feature node picture is recorded as k and its corresponding plane coordinate position (l1, l2) is recorded as value; from the matrix data set of the current real object picture, the hash value corresponding to the current object feature picture, namely the plane coordinate position (l1, l2) at which the information data is presented, can be obtained;
S22, after scanning the corresponding real object, the size of the real object is calculated and recorded as r, while the actual size of the real object 3D model is R; the value d is obtained according to a formula in which D is a constant parameter for the proportional calculation of the object model;
S23, finally, the position coordinate (l1, l2, d) at which the interactive information is presented in space is obtained.
Further, the first terminal user is the same terminal user or one of different terminal users, interactive information published by the same terminal user or by different terminal users is mutually independent, and the interactive information remains fixed at the corresponding spatial position of the real object 3D model.
Compared with the prior art, the invention has the following advantages:
The invention provides an AR presentation method and system for shared interactive information in space based on a real object. First, multi-directional scanning and shooting is carried out on the real object, a real object 3D model is established after feature analysis, and the model is stored in a platform database. Then, after the corresponding real object is scanned through the platform, the spatial position data of the interactive information relative to the real object is determined, the spatial position data being calculated based on a spatial simulation algorithm; the interactive information is published at that spatial position, and the spatial position data and the interactive information are stored in the platform database. Finally, the interactive information is rendered at the corresponding spatial position of the real object 3D model data and integrated to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the real object 3D model; the second 3D model is displayed superimposed on the real scene in which the real object is located, realizing AR presentation of the shared interactive information in space based on the real object. At the same time, information published by multiple users occupies mutually independent spatial positions, realizing multi-user information interaction.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of an AR presentation method in space based on shared interactive information of a real object according to the present invention;
FIG. 2 is a schematic diagram of an AR presentation system in space based on shared interactive information of a real object according to the present invention.
Detailed Description
The technical solution of the present invention will be clearly and completely described below with reference to specific embodiments. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
Example 1
Referring to FIG. 1, an AR presentation method for shared interactive information in space based on a real object includes the following steps:
S1, carrying out multi-directional scanning and shooting on the real object, establishing a real object 3D model after feature analysis, and storing the model in a platform database;
S2, after scanning the corresponding real object through the platform, the first terminal user determines the spatial position data of the interactive information relative to the real object, the spatial position data being calculated based on a spatial simulation algorithm; the interactive information is then published at that spatial position, and finally the spatial position data and the interactive information are stored in the platform database;
S3, after a second terminal user scans the corresponding real object through the platform, the real object 3D model data, the corresponding spatial position data and the interactive information stored in the platform database are automatically matched and queried; the interactive information is then rendered at the spatial position corresponding to the real object 3D model data and integrated to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the periphery of the real object 3D model, thereby realizing AR presentation of the shared interactive information in space based on the real object.
Further, in step S1, the multi-directional scanning and shooting is implemented based on a 3D depth camera. Other scanning and shooting modes known in the prior art may also be used.
Further, in step S2, the spatial simulation algorithm specifically includes the following steps:
S21, after scanning the corresponding real object through the platform, the terminal user A pre-stores the image feature information of the real object and decomposes it into N feature node pictures, which are recorded in hash-table form as [k, value]: for each feature node, the feature node picture is recorded as k and its corresponding plane coordinate position (l1, l2) is recorded as value; from the matrix data set of the current real object picture, the hash value corresponding to the current object feature picture, namely the plane coordinate position (l1, l2) at which the information data is presented, can be obtained;
S22, after scanning the corresponding real object, the size of the real object is calculated and recorded as r, while the actual size of the real object 3D model is R; the value d is obtained according to a formula in which D is a constant parameter for the proportional calculation of the object model;
S23, finally, the position coordinate (l1, l2, d) at which the interactive information is presented in space is obtained.
It should be noted that, after the same terminal user or different terminal users scan the real object multiple times, mutually independent spatial positions relative to the real object are obtained based on the spatial simulation algorithm. The spatial simulation algorithm described here is only one algorithm usable by the present invention; similar spatial simulation algorithms in other prior art are also included in the protection scope of the present invention.
Further, in step S2, the first terminal user is the same terminal user or one of different terminal users, interactive information published by the same terminal user or by different terminal users is mutually independent, and the interactive information remains fixed at the corresponding spatial position of the real object 3D model.
Further, in step S3, the second terminal user is the same terminal user or one of different terminal users, and after the second 3D model is formed, the second terminal user can continue to publish interactive information based on the second 3D model, thereby realizing interactive information sharing among multiple users.
Example 2
Referring to FIG. 2, an AR presentation system for shared interactive information in space based on a real object comprises:
the real object image acquisition module 101, used for carrying out multi-directional scanning and shooting on the real object, establishing a real object 3D model after feature analysis, and storing the model in a platform database;
the interactive information uploading module 102 is configured to fix the interactive information issued by the first terminal user at a corresponding spatial position of the real object 3D model;
and the interactive information presentation module 103, configured to integrate the real object 3D model data, the corresponding spatial position data and the interactive information to generate a second 3D model centered on the real object 3D model, with the interactive information surrounding the real object 3D model, thereby realizing AR presentation of the shared interactive information in space based on the real object.
Further, the real object image acquisition module 101 performs the multi-directional scanning and shooting through a 3D depth camera.
Further, the interactive information in the interactive information uploading module 102 is positioned based on a spatial simulation algorithm.
Further, the spatial simulation algorithm specifically includes the following steps:
S21, after scanning the corresponding real object through the platform, the terminal user A pre-stores the image feature information of the real object and decomposes it into N feature node pictures, which are recorded in hash-table form as [k, value]: for each feature node, the feature node picture is recorded as k and its corresponding plane coordinate position (l1, l2) is recorded as value; from the matrix data set of the current real object picture, the hash value corresponding to the current object feature picture, namely the plane coordinate position (l1, l2) at which the information data is presented, can be obtained;
S22, after scanning the corresponding real object, the size of the real object is calculated and recorded as r, while the actual size of the real object 3D model is R; the value d is obtained according to a formula in which D is a constant parameter for the proportional calculation of the object model;
S23, finally, the position coordinate (l1, l2, d) at which the interactive information is presented in space is obtained.
Further, the first terminal user is the same terminal user or one of different terminal users, interactive information published by the same terminal user or by different terminal users is mutually independent, and the interactive information remains fixed at the corresponding spatial position of the real object 3D model.
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.