System and method for realizing color three-dimensional point cloud naked-eye display with a single spatial light modulator
1. A system for realizing color three-dimensional point cloud naked-eye display with a single spatial light modulator, characterized in that:
the system comprises a spatial light modulator (10), collimated lasers (1, 2, 3), polarizers (4, 5, 6, 9), wedge beam splitters (7, 8), and a lens group (11); the three collimated lasers (1, 2, 3) emit light beams of different colors, which pass through the polarizers (4, 5, 6) respectively and are incident on the wedge beam splitters (7, 8), are converged and combined by the wedge beam splitters (7, 8), pass through the polarizer (9), and are incident on the spatial light modulator (10); after reflection by the spatial light modulator (10), the beams pass through the lens group (11) and enter the human eye for imaging.
2. The system for realizing color three-dimensional point cloud naked-eye display with a single spatial light modulator according to claim 1, characterized in that: the three collimated lasers (1, 2, 3) are a green collimated laser (1), a red collimated laser (2), and a blue collimated laser (3); the green beam emitted by the green collimated laser (1) passes through the first polarizer (4) and is transmitted by the first wedge beam splitter (7); the red beam emitted by the red collimated laser (2) passes through the second polarizer (5) and is reflected by the first wedge beam splitter (7); the blue beam emitted by the blue collimated laser (3) passes through the third polarizer (6) and is reflected by the second wedge beam splitter (8); the green beam transmitted by the first wedge beam splitter (7) and the red beam reflected by it are both transmitted by the second wedge beam splitter (8) and, together with the blue beam reflected by the second wedge beam splitter (8), pass through the fourth polarizer (9) and are incident on the spatial light modulator (10).
3. The system for realizing color three-dimensional point cloud naked-eye display with a single spatial light modulator according to claim 1, characterized in that: the spatial light modulator (10) is of a reflective type, and its modulation mode is phase modulation.
4. The system for realizing color three-dimensional point cloud naked-eye display with a single spatial light modulator according to claim 1, characterized in that: the lens group (11) is formed by a plurality of coaxially arranged lenses.
5. A method for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator using the display system of any one of claims 1 to 4, characterized by comprising the following steps:
1) First step:
constructing, by modeling, a 3D color point cloud model of the object to be imaged, the model being divided into three submodels for the R, G, and B channels; performing de-occlusion and down-sampling on the three submodels; scaling the coordinate scales of the three submodels; placing the red-channel (R) submodel and the blue-channel (B) submodel on either side of the green-channel (G) submodel, with the three submodels offset from one another at fixed intervals along the horizontal direction;
the three submodels each emit light beams along the horizontal plane onto the SLM plane; the diffraction integral from each point of the 3D color point cloud model to each pixel on the SLM plane is calculated, and a kinoform is obtained from the diffraction integrals;
2) Second step:
arranging the optical path according to the holographic three-dimensional display device, adjusting the polarizers in front of the three collimated lasers (1, 2, 3) to reduce the brightness of the beams passing through them to a suitable level, loading the kinoform onto the spatial light modulator (10), adjusting the angles of the beam splitters (7, 8) so that the target RGB channel components are superposed, and observing the color 3D image with the naked eye through the lens group (11).
6. The method for realizing naked-eye 3D display of a color three-dimensional point cloud with a single spatial light modulator according to claim 5, characterized in that: in step 1), the de-occlusion specifically comprises: rasterizing the XY plane of each submodel, keeping in each grid cell the point closest to the SLM plane, and removing the remaining points as redundant points.
7. The method for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator according to claim 5, characterized in that: in step 1), scaling the coordinate scales of the three submodels comprises applying the following processing to the three submodels, which are of the same size:
establishing a minimum bounding box around each submodel, the minimum bounding box being a cube, and scaling the cube side length Ltarget to less than one third of the image plane size L, the image plane size L being calculated as:
L=λz/pix
where λ represents the wavelength of the blue light beam, z represents the distance of the submodel for the green channel G along the optical axis to the SLM plane, and pix represents the single pixel side length in the spatial light modulator.
8. The method for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator according to claim 5, characterized in that: offsetting the three submodels at fixed intervals along the horizontal direction specifically comprises: the angle between the lines connecting the red-channel (R) submodel and the green-channel (G) submodel, respectively, to the center of the SLM plane, and the angle between the lines connecting the blue-channel (B) submodel and the green-channel (G) submodel, respectively, to the center of the SLM plane, are both arcsin(dx/z).
9. The method for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator according to claim 5, characterized in that: calculating the diffraction integral from the 3D color point cloud model to each pixel on the SLM plane and obtaining a kinoform from the diffraction integrals specifically comprises: summing, at each pixel of the SLM plane, the complex amplitudes of the light diffracted from every point of the three submodels, and extracting the phase of the summed result at each pixel of the SLM plane to form the kinoform.
10. The method for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator according to claim 5, characterized in that: in step 2), adjusting the polarizers in front of the three collimated lasers (1, 2, 3) to minimize the brightness of the beams passing through them specifically comprises: first using the polarizers in front of the three collimated lasers (1, 2, 3) to reduce the brightness of the beams transmitted through them to the minimum, and then finely adjusting those polarizers so that the color of the beam leaving the second wedge beam splitter (8) matches the color of the original object to be imaged, i.e., the RGB balance is correct.
Background
Color naked-eye 3D display presents stereoscopic visual information that conveys the depth of an object. Holographic display can completely record and reconstruct the wavefront of a three-dimensional object, provide all the depth information required by the human visual system, and reproduce scenes of the objective world more faithfully; naked-eye 3D holographic display with color and a large field angle is therefore one of the current research hotspots. Color naked-eye 3D display technologies fall into two general categories. The first uses three SLMs: red, green, and blue light each illuminate one SLM, and the reconstructed fields are combined on the reconstruction plane; this scheme requires a complex optical path design to register the reconstructed images of the three RGB channels accurately, the system cost is high, and the overall volume increases greatly. The second reconstructs the 3D image field with a single SLM. Color display methods based on a single spatial light modulator currently include time-division multiplexing, space division, and spatial superposition.
Time-division multiplexing: a single SLM is illuminated periodically with red, green, and blue light while the RGB three-channel kinoforms are displayed on the SLM in the same period, realizing color projection. This method places high demands on system synchronization and requires a spatial light modulator with a high frame rate; once the rate is high enough, the human eye perceives a time-integrated color image through the integration effect. In principle, each color component loses energy along the time axis, which affects its imaging quality to some extent. The working time of the RGB light sources and the loading of the corresponding RGB channel holograms must be synchronized precisely, which places high demands on the response speed of the hologram-loading hardware.
Space division: the plane of a single SLM is divided into three regions on which the RGB three-channel kinoforms are loaded separately, and the three color beams illuminate the three regions respectively. A beam-shaping system is needed to match the illumination wavefront to the divided regions, which increases complexity, and the utilization of the single modulator is low.
Spatial superposition: the RGB three-channel images are encoded in the same kinoform, so pixel utilization is high, no time-sequential control is needed, and the single-SLM color projection system is structurally simple. However, when one of the RGB lights illuminates the SLM alone, it reproduces not only the image of its own color channel but also the images of the other color channels; these images overlap on the image plane and cause severe noise and extraneous images, so pinhole filtering is required to remove the extraneous signals without blocking the effective signal. Compared with the other methods, spatial superposition has a simpler optical path and system, which offers great potential for the miniaturization of color stereoscopic naked-eye display equipment. At present, however, little research based on this method addresses reducing image plane noise and eliminating extraneous images, and its advantages have not been fully exploited.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a system and method for naked-eye display of a color three-dimensional point cloud with a single spatial light modulator that has low adjustment complexity and can be observed directly with the naked eye, and that solves the problems of small viewing angle, large computation load, and low speed in kinoform generation.
The technical scheme adopted by the invention is as follows:
First, a system for realizing naked-eye display of a color three-dimensional point cloud with a single spatial light modulator, i.e., a naked-eye 3D display system for a color three-dimensional point cloud based on a single spatial light modulator:
the device comprises a spatial light modulator, a collimator laser, a polaroid, a wedge-shaped beam splitter and a lens group; the three parallel light pipe lasers emit light beams with different colors, the light beams are respectively incident to the wedge-shaped beam splitter after passing through the respective polaroids, are incident to the spatial light modulator after being converged and combined by the wedge-shaped beam splitter and then are reflected by the spatial light modulator and then are incident to human eyes through the lens group.
The three collimated lasers are a green collimated laser, a red collimated laser, and a blue collimated laser, each equipped with a collimating tube; in a specific implementation, they are a 520 nm (green) solid-state laser, a 635 nm (red) semiconductor laser, and a 450 nm (blue) semiconductor laser, each with a collimating tube.
The green beam emitted by the green collimated laser passes through the first polarizer and is transmitted by the first wedge beam splitter; the red beam emitted by the red collimated laser passes through the second polarizer and is reflected by the first wedge beam splitter; the blue beam emitted by the blue collimated laser passes through the third polarizer and is reflected by the second wedge beam splitter. The green beam transmitted by the first wedge beam splitter and the red beam reflected by it are both transmitted by the second wedge beam splitter and, together with the blue beam reflected by the second wedge beam splitter, pass through the fourth polarizer onto the spatial light modulator.
The spatial light modulator is of a reflective type, and the modulation mode is phase modulation.
The lens group is formed by coaxially arranging a plurality of lenses.
Second, a method for realizing naked-eye 3D display of a color three-dimensional point cloud with a single spatial light modulator, comprising the following steps:
1) First step:
A 3D color point cloud model of the object to be imaged is produced by modeling and divided into three submodels for the R, G, and B channels. The three submodels are de-occluded and down-sampled, and their coordinate scales are scaled. As shown in FIG. 1, the red-channel (R) submodel and the blue-channel (B) submodel are placed on either side of the green-channel (G) submodel, and the three submodels are offset from one another at fixed intervals along the horizontal direction; the three RGB submodels of the 3D color point cloud model are kept at fixed intervals so that they are completely separated from one another.
As shown in FIG. 1, the three submodels each emit light beams along the horizontal plane onto the SLM plane, which represents the spatial light modulator; the diffraction integral from each point of the 3D color point cloud model to each pixel on the SLM plane is calculated, and a kinoform is obtained from the diffraction integrals.
the SLM plane is a liquid crystal plane of the spatial light modulator.
In a specific implementation, three submodels are used to produce a point sequence with coordinate format XYZRGB.
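For illustration, such an XYZRGB point sequence can be held, for example, as a plain (N, 6) array; the layout below is a convention assumed here, not prescribed by the method:

import numpy as np

n_points = 1000                         # placeholder size for illustration
cloud = np.zeros((n_points, 6))         # columns: x, y, z, r, g, b
xyz, rgb = cloud[:, :3], cloud[:, 3:]   # spatial coordinates and color components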
2) Second step:
The optical path is arranged according to the holographic three-dimensional display device; the polarizers in front of the three collimated lasers are adjusted to reduce the brightness of the transmitted beams to a low, suitable level; the kinoform is loaded onto the spatial light modulator; the angles of the beam splitters are adjusted so that the target RGB channel components are superposed; and a color 3D image with depth is observed with the naked eye through the lens group.
In color imaging with the single-SLM spatial superposition method, diffraction at the spatial light modulator produces nine images. Three of them are equal in size, are respectively red, green, and blue, appear sharp at the same position, and combine into a complete color image after superposition; these three images are referred to as the target RGB channel components.
In step 1), de-occlusion simulates the opaque objects observed in reality: only the front side of the 3D color point cloud model seen by the human eye at the observation position in FIG. 3 is retained, and the occluded back point cloud is removed. Specifically, the XY plane of each submodel, i.e., the plane perpendicular to the optical axis along which the beam is emitted, is rasterized; in each grid cell, the point closest to the SLM plane is retained and the remaining points are removed as redundant, thereby removing the redundant points that are in occlusion relationships.
In step 1), down-sampling reduces the point cloud density so that the subsequent calculation is faster; a sketch of this grid-based de-occlusion and down-sampling is given below.
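A minimal sketch of this grid filter, assuming a point cloud stored as an (N, 6) NumPy array of x, y, z, r, g, b values and a user-chosen grid size "cell" (both assumptions of this illustration, not prescribed by the method):

import numpy as np

def deocclude_and_downsample(points, cell):
    # points: (N, 6) array of x, y, z, r, g, b; z is the distance from each
    # point to the SLM plane along the optical axis.
    ij = np.floor(points[:, :2] / cell).astype(np.int64)   # XY grid cell of every point
    order = np.argsort(points[:, 2])                        # nearest-to-SLM points first
    ij, points = ij[order], points[order]
    _, keep = np.unique(ij, axis=0, return_index=True)      # keep only the nearest point per cell
    return points[keep]

Choosing a coarser cell both removes the occluded back points and lowers the point density, which is the down-sampling effect described above.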
In step 1), scaling the coordinate scales of the three submodels means scaling the point clouds; the following processing is applied to the three submodels, which are of the same size:
a minimum bounding box is established around each submodel, the minimum bounding box being a cube, and the cube side length Ltarget is scaled to less than one third of the image plane size L, the image plane size L being calculated as:
L=λz/pix
where λ represents the wavelength of the blue light beam, z represents the distance of the submodel for the green channel G along the optical axis to the SLM plane, and pix represents the single pixel side length in the spatial light modulator.
The three submodels are offset at fixed intervals along the horizontal direction; specifically, the angle between the lines connecting the red-channel (R) submodel and the green-channel (G) submodel, respectively, to the center of the SLM plane, and the angle between the lines connecting the blue-channel (B) submodel and the green-channel (G) submodel, respectively, to the center of the SLM plane, are both arcsin(dx/z), so that the target RGB channel components are accurately superposed during reconstruction and extraneous images do not alias; no pinhole filtering is therefore needed during reconstruction.
The line from a submodel to the center of the SLM plane is, specifically, the line from the average of all point coordinates in that submodel's point cloud to the center of the SLM plane.
This separates the three RGB channel components and keeps a fixed separation dx in the imaging plane, the imaging plane being the plane perpendicular to the optical axis in which the three submodels lie.
Calculating the diffraction integral from the 3D color point cloud model to each pixel of the SLM plane and obtaining a kinoform from the diffraction integrals specifically comprises: the light from every point of the three submodels is diffracted to each pixel of the SLM plane and the complex amplitudes are added (the diffraction integral is this complex amplitude); the phase of the summed result at each pixel of the SLM plane is then extracted to form the kinoform.
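A minimal sketch of this summation for one submodel, assuming a NumPy array "points" of x, y, z coordinates, per-point amplitudes "amp", an SLM of n_rows x n_cols pixels with pitch "pix", and wavelength "wl" (all names chosen here for illustration):

import numpy as np

def kinoform(points, amp, n_rows, n_cols, pix, wl):
    k = 2.0 * np.pi / wl
    ys = (np.arange(n_rows) - n_rows / 2) * pix              # SLM pixel coordinates
    xs = (np.arange(n_cols) - n_cols / 2) * pix
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((n_rows, n_cols), dtype=np.complex128)
    for (x, y, z), a in zip(points, amp):                    # add the spherical wave of every point
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r
    return np.angle(field)                                   # phase-only kinoform

In the method described here, the fields of the three RGB submodels (each with its own wavelength) would be accumulated into the same array before the phase is extracted, so that a single kinoform encodes all three channels.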
Because the imaging then comes from modulation of the same SLM plane according to this kinoform, the naked-eye viewing angles of the channels are identical and fewer elements are used, which reduces the adjustment difficulty. GPU computation programmed with CUDA increases the computation speed by roughly 10 times, averaging about 4 ms per point.
In step 2), adjusting the polarizers in front of the three collimated lasers to minimize the brightness of the beams passing through them specifically means:
first, the polarizers in front of the three collimated lasers are used to reduce the brightness of the beams transmitted through them to the minimum, to protect the eyes; then these polarizers are finely adjusted so that the color of the beam leaving the second wedge beam splitter matches the color of the original object to be imaged, i.e., the RGB balance is correct.
The lens group projects the constructed object and hologram to a spatial position set when calculating the kinoform.
As shown in FIG. 3, a lens virtual image and a holographic real image are formed between the spatial light modulator and the lens group; the lens virtual image is close to the spatial light modulator, and the holographic real image is close to the lens group.
The human eye looks through the lens group and observes the lens virtual image. The size of the lens virtual image seen by the eye satisfies the following:
where Sv represents the size of the lens virtual image, v represents the optical path from the lens virtual image to the center of the lens group, and u represents the optical path from the holographic real image to the center of the lens group. The lens group magnifies the holographic real image and increases the viewing angle.
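Under the standard thin-lens magnification assumption, this relation can be written as

Sv = (v / u) · Sr

where Sr, a symbol introduced here only for illustration, denotes the size of the holographic real image; a larger ratio v/u therefore gives a larger lens virtual image and a wider viewing angle.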
A flat plate is placed at the imaging position of the holographic real image, and the holographic real image is observed by the human eye on the same side as the spatial light modulator; the holographic image has a real image part and a virtual image part, and what is seen is formed by the beam emitted from the spatial light modulator striking the plate and being diffusely reflected.
The invention uses de-occlusion and down-sampling to remove the parts of the model that are occluded at the viewing angle of the human-eye observation position; this matches the opaque objects observed in reality, makes the imaging more realistic, and reduces the computation load. It solves the problems of the spatial superposition algorithm for single-spatial-light-modulator color imaging, which requires pinholes, is complex to adjust, produces circular diffraction spots in the image, and gives a poor naked-eye observation effect. The algorithm adopted by the invention directly scales and shifts the point cloud coordinates of the RGB submodels, ensuring that the three same-color images produced by each monochromatic source through SLM diffraction do not overlap, while adjustment of the light source angles superposes the target RGB channel components at the observation position and keeps their naked-eye viewing angles identical. The algorithm reduces the number of elements used, greatly reduces the difficulty of optical path adjustment, and improves the naked-eye observation effect. It is a potential solution for large-viewing-angle holographic head-mounted reconstruction.
When the system projects a three-dimensional object, the kinoform is loaded onto the spatial light modulator, which diffracts a three-dimensional real image of the object; the short-focus, large-aperture lens group concentrates the diffracted waves within the observation distance region and converts the holographic real image into a lens virtual image for observation, increasing the visible viewing angle of the three-dimensional image.
In the invention, a color point cloud kinoform is produced by modeling; the point cloud is de-occluded and down-sampled; and a new spatial superposition method based on the diffraction integral of the color model is proposed, so that the superposed RGB imaging does not depend on pinhole filtering, the adjustment complexity of single-SLM color imaging is greatly reduced, the kinoform calculation is accelerated with a GPU, and the color three-dimensional point cloud model can be observed with the naked eye.
The principle by which the invention realizes dynamic holographic three-dimensional display is as follows: a color point cloud model is produced; the point cloud is de-occluded and down-sampled; the diffraction integral is evaluated numerically with GPU acceleration to obtain a kinoform; the new spatial superposition method allows the color three-dimensional point cloud model to be observed conveniently with the naked eye; the point cloud is then rotated repeatedly at a fixed angular interval and a kinoform is produced for each rotation to obtain a kinoform sequence; rapidly switching this kinoform sequence on the SLM achieves dynamic display.
Due to the application of the technical scheme, compared with the prior art, the invention has the following advantages:
1) In the prior art, imaging is mostly "three-dimensional" only in the sense that two surfaces at different distances are alternately sharp and blurred; the technical scheme of the invention uses color point cloud data as the three-dimensional model, in which the points have different depths and can be regarded as a set of points on many planes at different distances, which is finer and closer to a real scene;
2) In the prior art, a rough surface such as a light screen or ground glass is used to scatter a high-brightness real image so that it can be observed by the human eye; this requires a relatively high laser power to reach normal viewing brightness, wastes most of the light energy through scattering, and requires an imaging environment free of stray light. The invention provides naked-eye observation with high light-energy utilization and finer imaging; a low-power laser is sufficient, and ambient stray light has little influence on naked-eye observation;
3) The invention innovatively colors a monochromatic point cloud model to produce a color point cloud directly, integrates existing techniques such as de-occlusion and down-sampling, and accelerates the diffraction integral calculation by a factor of 9.1 using GPU computation programmed with CUDA (Compute Unified Device Architecture);
4) In existing spatial-light-modulator imaging methods, kinoforms made with the superposition method commonly use double-phase gratings, spherical phase gratings, and blazed gratings so that the spherical convergence points of the three images formed by each monochromatic source have different offsets, and a pinhole is then used in the optical path to filter out the extraneous images, retaining the three target RGB components among the nine differently colored images; the pinhole easily filters incompletely, leaving extraneous image noise on the image plane. The invention instead pre-processes the point cloud coordinates by scaling and offsetting, ensuring that the three same-color images produced by each monochromatic source do not overlap; the program code is therefore simpler, fewer components are required, and the optical path is easier to adjust.
5) In existing spatial-light-modulator imaging methods, kinoforms made with the superposition method use double-phase and spherical phases, and circular diffraction spots are visible to the naked eye, seriously affecting the imaging quality; the invention achieves high imaging quality and is free of these defects.
Drawings
FIG. 1 is a schematic diagram of a 3D color point cloud model according to the present invention;
FIG. 2 is a schematic view of an optical path system apparatus of the present invention;
FIG. 3 is a schematic diagram of spatial position distribution of a holographic real image, a lens virtual image and a human eye position when a 3D holographic image is observed by naked eyes;
FIG. 4 shows a lotus point cloud model;
FIG. 5 is a color lotus point cloud in which the back portion occluded at the current viewing angle has been removed and each point is given a color according to its distance from the viewpoint;
FIG. 6 is an image of a color lotus observed on a light screen;
FIG. 7 is the color lotus observed with the naked eye through the lens group;
FIG. 8 is an example of single-SLM color imaging with a conventional spatial superposition algorithm, showing extraneous image aliasing;
FIG. 9 is an example of a single SLM color imaging using a conventional spatial superposition algorithm with annular diffraction spots;
in the figures: spatial light modulator (10), collimated lasers (1, 2, 3), polarizers (4, 5, 6, 9), wedge beam splitters (7, 8), and lens group (11).
Detailed Description
The invention is further described with reference to the accompanying drawings and the detailed description.
As shown in FIG. 2, the optical path includes a spatial light modulator 10, collimated lasers 1, 2, 3, polarizers 4, 5, 6, 9, wedge beam splitters 7, 8, and a lens group 11. The three collimated lasers 1, 2, 3 emit light beams of different colors; the beams pass through the polarizers 4, 5, 6 respectively and are incident on the wedge beam splitters 7, 8, are converged and combined by the wedge beam splitters 7, 8, pass through the polarizer 9 onto the spatial light modulator 10, and after reflection by the spatial light modulator 10 enter the human eye through the lens group 11 for imaging.
The three collimated lasers 1, 2, 3 are of the three colors R, G, B: a green collimated laser 1, a red collimated laser 2, and a blue collimated laser 3. The green beam emitted by the green collimated laser 1 passes through the first polarizer 4 and is transmitted by the first wedge beam splitter 7; the red beam emitted by the red collimated laser 2 passes through the second polarizer 5 and is reflected by the first wedge beam splitter 7; the blue beam emitted by the blue collimated laser 3 passes through the third polarizer 6 and is reflected by the second wedge beam splitter 8. The green beam transmitted by the first wedge beam splitter 7 and the red beam reflected by it are both incident on and transmitted by the second wedge beam splitter 8, and together with the blue beam reflected by the second wedge beam splitter 8 they pass through the fourth polarizer 9 onto the spatial light modulator 10.
In a specific implementation, the green collimated laser 1, the first polarizer 4, the first wedge beam splitter 7, the second wedge beam splitter 8, the fourth polarizer 9, and the spatial light modulator 10 are all arranged along the same straight main optical axis.
The optical axis of the red collimated laser 2 and the second polarizer 5 is set at a deflection angle to the main optical axis, and the optical axis of the blue collimated laser 3 and the third polarizer 6 is likewise set at a deflection angle to the main optical axis, so that the three parallel RGB beams, i.e., the green, red, and blue beams, converge at the spatial light modulator 10.
On the first wedge beam splitter 7, the position where the green beam is transmitted and the position where the red beam is reflected do not overlap; on the second wedge beam splitter 8, the position where the green beam is transmitted, the position where the red beam is transmitted, and the position where the blue beam is reflected do not overlap.
The spatial light modulator 10 is of a reflective type, and its modulation mode is phase modulation.
The lens group 11 is a short-focus, large-aperture lens group formed by a plurality of coaxially arranged lenses, which increases the viewing angle and enhances the display effect. "Short focus" here means a focal length within 10 mm; "large aperture" means a lens diameter greater than 30 mm.
The three collimated lasers 1, 2, 3 are all integrated semiconductor lasers and serve as the light sources.
The embodiment of the invention and the implementation process thereof are as follows:
1) First step:
A 3D color point cloud model of the object to be imaged is produced by modeling. As shown in FIG. 4, the original point cloud is a lotus flower, a 360° scan of a lotus model with xyz coordinates for 81000 points.
1.1) dividing a 3D color point cloud model of lotus into three submodels of RGB three channels;
in the aspect of point cloud coloring, an additional three coordinates RGB are adopted to represent the color component of each point. The proportion of white lotus flowers, namely all points RGB is 1: 1: 1. or each point may be assigned a different color depending on the distance to the viewpoint.
1.2) The three submodels are de-occluded and down-sampled in turn, the direction perpendicular to the XY plane being taken as the line of sight to simulate the occlusion relationships of a real scene.
De-occlusion simulates the opaque objects observed in reality: only the front side of the 3D color point cloud model seen by the human eye at the "observation position" in FIG. 3 is retained, and the occluded back point cloud is removed. Specifically, the XY plane of each submodel, i.e., the plane perpendicular to the optical axis along which the beam is emitted, is rasterized; in each grid cell, the point closest to the SLM plane, i.e., closest to the observer, is retained and the remaining points are removed as redundant, thereby removing the redundant points that are in occlusion relationships. The grid size is user-defined, and only one point is kept per grid cell, which also achieves down-sampling. The result is shown in FIG. 5.
Down-sampling reduces the point cloud density so that the subsequent calculation is faster.
1.3) The coordinate scales of the three submodels are then scaled, i.e., the point clouds are scaled; the following processing is applied to the three submodels, which are of the same size: a minimum bounding box is established around each submodel, the minimum bounding box being a cube, and the cube side length Ltarget is scaled to less than one third of the image plane size L, the image plane size L being calculated as:
L=λz/pix
where λ represents the wavelength of the blue light beam, z represents the distance of the submodel for the green channel G along the optical axis to the SLM plane, and pix represents the single pixel side length in the spatial light modulator.
1.4) As shown in FIG. 1, the red-channel (R) submodel and the blue-channel (B) submodel are placed on either side of the green-channel (G) submodel, and the three submodels are offset from one another at fixed intervals along the horizontal direction; the three RGB submodels of the 3D color point cloud model are kept at fixed intervals so that they are completely separated from one another.
Offsetting the three submodels at fixed intervals along the horizontal direction specifically means: the angle between the lines connecting the red-channel (R) submodel and the green-channel (G) submodel, respectively, to the center of the SLM plane, and the angle between the lines connecting the blue-channel (B) submodel and the green-channel (G) submodel, respectively, to the center of the SLM plane, are both arcsin(dx/z), so that the target RGB components are accurately superposed during reconstruction and extraneous images do not alias; no pinhole filtering is therefore needed during reconstruction.
With an imaging distance z of 400 mm, the calculated L is 22.5 mm, so the target image size should be smaller than 22.5/3 = 7.5 mm; the target size Ltarget is set to 5 mm, and the RGB point clouds are kept at a fixed spatial interval dx of 7.5 mm.
The line from a submodel to the center of the SLM plane is, specifically, the line from the average of all point coordinates in that submodel's point cloud to the center of the SLM plane.
This separates the three RGB channel components and keeps a fixed separation dx in the imaging plane, the imaging plane being the plane perpendicular to the optical axis in which the three submodels lie; a sketch using the figures of this embodiment is given below.
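A minimal sketch of the scaling and offsetting with the figures of this embodiment; the 8 um pixel pitch is an assumption chosen so that L = λz/pix reproduces the reported L = 22.5 mm for 450 nm blue light at z = 400 mm:

import numpy as np

wl, z, pix = 450e-9, 0.400, 8e-6
L = wl * z / pix                     # image plane size, 22.5 mm
L_target = 5e-3                      # side of the scaled bounding cube (< L / 3)
dx = 7.5e-3                          # horizontal separation of the R and B submodels

def scale_and_offset(sub_r, sub_g, sub_b):
    out = []
    for cloud, shift in ((sub_r, -dx), (sub_g, 0.0), (sub_b, +dx)):
        xyz = cloud[:, :3] - cloud[:, :3].mean(axis=0)         # center the submodel
        xyz = xyz * (L_target / np.ptp(xyz, axis=0).max())     # fit the bounding cube to L_target
        xyz[:, 0] += shift                                      # stagger along the horizontal axis
        out.append(np.hstack([xyz, cloud[:, 3:]]))
    return out

theta = np.arcsin(dx / z)            # illumination angle arcsin(dx/z) for the R and B sources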
In a specific implementation, three submodels are used to produce a point sequence with coordinate format XYZRGB. As in the following table:
TABLE 1 XYZRGB Point cloud sequence segment
1.5) As shown in FIG. 1, the three submodels each emit light beams along the horizontal plane onto the SLM plane, which represents the spatial light modulator 10; the diffraction integral from each point of the 3D color point cloud model to each pixel of the SLM plane is calculated, and a kinoform is obtained from the diffraction integrals.
Specifically: the light from every point of the three submodels is diffracted to each pixel of the SLM plane and the complex amplitudes are added (the diffraction integral is this complex amplitude); the phase of the summed result at each pixel of the SLM plane is then extracted to form the kinoform.
The kinoform computation can use GPU acceleration programmed with CUDA; specifically, ordinary matrices are converted to gpuArray in MATLAB, i.e., the CUDA acceleration built into MATLAB is invoked. After acceleration, the diffraction integral calculation for each point takes 1/9.1 of the CPU time, averaging 32 ms per point.
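As a rough analogue of the MATLAB gpuArray approach, the same per-point summation can be moved onto the GPU with CuPy (an assumption made here for illustration; the patent itself uses MATLAB's built-in CUDA acceleration):

import cupy as cp                    # NumPy-like arrays that live on the GPU

def kinoform_gpu(points, amp, n_rows, n_cols, pix, wl):
    k = 2.0 * cp.pi / wl
    ys = (cp.arange(n_rows) - n_rows / 2) * pix
    xs = (cp.arange(n_cols) - n_cols / 2) * pix
    X, Y = cp.meshgrid(xs, ys)
    field = cp.zeros((n_rows, n_cols), dtype=cp.complex128)
    for (x, y, z), a in zip(points, amp):                      # each spherical-wave term is evaluated on the GPU
        r = cp.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += a * cp.exp(1j * k * r) / r
    return cp.asnumpy(cp.angle(field))                         # phase map copied back to host memory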
TABLE 2  Computation time before and after GPU acceleration, de-occlusion, and down-sampling

Device    81000 points (original point cloud)    28724 points (after de-occlusion and down-sampling)
CPU       23740 s                                7863 s
GPU       2599 s                                 835 s
2) Second step:
The optical path is arranged according to the holographic three-dimensional display device, and the beam splitters 7 and 8 are adjusted so that the red light of collimated laser 2 and the blue light of collimated laser 3 each make an angle of arcsin(dx/z) with the green light of collimated laser 1, so that the RGB three-channel components of the holographic image subsequently obtained by SLM diffraction can be superposed in space.
The phase hologram produced in the first step is loaded onto the SLM, the RGB lasers are turned on, and the polarizers are adjusted to make the beams brightest. Diffusely reflected imaging is observed with a light screen placed in front of the convex lens, and the upper and lower screws of the two beam splitters are adjusted, thereby adjusting the splitter angles, so that the red and blue images coincide completely with the green image at z = 400 mm. The imaging result observed with the light screen at z = 400 mm is shown in FIG. 6.
The polarizers in front of the three lasers 1, 2, 3 are then adjusted to reduce the brightness of the beams passing through them to the minimum, the kinoform is loaded onto the spatial light modulator 10, and 3D imaging with depth is observed with the naked eye through the lens group 11.
In step 2), the polarizers are adjusted so that the RGB beams are at their lowest brightness, and naked-eye 3D observation is performed when the beams are almost invisible under normal indoor ambient light. The convex lens is placed on the extension line of the light-screen imaging position:
first, the polarizers in front of the three collimated lasers 1, 2, 3 are used to reduce the brightness of the beams transmitted through them to the minimum, to protect the eyes; then these polarizers are finely adjusted so that the color of the beam leaving the second wedge beam splitter 8 matches the color of the original object to be imaged, i.e., the RGB balance is correct.
The lens group projects the reconstructed object and hologram to the spatial position set when the kinoform was calculated, i.e., z = 400 mm.
As shown in FIG. 3, a lens virtual image and a holographic real image are formed between the spatial light modulator 10 and the lens group 11; the lens virtual image is close to the spatial light modulator 10, and the holographic real image is close to the lens group 11.
A light screen is placed at the imaging position of the holographic real image at z = 400 mm; the holographic real image is seen on the light screen on the same side as the spatial light modulator 10, and the screws of the beam splitters 7 and 8 are finely adjusted so that the target RGB channel components are superposed. The real image is formed by the beam from the spatial light modulator 10 striking the light screen and being diffusely reflected, and is observed by the human eye.
The human eye looks through the lens group 11 and observes the lens virtual image; the size of the lens virtual image seen by the eye is determined by the relation given above, where Sv represents the size of the lens virtual image, v the optical path from the lens virtual image to the center of the lens group, and u the optical path from the holographic real image to the center of the lens group. The lens group magnifies the holographic real image and increases the viewing angle; as shown in FIG. 3, the angle subtended at the eye by the two ends of the lens virtual image is larger than the angle subtended by the two ends of the holographic real image, i.e., the lens group enlarges the image and also increases the viewing angle the image subtends at the eye.
The eye is moved close to the convex lens and shifted up, down, left, and right in small steps until the color three-dimensional holographic lotus is found; the screws of the beam splitters 7 and 8 are fine-tuned again so that the RGB three-channel components coincide completely. A camera placed at the observer's eye position captured the images shown in FIG. 6 and FIG. 7.
In conclusion, the color point cloud naked-eye 3D display system and method creatively use a color point cloud stereo model to realize true naked-eye 3D observation of a color model, and solve the problems of complex adjustment and poor naked-eye imaging of the traditional single-SLM spatial superposition method. The traditional spatial superposition algorithm uses a combination of double phase and spherical phase to form diffraction convergence points, superposes a blazed grating on the kinoform so that the spherical convergence points of the three images formed by each monochromatic source have different offsets, uses a pinhole at the convergence point to retain the three target RGB channel components among the nine images obtained by SLM diffraction, and fine-tunes the three components until they coincide completely. The algorithm adopted by the invention directly scales and shifts the point cloud coordinates of the RGB submodels so that the three same-color images produced by each monochromatic source through SLM diffraction do not overlap, and superposes the target RGB channel components at the observation position through adjustment of the light source angles, ensuring that the naked-eye viewing angles of the RGB channel components are identical. The algorithm reduces the number of elements used, greatly reduces the difficulty of optical path adjustment, and improves the naked-eye observation effect.
FIG. 8 shows that, with the traditional spatial superposition algorithm for single-SLM color imaging, speckle noise caused by incomplete pinhole filtering is severe; FIG. 9 shows that, with the traditional algorithm, the lower-left corner of the image contains circular diffraction spots caused by the spherical phase. The method used in the invention eliminates these problems.
Therefore, the technical scheme of the invention realizes a color, wide-viewing-angle, high-definition naked-eye 3D display system with a single spatial light modulator. The optical system is currently oriented toward commercial VR headsets. Within the range allowed by aberration, the display system is configured compactly, the problem of extraneous image aliasing in single-spatial-light-modulator color imaging is solved, the naked-eye imaging quality is improved compared with the traditional method, the complexity of element adjustment is reduced, and an effective route to the miniaturization of color stereoscopic naked-eye display equipment is provided.