Polarization maintaining optical fiber in virtual/augmented reality system
1. A display subsystem for a virtual image generation system for use by an end user, comprising:
a composite optical fiber comprising a polarization maintaining (PM) transmission fiber section and a non-PM scanning fiber section spliced to the PM transmission fiber section;
a light source configured to inject a linearly polarized light beam into the PM transmission fiber section such that the linearly polarized light beam is emitted from the non-PM scanning fiber section; and
a display configured to receive the light beam and generate an image for the end user based on the light beam,
wherein the non-PM scanning fiber section is spliced to the PM transmission fiber section within a drive assembly.
2. The display subsystem of claim 1, wherein the display comprises a planar waveguide apparatus.
3. The display subsystem of claim 1, wherein the PM transmission fiber section includes a cladding having a circularly asymmetric cross-section.
4. The display subsystem of claim 1, wherein the PM transmission fiber section includes a circularly symmetric cladding, and at least one additional element configured to induce strain in the cladding.
5. The display subsystem of claim 1, wherein the display is integrated into a head-mounted unit and the light source is contained in a remote control unit configured to be worn by the end user and remote from the head-mounted unit.
6. The display subsystem of claim 5, wherein the control unit is configured to be worn on a torso of the end user.
7. The display subsystem of claim 5, wherein the control unit is configured to be worn on a waist of the end user.
8. The display subsystem of claim 5, wherein the PM transmission fiber section is routed between the remote control unit and the head-mounted unit.
9. The display subsystem of claim 1, wherein the display is configured to be positioned in front of the eyes of the end user.
10. The display subsystem of claim 9, wherein the display has a partially transparent display surface configured to be positioned in a field of view between the end user's eyes and a surrounding environment.
11. The display subsystem of claim 1, further comprising a frame structure configured to be worn by the end user, the frame structure carrying the display.
12. A virtual image generation system for use by an end user, comprising:
a memory storing a three-dimensional scene;
a control subsystem configured to render a plurality of composite image frames of the three-dimensional scene; and
the display subsystem of claim 1, wherein the display subsystem is configured to sequentially display the plurality of image frames to the end user.
13. The virtual image generation system of claim 12, wherein the control subsystem comprises a Graphics Processing Unit (GPU).
Background
Modern computing and display technology has facilitated the development of systems for so-called "virtual reality" or "augmented reality" experiences, in which digitally reproduced images or portions thereof are presented to a user in a manner in which they appear to be, or may be perceived as, real. Virtual Reality (VR) scenes typically involve the presentation of digital or virtual image information that is opaque to other real-world visual inputs, while Augmented Reality (AR) scenes typically involve the presentation of digital or virtual image information as an enhancement to the visualization of the real world around the end user.
For example, referring to FIG. 1, an augmented reality scene 4 is depicted in which a user of AR technology sees a real-world park-like setting 6 featuring people, trees, buildings in the background, and a concrete platform 8. In addition to these items, the end user of AR technology also perceives that he "sees" a robotic statue 10 standing on the real-world platform 8, as well as a flying cartoon-style avatar character 12, which appears to be a personification of a bumblebee, even though these elements 10, 12 do not exist in the real world. The human visual perception system has proven to be very complex, and it is challenging to produce VR or AR technology that facilitates a comfortable, natural, rich presentation of virtual image elements amongst other virtual or real-world image elements.
VR and AR systems typically employ head-mounted displays (or helmet-mounted displays, or smart glasses) that are at least loosely coupled to the user's head and, thus, move as the end user's head moves. If the display subsystem detects head motion of the end user, the data being displayed may be updated to account for changes in head pose (i.e., the orientation and/or position of the user's head).
As an example, if a user wearing a head mounted display views a virtual representation of a three-dimensional (3D) object on the display and walks around the area where the 3D object appears, the 3D object may be re-rendered for each viewpoint, giving the end user the sensation that he or she is walking around the object occupying real space. If a head mounted display is used to present multiple objects within a virtual space (e.g., a rich virtual world), the measurement of head pose can be used to re-render the scene to match the dynamically changing head position and orientation of the end user and provide enhanced immersion in the virtual space.
AR-enabled (i.e., simultaneous viewing of real and virtual elements) head-mounted displays may have several different types of configurations. In one such configuration, often referred to as a "video see-through" display, a camera captures elements of a real scene, a computing system superimposes virtual elements onto the captured real scene, and a non-transparent display presents the composite image to the eyes. Another configuration is commonly referred to as an "optical see-through" display, in which the end user can see through transparent (or semi-transparent) elements in the display subsystem to directly view light from real objects in the environment. The transparent element, often referred to as a "combiner," superimposes light from the display over the end user's view of the real world.
VR and AR systems typically utilize a display subsystem having a projection subsystem and a display surface positioned in front of the end user's field of view, onto which the projection subsystem sequentially projects image frames. In a true three-dimensional system, the depth of the display surface may be controlled at frame rates or sub-frame rates. The projection subsystem may include: one or more optical fibers into which light from one or more light sources is injected (e.g., light of different colors in defined patterns); and a scanning device that scans the optical fiber in a predetermined pattern to create the image frames that are sequentially displayed to the end user.
In a typical head-mounted VR/AR system, the display subsystem is ideally designed to be as light as possible to maximize user comfort. To this end, the various components of the VR/AR system may be physically contained in a distributed system that includes the display subsystem itself and a control subsystem that may be located remotely from the user's head. For example, the control subsystem may be contained in a belt pack that can be secured to the user's waist. Because of weight, heat, and form-factor considerations, it is desirable to locate the light source away from the user's head (e.g., in the belt pack), so the light source must be placed with the control subsystem remote from the display.
Therefore, an optical fiber must be routed from the remote light source to the portion of the display subsystem located on the user's head. For example, referring to FIG. 2, a single mode optical fiber 20 performs both a transmission function, propagating light from a remote light source 22 to a scanning device 24, and a scanning function, in which the single mode fiber 20 is manipulated so as to scan the light in a predetermined scanning pattern.
In order to prevent color distortion of the image displayed to the user (after the polarized laser light propagates through the diffractive optical system of the display, which has high polarization sensitivity), the polarization of the light injected into the optical fiber from the light source must be maintained throughout the optical fiber. In a conventional fiber, the two polarization modes (e.g., vertical and horizontal polarization) have the same nominal phase velocity due to the fiber's circular symmetry. However, a slight amount of random birefringence in such a fiber, or a bend in the fiber, will cause a small amount of crosstalk from the vertical to the horizontal polarization mode. Moreover, because even a short section of fiber spans thousands of wavelengths, even a small coupling coefficient between the two polarization modes, applied coherently, can transfer a large amount of power to the horizontal mode, completely changing the net polarization state of the wave. Since this coupling is unintended, arising from whatever stresses or bends are applied to the fiber, the output state of polarization is itself random and varies as those stresses or bends change.
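For illustration only (the segment count and retardance statistics below are assumed, not taken from this disclosure), the cumulative effect just described can be modeled with Jones calculus, treating a long fiber as a chain of short segments, each applying a tiny retardance about a randomly oriented stress axis:

```python
import numpy as np

# Illustrative Jones-calculus sketch (hypothetical parameters): light launched
# purely vertically polarized accumulates random crosstalk as it traverses
# many short fiber segments, each with a small random birefringence.
rng = np.random.default_rng(0)

def segment(theta, delta):
    """Jones matrix of one segment: a weak retarder of phase `delta`
    whose fast axis is rotated by `theta` from vertical."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    ret = np.array([[np.exp(1j * delta / 2), 0],
                    [0, np.exp(-1j * delta / 2)]])
    return rot @ ret @ rot.T

state = np.array([1.0, 0.0], dtype=complex)       # purely vertical launch
for _ in range(10_000):                           # many wavelengths of fiber
    state = segment(rng.uniform(0, np.pi),        # random stress/bend axis
                    rng.normal(0, 0.01)) @ state  # small random retardance

# Although each per-segment coupling is tiny, coherent accumulation moves
# substantial power into the orthogonal (horizontal) mode.
print("power remaining in vertical mode:", abs(state[0])**2)
print("power coupled into horizontal mode:", abs(state[1])**2)
```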
Thus, because of the tortuous path the optical fiber must follow between the remote light source and the head-mounted display subsystem (a path that may be quite long, as it spans the neck and torso of a human body), the optical fiber may bend in varying ways (due to body movements of the user, etc.) and thus be strained, significantly changing the polarization of the light traveling through it.
It is known to use polarization maintaining fiber (PM fiber), a single mode fiber in which linearly polarized light, if properly launched into the fiber, remains linearly polarized during propagation and exits the fiber in a particular linear polarization state. PM fiber maintains linear polarization during propagation by intentionally introducing systematic linear birefringence into the fiber, so that there are two well-defined polarization modes that propagate along the fiber with very different phase velocities. A variety of PM fiber designs can be used to create this birefringence. For example, the fiber may be made geometrically asymmetric or given an asymmetric refractive index profile, such as a design using an elliptical cladding, a design using rods of another material within the cladding to permanently induce stress in the fiber, or a design using a structured-core fiber (e.g., a photonic band gap fiber).
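As a point of reference (the numbers below are typical textbook values, not figures from this disclosure), the strength of this deliberate birefringence is commonly characterized by a beat length:

```latex
% Beat length of a PM fiber: the propagation distance over which the two
% polarization modes accumulate a 2*pi relative phase shift.
\Delta n = n_{\mathrm{slow}} - n_{\mathrm{fast}}, \qquad
L_B = \frac{\lambda}{\Delta n}
% With assumed values \lambda = 532\,\mathrm{nm} and
% \Delta n = 5\times10^{-4}, L_B \approx 1\,\mathrm{mm}. External
% perturbations varying on length scales much longer than L_B are
% phase-mismatched from the inter-mode coupling and therefore transfer
% little power between the two polarization modes.
```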
While the projection subsystem could be designed with PM fiber in mind, pre-existing scanning devices are designed to operate with non-PM fiber, which exhibits different mechanical properties than PM fiber. Such a scanning device will not operate in the same manner when its non-PM fiber is replaced with PM fiber. In particular, a PM fiber with a highly asymmetric bending stiffness will have very different dynamic characteristics than a non-PM fiber with a symmetric bending stiffness; as a result, the PM fiber will not execute the resonant spiral scan that serves as the default scan pattern of the display device. In addition, the distal end of the fiber is typically tapered to increase the operating frequency, and thus the resolution, yielding better performance. However, in the case of PM fibers that utilize stress-inducing elements, these additional elements will affect the scan field of the scanning device. The scanning device would therefore have to be redesigned to accommodate the substitution of PM fiber, adding cost to the VR/AR system.
Therefore, there is a need for a low cost solution that maintains linear polarization in the optical fiber connected between the remote light source and the head mounted display subsystem without having to redesign the scanning device in the display subsystem.
Disclosure of Invention
Embodiments of the present invention relate to devices, systems, and methods that facilitate virtual reality and/or augmented reality interactions for one or more users.
According to a first aspect of the invention, a display subsystem for a virtual image generation system for use by an end user includes a display (e.g., a planar waveguide apparatus). In one embodiment, the display may be configured to be positioned in front of the eyes of the end user. In this case, the display may have a partially transparent display surface configured to be positioned in a field of view between the eyes of the end user and the surrounding environment.
The display subsystem further includes an optical fiber having a Polarization Maintaining (PM) transmission fiber section and a non-PM scanning fiber section. In one embodiment, the transmission fiber section includes a cladding having a circularly asymmetric cross-section. In another embodiment, the transmission fiber section comprises a circularly symmetric cladding and at least one additional element configured to induce strain in the cladding.
The display subsystem further comprises: a light source configured to inject a linearly polarized light beam into the transmission fiber section such that the linearly polarized light beam is emitted from the scanning fiber section; a mechanical scan drive assembly in which the scanning fiber section is affixed, the mechanical scan drive assembly being configured to displace the scanning fiber section to scan the emitted light beam; and a display configured to receive the scanned beam and generate an image for the end user. In one embodiment, the proximal end of the scanning fiber section is completely affixed within the mechanical scan drive assembly. In another embodiment, the mechanical scan drive assembly may include a piezoelectric element in which the scanning fiber section is mounted.
In one embodiment, the mechanical scan drive assembly and the display are integrated into a head-mounted unit, and the light source is contained in a remote control unit configured to be worn by the end user remotely from the head-mounted unit. The control unit may, for example, be worn on the torso or waist of the end user. The transmission fiber section may be routed between the remote control unit and the head-mounted unit.
According to a second aspect of the present invention, a virtual image generation system for use by an end user comprises: a memory storing a three-dimensional scene; a control subsystem (e.g., a control subsystem including a Graphics Processing Unit (GPU)) configured to render a plurality of composite image frames of the three-dimensional scene; and the display subsystem described above. The display subsystem is configured to sequentially display the plurality of image frames to the end user.
Additional and other objects, features and advantages of the present invention are described in the detailed description, drawings and claims.
Drawings
The drawings illustrate the design and utility of preferred embodiments of the present invention, in which like elements are designated by common reference numerals. In order to better appreciate how the above-recited and other advantages and objects of the present invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
FIG. 1 is a picture of a three-dimensional augmented reality scene that may be displayed to an end user by a prior art augmented reality generating device;
FIG. 2 is a plan view of a prior art single mode optical fiber for optically coupling a light source to a scanning device;
FIG. 3 is a block diagram of a virtual image generation system constructed in accordance with an embodiment of the invention;
FIG. 4 is a plan view of one embodiment of a display subsystem for the virtual image generation system of FIG. 3;
FIG. 5 is a perspective view of one embodiment of a display subsystem for the virtual image generation system of FIG. 3;
FIG. 6 is a plan view of a composite optical fiber for optically coupling a light source to a scanning device in the virtual image generation system of FIG. 3;
FIG. 7a is a cross-sectional view of one embodiment of a non-polarization maintaining scanning fiber section of the composite fiber of FIG. 6;
FIG. 7b is a cross-sectional view of one embodiment of a polarization maintaining transmission fiber section of the composite fiber of FIG. 6;
FIG. 8 is a plan view of an exemplary frame generated by the virtual image generation system of FIG. 3;
FIG. 9a is a plan view of one technique that may be used to wear the virtual image generation system of FIG. 3;
FIG. 9b is a plan view of another technique that may be used to wear the virtual image generation system of FIG. 3;
FIG. 9c is a plan view of yet another technique that may be used to wear the virtual image generation system of FIG. 3; and
FIG. 9d is a plan view of yet another technique that may be used to wear the virtual image generation system of FIG. 3.
Detailed Description
The following description relates to display subsystems and methods for use in virtual reality and/or augmented reality systems. It should be understood, however, that while the invention lends itself well to applications in virtual or augmented reality systems, the invention, in its broadest aspects, may not be so limited.
Referring to FIG. 3, one embodiment of a virtual image generation system 100 constructed in accordance with the present invention will now be described. The virtual image generation system 100 may operate as an augmented reality subsystem that provides images of virtual objects intermixed with physical objects in the field of view of an end user 50. There are two basic approaches to operating the virtual image generation system 100. The first approach employs one or more imagers (e.g., cameras) to capture images of the surrounding environment; the virtual image generation system 100 blends the virtual images into the data representing those images of the surrounding environment. The second approach employs one or more at least partially transparent surfaces through which the surrounding environment can be seen and on which the virtual image generation system 100 produces images of virtual objects.
The virtual image generation system 100, and the various techniques taught herein, may be used in applications other than augmented reality and virtual reality subsystems. For example, the various techniques may be applied to any projection or display subsystem, or to a miniature projector that is moved by the end user's hand rather than the head. Thus, although often described herein in terms of an augmented reality subsystem or a virtual reality subsystem, the teachings should not be limited to such subsystems or such uses.
At least for augmented reality applications, it may be desirable to spatially position various virtual objects relative to corresponding physical objects in the field of view of the end user 50. A virtual object, also referred to herein as a virtual tag, label, or callout, may take any of a variety of forms, essentially any kind of data, information, concept, or logical construct that can be represented as an image. Non-limiting examples of virtual objects include: a virtual text object, a virtual numeric object, a virtual alphanumeric object, a virtual tag object, a virtual field object, a virtual chart object, a virtual map object, a virtual instrument object, or a virtual visual representation of a physical object.
To this end, the virtual image generation system 100 includes: a frame structure 102 worn by the end user 50; a display subsystem 104, at least a portion of which is carried by the frame structure 102 such that the display subsystem 104 is positioned in front of the eyes 52 of the end user 50; and a speaker 106 carried by the frame structure 102 such that the speaker 106 is positioned near an ear canal of the end user 50 (optionally, another speaker (not shown) is positioned near the other ear canal of the end user 50 to provide stereo/shapeable sound control). The display subsystem 104 is designed to present to the eyes 52 of the end user 50 photo-based radiation patterns that can be comfortably perceived as an enhancement to physical reality, with a high level of image quality and three-dimensional perception, as well as being capable of presenting two-dimensional content. The display subsystem 104 presents a sequence of composite image frames at a high frequency that provides the perception of a single coherent scene.
Display subsystem 104 includes a projection subsystem 108 and a partially transparent display screen 110, with projection subsystem 108 projecting an image onto display screen 110. Display screen 110 is positioned in the field of view of end user 50 between eyes 52 of end user 50 and the surrounding environment.
In the illustrated embodiment, projection subsystem 108 takes the form of a fiber-optic-scan-based projection device, and display screen 110 takes the form of a waveguide-based display into which scanned light from projection subsystem 108 is injected to produce, for example, images at a single optical viewing distance closer than infinity (e.g., arm's length), images at multiple discrete optical viewing distances or focal planes, and/or image layers stacked at multiple viewing distances or focal planes to represent volumetric 3D objects. The layers in the light field may be stacked closely enough together to appear continuous to the human visual subsystem (i.e., one layer is within the cone of confusion of an adjacent layer). Additionally or alternatively, picture elements may be blended across two or more layers to increase the perceived continuity of transitions between layers in the light field, even if the layers are stacked more sparsely (i.e., one layer is outside the cone of confusion of an adjacent layer). The display subsystem 104 may be monocular or binocular.
Referring to FIGS. 4 and 5, the projection subsystem 108 includes a scanning assembly 112 and an optical coupling subsystem 114; the scanning assembly 112 generates a light beam and scans it in a predetermined scanning pattern in response to control signals, and the optical coupling subsystem 114 couples the light beam from the scanning assembly 112 into the display screen 110.
The scanning assembly 112 includes one or more light sources 116 (only one shown for simplicity) that generate light beams (e.g., emitting different colors of light in a defined pattern). The light source 116 may take any of a variety of forms, such as a set of RGB lasers (e.g., laser diodes capable of outputting red, green, and blue light) operable to produce red, green, and blue coherent, collimated light, respectively, according to a defined pixel pattern specified in a corresponding frame of pixel information or data. Lasers provide high color saturation and are energy efficient.
The scanning assembly 112 further includes one or more optical fibers 118 (only one shown for clarity), each having a proximal end 118a that receives the light beam from the light source 116 and a distal end 118b that delivers the light beam to the partially transparent display screen 110. The scanning assembly 112 further includes a mechanical drive assembly 120 on which the optical fiber 118 is mounted. The drive assembly 120 is configured to displace the distal end 118b of the optical fiber 118; in the illustrated embodiment, the drive assembly 120 includes a piezoelectric element 122 on which the optical fiber 118 is mounted.
The scanning assembly 112 further includes drive electronics 124 configured to deliver electrical signals to the piezoelectric element 122, thereby causing the distal end 118b of the optical fiber 118 to vibrate in accordance with the scan pattern. Thus, the operation of the light source 116 and of the drive electronics 124 is coordinated in a manner that generates image data encoded in the form of spatially and/or temporally varying light.
In the illustrated embodiment, the piezoelectric element 122 takes the form of a hollow tube, in which case the distal end 118b of the optical fiber 118 is threaded through or received within the piezoelectric tube 122. The distal end 118b of the optical fiber 118 protrudes from the piezoelectric tube 122 as a non-fixed flexible cantilever. The piezoelectric tube 122 is associated with four quadrant electrodes (not shown). The electrodes may, for example, be plated on the outside, outer surface, outer periphery, or outer diameter of the piezoelectric tube 122, and a core electrode (not shown) is located at the core, center, inner periphery, or inner diameter of the tube 122.
Drive electronics 124 are electrically coupled via leads 126 to drive opposing pairs of electrodes (not shown) to bend the piezoelectric tube 122 independently in two axes. The protruding distal end 118b of the optical fiber 118 has mechanical resonance modes whose frequencies depend on the diameter, length, and material properties of the optical fiber 118. By vibrating the piezoelectric tube 122 near the first mechanical resonance mode, the fiber distal end 118b is caused to vibrate and can sweep through large deflections about a fulcrum. Alternatively, the piezoelectric tube 122 may be vibrated near a higher-order mechanical resonance mode (e.g., the second-order mode) so that the fiber distal end 118b sweeps through smaller deflections about the fulcrum.
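For illustration only, the dependence of these resonance frequencies on the fiber's diameter, length, and material can be estimated by modeling the protruding distal end as a clamped-free Euler-Bernoulli beam; all numeric values below are assumptions for a fused-silica fiber, not figures from this disclosure:

```python
import math

# Hedged sketch: first two resonance frequencies of a fiber cantilever,
# modeled as a clamped-free Euler-Bernoulli beam (solid silica cylinder).
E = 72e9        # Young's modulus of fused silica, Pa (assumed)
rho = 2200.0    # density of fused silica, kg/m^3 (assumed)
d = 125e-6      # fiber diameter, m (standard 125 um cladding, assumed)
L = 3e-3        # protruding cantilever length, m (assumed)

I = math.pi * d**4 / 64   # area moment of inertia of a solid cylinder
A = math.pi * d**2 / 4    # cross-sectional area

for mode, beta_L in [(1, 1.8751), (2, 4.6941)]:  # clamped-free eigenvalues
    f = (beta_L**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
    print(f"mode {mode}: {f / 1e3:.1f} kHz")
```

With these assumed dimensions the first mode lands near 11 kHz; note the 1/L^2 dependence, which is why tapering (shortening the effective length of) the distal end raises the operating frequency.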
By exciting resonance in both axes, the fiber distal end 118b is scanned biaxially in an area-filling two-dimensional scan. An image is formed by modulating the intensity of the light source 116 in synchronization with the scanning of the fiber distal end 118b. A description of such an arrangement is provided in U.S. patent application Ser. No. 13/915,530, entitled "Multiple Depth Plane Three-Dimensional Display Using a Wave Guide Reflector Array Projector," which is expressly incorporated herein by reference.
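As a purely illustrative sketch (the drive frequency, frame period, and sample rate are assumed values, not taken from this disclosure or the referenced application), the spiral variant of such an area-filling scan can be generated by driving both axes at resonance, 90 degrees out of phase, with a slowly ramped amplitude:

```python
import numpy as np

# Hedged sketch of an area-filling biaxial (spiral) scan: the two axes are
# driven at the shared resonant frequency, 90 degrees out of phase, while
# the amplitude envelope ramps from 0 to 1 over one frame, tracing a
# growing spiral that fills a disc.
f_res = 11e3                     # resonant drive frequency, Hz (assumed)
frame_time = 1 / 60              # one spiral per displayed frame (assumed)
t = np.arange(0, frame_time, 1 / (f_res * 64))  # 64 samples per drive cycle

envelope = t / frame_time        # amplitude ramp 0 -> 1 fills the disc
x = envelope * np.cos(2 * np.pi * f_res * t)    # deflection, axis 1
y = envelope * np.sin(2 * np.pi * f_res * t)    # deflection, axis 2 (90 deg)

# Pixel values for the frame would be looked up at (x, y) and used to
# modulate the light source intensity in synchrony with the sweep.
print(f"{len(t)} trajectory samples, max radius {np.hypot(x, y).max():.2f}")
```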
The optical coupling subsystem 114 includes an optical waveguide input device 128, such as one or more reflective surfaces, diffraction gratings, mirrors, dichroic mirrors, or prisms, to optically couple light into the end of the display screen 110. The optical coupling subsystem 114 further includes a collimating element 130 that collimates the light from the optical fiber 118. Optionally, the optical coupling subsystem 114 includes an optical modulation device (not shown) configured to converge the light from the collimating element 130 to a focal point at the center of the optical waveguide input device 128, thereby allowing the size of the optical waveguide input device 128 to be minimized, as discussed in more detail in U.S. provisional patent application Ser. No. 62/277,865, entitled "Virtual/Augmented Reality System Having Reverse Angle Diffraction Grating," which is expressly incorporated herein by reference.
Referring now to FIG. 6, each optical fiber 118 combines the advantages of polarization maintaining (PM) fiber with those of non-PM fiber to ensure that the linear polarization of the light beam propagating through the fiber 118 is maintained, while preserving the mechanical properties of the portion of the fiber 118 associated with the scanning assembly 112. To this end, the fiber 118 includes a non-PM scanning fiber section 132 and a PM transmission fiber section 134. Each of the scanning fiber section 132 and the transmission fiber section 134 generally comprises a transparent core and a surrounding transparent cladding material of lower refractive index, which retains light in the core by total internal reflection, causing the fiber to function as a waveguide.
In the illustrated embodiment, the scanning fiber section 132 includes a transparent core 136 and a cylindrical cladding 138a surrounding the core 136. The cladding 138a has a circularly symmetric refractive index, such that a beam propagates through the scanning fiber section 132 without its linear polarization being maintained in the presence of external stress (see FIG. 7a). In one embodiment, the transmission fiber section 134 is similar to the scanning fiber section 132, except that the transmission fiber section 134 includes a cladding 138b having a geometrically asymmetric cross-section (e.g., elliptical) or a circularly asymmetric refractive index, inducing birefringence in the fiber that maintains the linear polarization of the propagating beam even in the presence of external stress (see FIG. 7b). In another embodiment, the transmission fiber section 134 is similar to the scanning fiber section 132, except that the transmission fiber section 134 further includes additional elements 140 composed of a different material than the cladding 138a, so as to permanently induce stress in the fiber (see FIG. 7c).
In the illustrated embodiment, the scanning fiber section 132 is relatively short and affixed within the scanning assembly 112, while the transmission fiber section 134 is relatively long and routed from the respective light source 116 to the scanning assembly 112. Accordingly, the transmission fiber section 134 is optically coupled to the respective light source 116, and the scanning fiber section 132 is coupled to the drive assembly 120 of the scanning assembly 112. The scanning fiber section 132 and the transmission fiber section 134 are spliced together in any suitable manner that minimizes cladding laser transmission modes.
Because the scanning fiber section 132 is affixed, for example, within the piezoelectric tube 122 of the scanning assembly 112, bending stress on the scanning fiber section 132 is prevented, thereby maintaining the linear polarization of the light beam propagating through it. Likewise, although bending forces are applied to the transmission fiber section 134, its polarization maintaining design preserves the linear polarization of the light beam propagating through it.
Thus, the display subsystem 104 generates a series of composite image frames of pixel information that present undistorted images of one or more virtual objects to the user. For example, referring to FIG. 8, a composite image frame 200 is schematically shown with cells 202a-202m divided into horizontal rows or lines 204a-204n. Each cell 202 of the frame 200 may specify values and/or intensities for each of a plurality of colors for the respective pixel to which the cell 202 corresponds. For example, the frame 200 may specify one or more values for red 206a, one or more values for green 206b, and one or more values for blue 206c for each pixel. The values 206 may be specified as binary representations for each color, e.g., a respective 4-bit number for each color. Each cell 202 of the frame 200 may additionally include a value 206d specifying an amplitude.
The frame 200 may include one or more fields, collectively referred to as 208. The frame 200 may consist of a single field. Alternatively, the frame 200 may include two or even more fields 208a-208b. The pixel information for the complete first field 208a of the frame 200 may be specified before the pixel information for the complete second field 208b, e.g., appearing before the pixel information for the second field 208b in an array, ordered list, or other data structure (e.g., record, linked list). Assuming the presentation subsystem is configured to handle more than two fields, a third field and even a fourth field may follow the second field 208b.
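For illustration, the cell-and-field organization described above might be mirrored by a data structure along the following lines (the names and the 4-bit widths are assumptions chosen to match the text, not the patent's own data format):

```python
from dataclasses import dataclass
from typing import List

# Hedged sketch of the frame layout described above: each cell carries one
# value per color (4-bit here) plus an amplitude value, and a frame's cells
# are grouped into lines within one or more sequentially presented fields.

@dataclass
class Cell:
    red: int        # 4-bit intensity, 0-15
    green: int      # 4-bit intensity, 0-15
    blue: int       # 4-bit intensity, 0-15
    amplitude: int  # per-cell amplitude value

@dataclass
class Field:
    lines: List[List[Cell]]   # horizontal rows/lines of cells

@dataclass
class Frame:
    fields: List[Field]       # one field, or two or more shown in order

# A single-field frame of 2 lines x 3 cells:
frame = Frame(fields=[Field(lines=[
    [Cell(15, 0, 0, 9), Cell(0, 15, 0, 9), Cell(0, 0, 15, 9)],
    [Cell(7, 7, 0, 5), Cell(0, 7, 7, 5), Cell(7, 0, 7, 5)],
])])
print(len(frame.fields[0].lines), "lines in first field")
```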
Further details describing display subsystems are provided in U.S. patent application Ser. No. 14/212,961, entitled "Display System and Method," and U.S. patent application Ser. No. 14/696,347, entitled "Planar Waveguide Apparatus With Diffraction Element(s) and Subsystem Employing the Same," both of which are expressly incorporated herein by reference.
Referring back to FIG. 3, the virtual image generation system 100 further includes one or more sensors (not shown) mounted to the frame structure 102 for detecting the position and movement of the head 54 of the end user 50 and/or the eye position and inter-ocular distance of the end user 50. Such sensors may include image capture devices (such as cameras), microphones, inertial measurement units, accelerometers, compasses, GPS units, radios, and/or gyroscopes.
For example, in one embodiment, the virtual image generation system 100 includes a head-mounted transducer subsystem 142 that includes one or more inertial transducers to capture inertial measurements indicative of motion of the head 54 of the end user 50. Such devices may be used to sense, measure, or collect information about the head movements of the end user 50, for example, to detect the movements, velocities, accelerations, and/or positions of the head 54 of the end user 50.
The virtual image generation system 100 further includes one or more forward-facing cameras 144 operable to capture information about the environment in which the end user 50 is located. The forward-facing cameras 144 may be used to capture information indicative of the distance and orientation of the end user 50 relative to that environment and to particular objects in it; when worn on the head, they are particularly suited to capturing information indicative of the distance and orientation of the head 54 of the end user 50 relative to the environment and to particular objects in it. The forward-facing cameras 144 may be used, for example, to detect head motion and the velocity and acceleration of head motion. They may also be used to detect or infer a center of attention of the end user 50, e.g., based at least in part on the orientation of the head 54 of the end user 50. Orientation in any direction (e.g., up/down, left, right relative to the frame of reference of the end user 50) may be detected.
The virtual image generation system 100 further includes a pair of rear-facing cameras 146 to track the movement, glints, and depth of focus of the eyes 52 of the end user 50. Such eye tracking information may be discerned, for example, by projecting light onto the end user's eyes and detecting the return or reflection of at least some of that projected light. Further details discussing eye tracking devices are provided in U.S. patent application Ser. No. 14/212,961, entitled "Display System and Method," U.S. patent application Ser. No. 14/726,429, entitled "Methods and Systems for Creating Focal Planes in Virtual and Augmented Reality," and U.S. patent application Ser. No. 14/205,126, entitled "System and Method for Augmented and Virtual Reality," which are expressly incorporated herein by reference.
The virtual image generation system 100 further includes a user orientation detection module 148. The user orientation module 148 detects the instantaneous position of the head 54 of the end user 50 and may predict the position of the head 54 based on position data received from the sensors. The user orientation module 148 also tracks the eyes 52 of the end user 50 based on tracking data received from the sensors.
The virtual image generation system 100 further includes a control subsystem that may take any of a variety of forms. The control subsystem includes a number of controllers, such as one or more microcontrollers, microprocessors or Central Processing Units (CPUs), digital signal processors, Graphics Processing Units (GPUs), other integrated circuit controllers such as Application Specific Integrated Circuits (ASICs), Programmable Gate Arrays (PGAs) (e.g., field PGAs (FPGAs)), and/or programmable logic controllers (PLUs).
In the illustrated embodiment, the virtual image generation system 100 includes a Central Processing Unit (CPU) 150, a Graphics Processing Unit (GPU) 152, and one or more frame buffers 154. The CPU 150 controls overall operation, while the GPU 152 renders frames (i.e., converts a three-dimensional scene into a two-dimensional image) and stores these frames in the frame buffers 154. Although not shown, one or more additional integrated circuits may control the reading of frames into and/or out of the frame buffers 154 and the operation of the scanning device of the display subsystem 104. Reading into and/or out of the frame buffers 154 may employ dynamic addressing, for example, where frames are over-rendered. The virtual image generation system 100 further includes Read Only Memory (ROM) 156 and Random Access Memory (RAM) 158, as well as a three-dimensional database 160 from which the GPU 152 can access three-dimensional data of one or more scenes for rendering frames.
The various processing components of the virtual image generation system 100 may be physically contained in a distributed subsystem. For example, as shown in FIGS. 9a-9d, the virtual image generation system 100 includes a local processing and data module 170 operatively coupled, such as by wired leads or a wireless connection 172, to a portion of the display subsystem 104 (including the display screen 110 and the mechanical drive assembly 120) and to the sensors. The light source 116 and the drive electronics 124 are contained in the local processing and data module 170, in which case the connection 172 would include the optical fiber 118. The local processing and data module 170 may be mounted in a variety of configurations, such as fixedly attached to the frame structure 102 (FIG. 9a), fixedly attached to a helmet or hat 56 (FIG. 9b), embedded in headphones, removably attached to the torso 58 of the end user 50 (FIG. 9c), or removably attached to the hip 60 of the end user 50 in a belt-coupling style configuration (FIG. 9d). The virtual image generation system 100 further includes a remote processing module 174 and a remote data repository 176 operatively coupled to the local processing and data module 170, such as by wired leads or wireless connections 178, 180, such that these remote modules 174, 176 are operatively coupled to each other and available as resources to the local processing and data module 170.
The local processing and data module 170 may include a power efficient processor or controller, as well as digital memory, such as flash memory, both of which may be used to assist in processing, caching, and storing data captured from the sensors, and/or data acquired and/or processed using the remote processing module 174 and/or the remote data repository 176, possibly communicated to the display subsystem 104 after such processing or retrieval. The remote processing module 174 may include one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. The remote data repository 176 may include a relatively large-scale digital data storage facility, which may be available through the internet or other network configurations in a "cloud" resource configuration. In one embodiment, all data is stored and all calculations are performed in the local processing and data module 170, allowing for fully autonomous use from any remote module.
The couplings 172, 178, 180 between the various components described above may include one or more wired interfaces or ports for providing wired or optical communication, or one or more wireless interfaces or ports for providing wireless communication, such as via RF, microwave, and IR. In some embodiments, all communications may be wired, while in other embodiments, all communications may be wireless, with the exception of optical fiber 118. In still further embodiments, the choice of wired and wireless communication may be different from that shown in fig. 9a to 9 d. Thus, this particular choice of wired or wireless communication should not be considered limiting.
In the illustrated embodiment, the light source 116 and the drive electronics 124 of the display subsystem 104 are contained in the local processing and data module 170, in which case the connection 172 will include the optical fiber 118, which connects these components to the mechanical drive assembly 120 positioned in close association with the display screen 110. The user orientation module 148 is contained in the local processing and data module 170, while the CPU 150 and GPU 152 are contained in the remote processing module 174, although in alternative embodiments the CPU 150, the GPU 152, or portions thereof may be contained in the local processing and data module 170. The three-dimensional database 160 may be associated with the remote data repository 176.
While particular embodiments of the present invention have been shown and described, it will be understood that it is not intended to limit the invention to the preferred embodiments, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention. Accordingly, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims.