Method for generating a virtual image of a virtual environment, and virtual reality system
Patent abstract:
SYSTEM AND METHOD FOR VIRTUAL ENGINEERING. The present invention relates to a method and a system for generating a virtual image (140) of a virtual environment (102). A virtual reality manager (106) receives hand position data (133) for at least one hand of a user (104) from a hand system (131). The virtual reality manager (106) receives head position data (120) for a head (112) of the user (104) from a head-mounted system (108). The virtual reality manager (106) identifies image-based position data (328) and a current frame of reference (330) for a current time (314) using a target image corresponding to the current time (314). The virtual reality manager (106) generates virtual image control data (135) for the current time (314) using the hand position data (133), the head position data (120), the image-based position data (328) and the current frame of reference (330). The virtual image control data (135) is configured for use by a virtual image application (137).
Publication number: BR102013007952B1
Application number: R102013007952-9
Filing date: 2013-04-02
Publication date: 2020-10-20
Inventor: Andrew S. Roth
Applicant: The Boeing Company
IPC main class:
Patent description:
BACKGROUND [001] The present invention generally refers to virtual reality and, in particular, user interaction with a virtual environment. Even more particularly, the present disclosure refers to a method and system for increasing a level of hand-eye coordination when a user interacts with a virtual environment. [002] Virtual reality (VR) is a technology that can be used to simulate a real environment or an imaginary environment in the form of a virtual environment. A virtual environment is a computer-simulated environment that can simulate physical presence in a real environment or an imaginary environment. Typically, a virtual environment is visually presented to a user as a two-dimensional environment or a three-dimensional environment. In some cases, sensory information, such as sound, can be presented to the user in addition to the visual presentation of the virtual environment. [003] Different types of virtual reality systems can provide different levels of immersion for a user. As used in the present invention, the "immersion" level provided by a virtual reality system is the extent to which a user feels present in the virtual environment created by the virtual reality system. A user can be completely immersed in a virtual environment when the user experiences both perceptual and cognitive immersion. [004] A user can experience perceptual immersion when the user has a feeling of being physically present in the virtual environment. For example, the user may feel that their hands are present in the virtual environment. A user can experience cognitive immersion when the user has a feeling that what is happening in the virtual environment is actually happening. In other words, the user's mind can be immersed in the virtual environment. [005] Additionally, when a user is cognitively immersed in a virtual environment, the user's actions can have effects in the virtual environment, and events in the virtual environment can affect the user's sensations. When the user is cognitively immersed in the virtual environment, the user can accept that the effects on the virtual environment and the effects on the user's sensations are actually occurring. [006] Some virtual reality systems currently available may be unable to provide a desired level of immersion. For example, some virtual reality systems currently available may be unable to simulate human mobility within a virtual environment with a desired level of accuracy. In particular, these virtual reality systems may be unable to simulate the rotation of human skeletal components over articulated joints with a desired level of precision without increasing the size and / or weight of virtual reality systems more than desired. [007] For example, a gyroscope may be able to measure the rotation of a human skeletal component around an articulated joint. However, the setting for the gyroscope required to measure this type of rotation may be larger and / or heavier than desired. Additionally, this configuration for the gyroscope can exert unwanted forces on the part of the human body to which the gyroscope is attached. For example, a gyroscope attached to a hand can exert unwanted forces on the hand when measuring hand movement. These unwanted forces can cause the hand to move in an unwanted way. [008] Some virtual reality systems currently available may use displacement estimation techniques to track the movement, for example, of a user's fingers. 
However, these displacement estimation techniques can track the movement of the user's fingers with less than desired accuracy. [009] In this way, some of the virtual reality systems currently available may be unable to simulate the movement of the human body and / or parts of the human body with a desired level of precision. Consequently, these currently available virtual reality systems may be unable to provide a user with the level of hand-eye coordination needed to allow the user to feel a desired level of immersion within the virtual environment. [0010] Without the level of hand-eye coordination necessary to provide the desired level of immersion within the virtual environment, a user may be unable to perform certain tasks within the virtual environment as quickly or as efficiently as desired. Therefore, it would be desirable to have a system and method that considers at least some of the issues discussed above, as well as other possible issues. SUMMARY [0011] In an illustrative modality, a method is provided to generate a virtual image of a virtual environment. A virtual reality manager receives hand position data for at least one hand of a user from a hand system. The virtual reality manager receives head position data for the user's head from a head-mounted system. The virtual reality manager identifies image-based position data and a current frame of reference for a current time using a target image corresponding to the current time. The virtual reality manager generates virtual image control data for the current time using hand position data, head position data, image-based position data and the current frame of reference. The virtual image control data is configured for use by a virtual image application. [0012] In another illustrative modality, a virtual reality system comprises a head-mounted system and a virtual reality manager. The head-mounted system is configured to be worn in relation to a user's head. The virtual reality manager is associated with the head-mounted system. The virtual reality manager is configured to receive hand position data for at least one hand of the user from a hand system. The virtual reality manager is configured to receive head position data for the user's head from a sensor system in the head-mounted system. The virtual reality manager is configured to identify image-based position data and a current frame of reference for a current time using a target image corresponding to the current time. The virtual reality manager is configured to generate virtual image control data for the current time using hand position data, head position data, image-based position data and the current frame of reference. The virtual image control data is configured for use by a virtual image application. [0013] In yet another illustrative embodiment, a computer comprises a bus, a non-transitory storage device connected to the bus and a processor unit connected to the bus. The non-transitory storage device includes program code. The processor unit is configured to execute the program code to receive hand position data for at least one hand of a user from a hand system, to receive head position data for the user's head from a head-mounted system, to identify image-based position data and a current frame of reference for a current time using a target image corresponding to the current time, and to generate virtual image control data for the current time using the hand position data, the head position data, the image-based position data and the current frame of reference.
The virtual image control data is configured for use by a virtual image application. [0014] The features and functions can be achieved independently in various modalities of the present disclosure or can be combined in still other modalities in which additional details can be seen in reference to the following description and drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0015] The innovative features and characteristics of the illustrative modalities are presented in the attached claims. The illustrative modalities, however, as well as a preferred mode of use, additional objectives and resources thereof, will be better understood in reference to the following detailed description of an illustrative modality of the present disclosure when read in conjunction with the attached drawings, in which: [0016] Figure 1 is an illustration of a virtual reality system in the form of a block diagram according to an illustrative modality; [0017] Figure 2 is an illustration of a hand system in the form of a block diagram according to an illustrative embodiment; [0018] Figure 3 is an illustration of a head mounted system in the form of a block diagram according to an illustrative modality; [0019] Figure 4 is an illustration of a data coordinator in the form of a block diagram according to an illustrative modality; [0020] Figure 5 is an illustration of modes of operation for a virtual reality system in the form of a block diagram according to an illustrative modality; [0021] Figure 6 is an illustration of a user using a virtual reality system according to an illustrative modality; [0022] Figure 7 is an illustration of a process for interacting with a virtual engineering environment in the form of a flowchart according to an illustrative modality; [0023] Figure 8 is an illustration of a process for interacting with a virtual engineering environment in the form of a flowchart according to an illustrative modality; and [0024] Figure 9 is an illustration of a data processing system according to an illustrative modality. DETAILED DESCRIPTION [0025] The different illustrative modalities recognize and consider different considerations. For example, the different illustrative modalities recognize and consider that a virtual environment can be useful to perform different types of engineering tasks. [0026] These engineering tasks may include, for example, without limitation, designing a vehicle, managing data for a product design, testing a structure for use on an aircraft, testing the operation of a configuration for an antenna system, inspecting a system, performing maintenance on a structure, controlling operations of a vehicle, controlling manufacturing equipment in a manufacturing facility, controlling an outdoor structure and other suitable types of engineering tasks. Other types of engineering tasks include, for example, without limitation, interacting with a computer program, operating an electromechanical device located in an inaccessible environment, operating a mobile platform in extreme climatic and / or temperature conditions, and other suitable types of engineering tasks. [0027] As an illustrative example, a virtual environment that simulates a test environment that has selected test conditions can be used to test the operation of a particular configuration for a component for a vehicle in those selected conditions. In this illustrative example, a model of the particular configuration for the component is introduced in the virtual environment.
[0028] Testing the operation of the particular configuration for the component using the model in the virtual environment can be less expensive, less time consuming and / or more efficient than testing the real component in a real test environment. In addition, this type of virtual component testing may require fewer resources and / or staff compared to physical component testing. [0029] However, the different illustrative modalities recognize and consider that a user may be unable to use the virtual environment created by a virtual reality system to perform certain types of engineering tasks if the virtual reality system does not provide a desired level of hand-eye coordination. As used in the present invention, "hand-eye coordination" is the coordinated control of eye movement and hand movement. Hand-eye coordination is the use of visual input to guide hand movement and the use of hand proprioception to guide eye movement. [0030] Certain types of engineering tasks may require a higher level of hand-eye coordination compared to other types of engineering tasks. As an illustrative example, operating a virtual aircraft that comprises different types of controls, keys, buttons and user interfaces may require a higher level of hand-eye coordination compared to pushing an open door in a virtual environment. [0031] The different illustrative modalities recognize and consider that the provision of a desired level of hand-eye coordination may require identifying the positions of the user's hands and the user's head in relation to a frame of reference for the virtual environment with a desired level of accuracy. In addition, the different illustrative modalities recognize and consider that providing the desired level of hand-eye coordination may require simulating the movement of the user's hands and fingers within the virtual environment with a desired level of precision substantially in real time. As used in the present invention, "substantially in real time" means no time delays noticeable to the user. [0032] Thus, the different illustrative modalities provide a method and a system to increase a level of hand-eye coordination for a user when the user interacts with a virtual environment. In particular, the different illustrative modalities provide a virtual reality system configured to coordinate hand position data and head position data for the user and synchronize that data over time to provide the user with a desired level of hand-eye coordination within a virtual environment. [0033] Referring now to Figure 1, an illustration of a virtual reality system is revealed in the form of a block diagram according to an illustrative modality. In these illustrative examples, virtual reality system 100 is configured to visually present virtual environment 102 to user 104. Additionally, user 104 can interact with virtual environment 102 using virtual reality system 100. [0034] In these illustrative examples, virtual environment 102 is a simulation of environment 101. Environment 101 can be a real environment or an imaginary environment. For example, environment 101 can be a physical environment or an abstract environment. [0035] In an illustrative example, environment 101 adopts the form of engineering environment 103.
Engineering environment 103 can be selected from among, for example, without limitation, a design environment, a manufacturing environment, a computer work environment, a test environment, a data management environment, an inspection environment, an operations environment or some other suitable type of engineering environment. [0036] When virtual environment 102 is a simulation of engineering environment 103, user 104 can use virtual reality system 100 to interact with virtual environment 102 to perform one or more engineering tasks. For example, user 104 can use virtual reality system 100 to design an object, such as an aircraft or part of an aircraft, within virtual environment 102. [0037] In another illustrative example, user 104 can use virtual reality system 100 to perform tasks related to air traffic control. For example, virtual environment 102 can be a simulation of an airspace region. User 104 can use virtual reality system 100 to control the operation of and / or exchange information with an aircraft in that region of airspace with the use of an aircraft model in the virtual environment 102. [0038] Additionally, in yet another illustrative example, virtual environment 102 can be a simulation of a user interface for a computer program. User 104 can use virtual reality system 100 to interact with a virtual user interface to interact with the computer program. For example, the computer program can be a database management program. User 104 can use virtual reality system 100 to interact with a virtual user interface for that data management application. [0039] As revealed, the virtual reality system 100 comprises the virtual reality manager 106, the head mounted system 108, and numerous peripheral systems 110. In these illustrative examples, the virtual reality manager 106 can be deployed using hardware, software or a combination of the two. For example, virtual reality manager 106 can be deployed in data processing system 105. [0040] In these illustrative examples, data processing system 105 is associated with head mounted system 108. When a component is "associated" with another component, that association is a physical association in these examples. [0041] For example, a first component, such as data processing system 105, can be considered associated with a second component, such as head mounted system 108, by being attached to the second component, glued to the second component, mounted on the second component, welded on the second component, fixed on the second component, electrically connected to the second component and / or connected to the second component in some other suitable manner. The first component can also be connected to the second component using a third component. The first component can also be considered as associated with the second component by being formed as part of and / or as an extension of the second component. [0042] In these illustrative examples, the data processing system 105 is considered part of the head mounted system 108. Obviously, in other illustrative examples, the virtual reality manager 106 can be deployed in a processor unit separate from the head mounted system 108, but configured to communicate wirelessly with the head mounted system 108. [0043] The head mounted system 108 is configured to be worn in relation to the head 112 of the user 104. For example, the head mounted system 108 can be worn on and / or over the head 112 of the user 104. The head mounted system 108 can take numerous different forms.
For example, the head-mounted system 108 may take the form of a helmet, visor, hat, glasses, goggles or some other suitable type of device configured to be worn on and / or over the head 112 of the user 104. In an illustrative example, the head-mounted system 108 takes the form of glasses 114. [0044] Numerous different components can be associated with the head mounted system 108. For example, the display device 116 and the sensor system 118 can be associated with the head mounted system 108. When the head mounted system 108 adopts the form of glasses 114, the display device 116 and the sensor system 118 can be attached to, parts of and / or otherwise associated with glasses 114. [0045] Virtual reality manager 106 is configured to visually present virtual environment 102 to user 104 on display device 116. In an illustrative example, display device 116 can be the lenses on glasses 114. The virtual reality manager 106 can visually present the virtual environment 102 in these lenses in such a way that the virtual environment 102 is visually presented in front of the eyes of the user 104 when the user 104 is wearing the glasses 114. In this way, the user 104 can feel himself present in virtual environment 102. [0046] Obviously, in other illustrative examples, the display device 116 may take some other form. For example, display device 116 may comprise one or two contact lenses configured for use by user 104. [0047] The sensor system 118 can include one or more sensors. The sensors in the sensor system 118 can include, for example, without limitation, numerous microelectromechanical sensors (MEMS), nanoelectromechanical sensors (NEMS), motion sensors, angle sensors, speed sensors, acceleration sensors, position sensors, cameras, video cameras, image sensors and / or other suitable types of sensors. Head mounted system 108 is configured to generate head position data 120 using sensor system 118. [0048] In these illustrative examples, each peripheral system in numerous peripheral systems 110 in the virtual reality system 100 can be configured to generate and send data to the virtual reality manager 106. As used in the present invention, a "peripheral system", such as one of the numerous peripheral systems 110, is a system that is configured to communicate with the head-mounted system 108, but which is not considered part of the head-mounted system 108. Additionally, as used in the present invention, "numerous" items means one or more items. For example, the numerous peripheral systems 110 can be one or more peripheral systems. [0049] In these illustrative examples, each peripheral system in numerous peripheral systems 110 can be configured to generate data about a particular body part of the user 104. For example, a peripheral system in numerous peripheral systems 110 can be configured to generate data about a hand, foot, arm, leg, torso, finger, toe or other body part of the user 104. [0050] In an illustrative example, the numerous peripheral systems 110 include hand system 131. Hand system 131 is configured to generate hand position data 133. Hand position data 133 can form at least a portion of peripheral position data 132 generated by numerous peripheral systems 110. As used in the present invention, "at least a portion" means some or all. Peripheral position data 132 may include data on the position of numerous body parts for the user 104 over time. [0051] Hand position data 133 may include data on at least one of left hand 124 and right hand 126 of user 104.
For example, hand system 131 may comprise at least one of left hand system 121 and right hand system 122. Left hand system 121 can generate data about user 104's left hand 124, while right hand system 122 can generate data about user 104's right hand 126. [0052] In these illustrative examples, the left hand system 121 can adopt the form of left glove 128 configured for use with the left hand 124 of the user 104, while the right hand system 122 can adopt the form of right glove 130 configured for use with the right hand 126 of the user 104. The left glove 128 and the right glove 130 can be gloves configured to generate data on the position of the left hand 124 and the right hand 126, respectively, over time. [0053] In particular, the left glove 128 generates left hand position data 134 for the left hand 124, while the right glove 130 generates right hand position data 136 for the right hand 126 when user 104 is wearing the left glove 128 and right glove 130, respectively. Left-hand position data 134 and right-hand position data 136 may include data on the position of the left hand 124 and the right hand 126, respectively, over time, as well as data on the positions of the wrists and fingers corresponding to the hands over time. [0054] The virtual reality manager 106 is configured to receive peripheral position data 132 from numerous peripheral systems 110 and head position data 120 from sensor system 118 in the head mounted system 108. In particular, the virtual reality manager 106 receives hand position data 133 and head position data 120 substantially in real time. In other words, the virtual reality manager 106 receives the position data as the position data is generated without noticeable delays. [0055] In these illustrative examples, virtual reality manager 106 can use hand position data 133 and head position data 120 to generate virtual image control data 135. Virtual image control data 135 can be data configured for use by the virtual image application 137 to control a virtual image of the virtual environment 102. [0056] For example, virtual image application 137 is computer software configured to create virtual images of virtual environment 102. As used in the present invention, a "virtual image" of a virtual environment, such as virtual environment 102, is an image of at least a portion of the virtual environment. In some cases, virtual image application 137 can be configured to create virtual environment 102. In other cases, virtual image application 137 can be configured to use virtual environment 102 created by another application. [0057] The virtual reality manager 106 can be configured to communicate with the virtual image application 137 with the use, for example, without limitation, of cloud 139. Cloud 139 can be a network comprised of applications, computer programs, devices, servers, client computers and / or other computing components connected to each other. The virtual reality manager 106 can be configured to access the cloud 139 and, thereby, the virtual image application 137, using, for example, without limitation, the Internet. [0058] In these illustrative examples, the virtual image application 137 is configured to generate a sequence of virtual images 138 of virtual environment 102. As used in the present invention, a "sequence of images", such as the sequence of virtual images 138, is one or more images ordered in relation to time. In this way, each virtual image in the sequence of virtual images 138 corresponds to a particular time.
In some illustrative examples, the sequence of virtual images 138 can also be called the sequence of virtual frames. [0059] The virtual image application 137 generates each virtual image in the sequence of virtual images 138 using virtual image control data 135 generated by virtual reality manager 106. For example, virtual image application 137 can use virtual image control data 135 to update a previously generated virtual image of virtual environment 102. This updated virtual image can be sent back to virtual reality manager 106. Virtual reality manager 106 can then display that updated virtual image on display device 116. [0060] In this way, the virtual reality manager 106 can display the virtual images in the sequence of virtual images 138 generated by the virtual image application 137 on the display device 116 as the virtual images are received from the virtual image application 137. Virtual reality manager 106 displays the sequence of virtual images 138 on display device 116 to visually present virtual environment 102 to user 104 on display device 116. [0061] Virtual image 140 is an example of a virtual image in the sequence of virtual images 138. Virtual image 140 can be a two-dimensional image or a three-dimensional image, depending on the implementation. Virtual image 140 corresponds to time 141. Virtual image 140 is generated using the portion of hand position data 133 and the portion of head position data 120 generated within time 141. Time 141 can be, for example, an instant in time or a period of time. [0062] In particular, the portion of hand position data 133 and the portion of head position data 120 generated within time 141 are used to form virtual image control data 135 corresponding to time 141. The virtual image control data 135 corresponding to time 141 can be used by virtual image application 137 to form virtual image 140. [0063] In addition, virtual reality manager 106 can use head position data 120 and / or other data generated by sensor system 118 to identify reference frame 146 for virtual image 140. Reference frame 146 is a coordinate system for virtual image 140 in relation to virtual environment 102. In particular, reference frame 146 is the portion of virtual environment 102 captured within virtual image 140. Reference frame 146 is identified based on the position of user 104's head 112. In other words, reference frame 146 is identified based on the direction in which user 104's head 112 is pointing. [0064] In these illustrative examples, virtual reality manager 106 is configured to generate virtual image control data 135 using reference frame 146 and by coordinating hand position data 133 and head position data 120. Virtual reality manager 106 can coordinate hand position data 133 and head position data 120 by synchronizing hand position data 133 and head position data 120 over time. This type of synchronization can increase a level of hand-eye coordination provided to the user 104. [0065] For example, the synchronization of hand position data 133 and head position data 120 with respect to time 141 may result in virtual image control data 135 that has a desired level of accuracy. When the virtual image control data 135 has the desired level of precision, the sequence of virtual images 138 of the virtual environment 102 generated by the virtual image application 137 can represent a substantially real-time simulation of the movement and presence of the user 104 within the virtual environment 102.
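As a rough illustration of the time coordination described in paragraphs [0064] and [0065], the following Python sketch pairs the most recent hand and head position samples available for a given time before they are combined into control data. The names PositionSample, latest_before and synchronize are assumptions made for this sketch and are not part of the disclosure.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PositionSample:
    timestamp: float                          # time at which the sample was generated
    position: Tuple[float, float, float]      # (x, y, z) position
    orientation: Tuple[float, float, float]   # (roll, pitch, yaw)

def latest_before(samples: List[PositionSample], time: float) -> Optional[PositionSample]:
    # Return the most recent sample generated at or before the given time.
    candidates = [s for s in samples if s.timestamp <= time]
    return max(candidates, key=lambda s: s.timestamp) if candidates else None

def synchronize(hand_samples: List[PositionSample],
                head_samples: List[PositionSample],
                current_time: float):
    # Hand and head data arrive asynchronously; the pair selected here is the
    # pair used to form the virtual image control data for current_time.
    hand = latest_before(hand_samples, current_time)
    head = latest_before(head_samples, current_time)
    if hand is None or head is None:
        return None  # not enough data yet for this time
    return {"time": current_time, "hand": hand, "head": head}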
[0066] Additionally, virtual reality manager 106 can be configured to display the sequence of virtual images 138 substantially in real time. By allowing the movement of user 104 and the presence of user 104 within virtual environment 102 to be simulated in virtual environment 102 substantially in real time, virtual reality manager 106 can allow user 104 to perform one or more tasks within the virtual environment 102 with a desired level of accuracy, speed and efficiency. [0067] In these illustrative examples, virtual image 140 includes at least one of virtual left hand 142 and virtual right hand 144. Virtual left hand 142 can be an image representing user 104's left hand 124 in virtual image 140. Virtual right hand 144 can be an image that represents user 104's right hand 126 in virtual image 140. Virtual image application 137 can determine positions for virtual left hand 142 and virtual right hand 144 within the virtual image 140 using virtual image control data 135. [0068] Additionally, in these illustrative examples, virtual image control data 135 can also be generated using user data 145. User data 145 can include data about user 104. In particular, user data 145 may include data on the geometry of one or more body parts of the user 104. [0069] For example, user data 145 may include data on at least one of the left hand 124, the right hand 126 and the head 112 of the user 104. The data on a hand of the user 104, such as the left hand 124 or the right hand 126, may include, for example, without limitation, measurements of hand dimensions, wrist measurements corresponding to the hand, measurements of the fingers on the hand, measurements of a range of motion for one or more of the fingers on the hand, measurements of the distance between the fingers on the hand at rest and / or in movement and / or other suitable types of data. [0070] User data 145 can be identified using a set of user images 150 generated by the imaging system 152. As used in the present invention, a "set of" items means one or more items. For example, the set of user images 150 means one or more user images. [0071] The imaging system 152 is not part of the virtual reality system 100 in these illustrative examples. In an illustrative example, the imaging system 152 can be a three-dimensional laser scanning system. In this illustrative example, the user image set 150 is a set of three-dimensional laser scans of user 104 that captures the geometry of the left hand 124 and the right hand 126 of user 104. In addition, this set of three-dimensional laser scans can also capture the geometry of the wrists and fingers of the user 104. [0072] In an illustrative example, virtual reality manager 106 can be configured to receive the set of user images 150 and identify user data 145 using the set of user images 150. In another illustrative example, user data 145 can be uploaded directly to virtual reality manager 106. In yet another illustrative example, user data 145 can be the set of user images 150. [0073] When virtual image 140 is displayed on display device 116 for user 104, user 104 sees virtual left hand 142 and virtual right hand 144. In particular, user 104 sees virtual left hand 142 and virtual right hand 144 in virtual image 140 in positions that correspond to the actual positions of the left hand 124 and the right hand 126, respectively. In this way, user 104 can feel himself present in virtual environment 102.
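A minimal sketch of how the hand geometry described in paragraph [0069] could be represented is given below. The class and field names are illustrative assumptions only; the disclosure does not define a data format for user data 145.

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class HandGeometry:
    # Measurements of the kind listed for a hand in user data 145.
    hand_length_cm: float
    hand_width_cm: float
    wrist_circumference_cm: float
    finger_lengths_cm: Dict[str, float]                          # e.g. {"index": 7.4, ...}
    finger_range_of_motion_deg: Dict[str, Tuple[float, float]]   # per-finger (min, max) joint angle

@dataclass
class UserData:
    # User data 145 assembled, for example, from a set of three-dimensional laser scans.
    left_hand: HandGeometry
    right_hand: HandGeometry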
[0074] In some illustrative examples, numerous peripheral systems 110 and / or sensor system 118 in the head-mounted system 108 can generate additional data 154 in addition to the position data. Additional data 154 can be used to generate virtual image control data 135. For example, additional data 154 can be used to generate data that can be used by virtual image application 137 to control one or more interactive controls in the virtual environment 102. [0075] In an illustrative example, additional data 154 may include pressure data for fingers on the left hand 124 and on the right hand 126. In other illustrative examples, additional data 154 may include image data. Of course, in still other illustrative examples, additional data 154 may include other types of sensor data. [0076] In some illustrative examples, the virtual reality manager 106 can also be configured to communicate with the group of virtual reality systems 156 using the cloud 139. Each virtual reality system in the group of virtual reality systems 156 can comprise a virtual reality manager that can be deployed in a manner similar to virtual reality manager 106. Cloud 139 can allow different virtual reality managers to interact with virtual image application 137 at the same time. In some cases, cloud 139 may allow different users to interact with each other within virtual environment 102. [0077] Referring now to Figure 2, an illustration of a left hand system in the form of a block diagram is shown according to an illustrative embodiment. In Figure 2, an example of an implementation for the left hand system 121 in Figure 1 is revealed. The configuration revealed for the left hand system 121 can also be used to implement the right hand system 122 in Figure 1. [0078] As revealed, the left hand system 121 adopts the form of left glove 128. Left glove 128 is configured to substantially conform to the left hand 124 of user 104 in Figure 1. As an illustrative example, the left glove 128 can be manufactured based on a set of user images, such as, for example, set of user images 150 in Figure 1, and / or user data, such as user data 145 in Figure 1. The set of user images 150 can be used to manufacture a glove that conforms substantially to the left hand 124 of user 104. For example, left glove 128 can be manufactured using a material configured to conform substantially to the left hand 124 of the user 104. [0079] Additionally, the left glove 128 can be manufactured with dimensions that substantially correspond to the geometry of the left hand 124. [0080] In this illustrative example, the left glove 128 comprises sensor system 202, data manager 204 and communications unit 206. Sensor system 202 comprises numerous sensors selected from a group comprising microelectromechanical sensors, nanoelectromechanical sensors, motion sensors, angle sensors, position sensors, speed sensors, acceleration sensors, cameras, video cameras, image sensors, pressure sensors, tactile sensors and / or other suitable types of sensors. [0081] The sensor system 202 can be integrated into the material used to manufacture the left glove 128 in some illustrative examples. For example, one or more sensors in the sensor system 202 can be integrated into the material used to manufacture the left glove 128 in such a way that these sensors may not be visible to the user 104.
The position of each sensor in the sensor system 202 in relation to the left glove 128 can be determined using, for example, without limitation, the set of user images 150 and / or user data 145 in Figure 1. [0082] As disclosed, sensor system 202 is configured to generate raw hand position data 210 for user 104's left hand 124. Raw hand position data 210 may comprise, for example, without limitation, a set of dynamic system state variables (DSSV). [0083] In some illustrative examples, sensor system 202 can be calibrated using the set of user images 150 and / or user data 145. As an illustrative example, these images can be used to impose restrictions on the raw hand position data 210 generated by sensor system 202. [0084] The data manager 204 may comprise hardware, software or a combination of the two. For example, data manager 204 can be deployed within data processing system 205. In some illustrative examples, data processing system 205 can be deployed within sensor system 202. In other illustrative examples, the data processing system 205 can be deployed within the left glove 128 separate from the sensor system 202. [0085] In yet other illustrative examples, the data processing system 205 can be associated with the left glove 128 in some other suitable way. For example, without limitation, the data processing system 205 may be a wristwatch-type device associated with the left glove 128. [0086] Data manager 204 is configured to modify raw hand position data 210 to form left hand position data 134. Data manager 204 can modify raw hand position data 210 using the set of filters 212. The filter set 212 may include, for example, without limitation, numerous motion smoothing filters, jitter filters, Kalman filters and / or other suitable types of filters. In addition, data manager 204 can also modify the raw hand position data 210 based on restrictions identified for the movement of the left hand 124 of user 104 based on the set of user images 150 in Figure 1. [0087] In this way, left hand position data 134 can compensate for unwanted movement of one or more sensors in sensor system 202 while raw hand position data 210 is being generated. Such unwanted movement may include, for example, agitation, vibrations, instability, and / or other suitable types of unwanted movement. [0088] In some cases, left hand position data 134 can also compensate for inaccurate gestures and unwanted movement of the left hand 124 of user 104. For example, left hand position data 134 can compensate for unwanted spikes in the movement of the left hand 124 in response to a jerk type of movement of the left hand 124. In these illustrative examples, modifying the raw hand position data 210 to form the left hand position data 134 can be called stabilization of raw hand position data 210. [0089] Data manager 204 sends left hand position data 134 to virtual reality manager 106 in Figure 1 using communications unit 206. Communications unit 206 is used to form one or more communications links between the left glove 128 and another peripheral system and / or head mounted system 108. As an illustrative example, the communications unit 206 forms the wireless communications link 214 with the head mounted system 108. The communications unit 206 can transmit the left hand position data 134 to the head mounted system 108 in Figure 1 over the wireless communications link 214 using, for example, without limitation, radio frequency (RF) signals.
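The stabilization performed by data manager 204 could resemble the sketch below, which applies a simple exponential smoothing filter to raw position samples. The disclosure names motion smoothing filters, jitter filters and Kalman filters for filter set 212; the single-gain filter here is only a stand-in for that filter set, and the value of alpha is an assumption.

class ExponentialSmoother:
    # A simple stand-in for one filter in filter set 212. Values of alpha
    # close to 1.0 track the raw data closely; smaller values suppress
    # jitter and unwanted spikes more aggressively.
    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha
        self._state = None

    def update(self, raw_position):
        # raw_position is an (x, y, z) sample from sensor system 202.
        if self._state is None:
            self._state = raw_position
        else:
            self._state = tuple(
                self.alpha * raw + (1.0 - self.alpha) * prev
                for raw, prev in zip(raw_position, self._state)
            )
        return self._state

# Example: stabilize a short burst of noisy raw hand position samples.
smoother = ExponentialSmoother(alpha=0.4)
for raw in [(0.10, 0.00, 0.50), (0.13, 0.01, 0.49), (0.09, -0.01, 0.51)]:
    print(smoother.update(raw))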
[0090] In some illustrative examples, data manager 204 can also be used to calibrate sensor system 202. In other illustrative examples, data manager 204 can be configured to modify raw hand position data 210 to compensate electronically for mounting issues with the sensor system 202. [0091] Turning now to Figure 3, an illustration of a head mounted system in the form of a block diagram is revealed according to an illustrative embodiment. In Figure 3, an example of an implementation for head mounted system 108 in Figure 1 is revealed. [0092] In this illustrative example, the head-mounted system 108 adopts the form of glasses 114. As revealed, the virtual reality manager 106, the display device 116, the sensor system 118, the data manager 302 and the communications unit 300 are associated with glasses 114. In particular, virtual reality manager 106, display device 116, sensor system 118, data manager 302 and communications unit 300 are parts of glasses 114 in this example. [0093] The communications unit 300 is configured to form numerous wireless communications links to allow communications between the head-mounted system 108 and numerous other systems, such as, for example, numerous peripheral systems 110 in Figure 1. For example, the head mounted system 108 can receive peripheral position data 132 from numerous peripheral systems 110 in Figure 1 using the communications unit 300. [0094] In particular, the virtual reality manager 106 receives the left hand position data 134 from the left hand system 121 in Figure 1 and the right hand position data 136 from the right hand system 122 in Figure 1 with the use of communications unit 300. For example, communications unit 300 can receive left-hand position data 134 from left glove 128 using the wireless communications link 214 established between communications unit 300 and the communications unit 206 of the left glove 128 in Figure 2. [0095] As disclosed, sensor system 118 is configured to generate raw head position data 304. Raw head position data 304 can comprise, for example, without limitation, a set of dynamic system state variables (DSSV). In some illustrative examples, sensor system 118 can be calibrated using user image set 150 in Figure 1. [0096] In these illustrative examples, data manager 302 can be deployed using hardware, software or a combination of the two. In an illustrative example, data manager 302 can be deployed within data processing system 305. In some cases, data processing system 305 can be associated with data processing system 105 in Figure 1. In other cases, the data processing system 305 can be associated with the sensor system 118. In some illustrative examples, data manager 302 can be deployed within the data processing system 105 in Figure 1 instead of the data processing system 305. [0097] Data manager 302 is configured to modify raw head position data 304 to form head position data 120 in Figure 1. Data manager 302 can modify raw head position data 304 using filter set 308. The filter set 308 may include, for example, without limitation, numerous motion smoothing filters, jitter filters, Kalman filters and / or other suitable types of filters. In addition, data manager 302 can also modify raw head position data 304 based on restrictions identified for movement of the user's head 112 using the set of user images 150 in Figure 1.
[0098] In this way, head position data 120 can be used to compensate for unwanted movement of one or more sensors in sensor system 118 while raw head position data 304 is being generated. Such unwanted movement may include, for example, agitation, vibrations, instability, and / or other suitable types of unwanted movement. [0099] Additionally, in some cases, head position data 120 can also be used to compensate for unwanted movement of the head 112 of user 104. For example, head position data 120 can compensate for unwanted spikes in head movement 112 in response to an unwanted jerk type of head movement 112. In these illustrative examples, the modification of raw head position data 304 to form head position data 120 can be called stabilization of raw head position data 304. Data manager 302 sends head position data 120 to virtual reality manager 106. [00100] In some illustrative examples, data manager 302 can also be used to calibrate sensor system 118. In other illustrative examples, data manager 302 can be configured to modify raw head position data 304 to compensate electronically for mounting issues with the sensor system 118. [00101] Additionally, the sensor system 118 includes the imaging system 322 in this illustrative example. The imaging system 322 comprises one or more cameras configured to point in substantially the same direction in which the user's head 112 is pointed. The imaging system 322 is configured to have a field of view that is wide enough and deep enough to capture numerous visual targets on numerous peripheral systems 110. These visual targets may include, for example, visual markers, labels, buttons, outlines, shapes and / or other suitable types of visual targets. [00102] As an illustrative example, numerous visual markers can be present in each of the left glove 128 and the right glove 130 in Figure 1. Imaging system 322 is configured to have a field of view wide enough and deep enough to capture these visual markers in a target image generated by the imaging system 322. [00103] In this illustrative example, the target image 324 is an example of one of the images in the target image sequence 320. The target image 324 is generated by the imaging system 322 at the current time 314. The image processor 326 in the virtual reality manager 106 is configured to generate image-based position data 328 using target image 324. Image-based position data 328 identifies the positions of the left glove 128 and the right glove 130 within the target image 324 at the current time 314. These positions are identified using visual targets on the left glove 128 and the right glove 130. [00104] In addition, image processor 326 is also configured to use target image 324 to identify the current reference frame 330 for the current virtual image 310. The current reference frame 330 can be, for example, the portion of the virtual environment 102 corresponding to target image 324. For example, a first target image that captures an area to the left of an area captured in a second target image corresponds to a portion of virtual environment 102 that is to the left of the portion of virtual environment 102 corresponding to the second target image. [00105] The image processor 326 is configured to send the current reference frame 330 and image-based position data 328 to the data coordinator 336.
The data coordinator 336 uses the current reference frame 330, the image-based position data 328, head position data 120, left hand position data 134 and right hand position data 136 to generate virtual image control data 135 for the current time 314. In particular, the data coordinator 336 uses the head position data 120, the left hand position data 134 and the right hand position data 136 generated at the current time 314 to form the virtual image control data 135 for the current time 314. [00106] The virtual image control data 135 can include, for example, a position for the left glove 128 and the right glove 130 in Figure 1 in relation to the current reference frame 330 that can be used to control, for example, a position of a virtual left hand and a virtual right hand within a virtual image of virtual environment 102. In these illustrative examples, data coordinator 336 sends virtual image control data 135 to virtual image application 137 for processing. [00107] The virtual image application 137 may include the image generator 338. The image generator 338 uses the virtual image control data 135 to form the current virtual image 310 of the virtual environment 102 for the current time 314. The current virtual image 310 is based on the current frame of reference 330. The current virtual image 310 can be the virtual image that is formed for display on the display device 116. In other words, the current virtual image 310 has not yet been displayed on the display device 116 in this example. [00108] In this example, the current virtual image 310 may be an updated version of the previous virtual image 312 that is currently displayed on the display device 116. The previous virtual image 312 may have been generated by the virtual image application 137 for the previous time 332. The current reference frame 330 used to form the current virtual image 310 can be the same or different from the previous reference frame 334 used to generate the previous virtual image 312. [00109] As revealed, the current virtual image 310 includes the current virtual left hand 316 and the current virtual right hand 318. [00110] The positions for the current virtual left hand 316 and the current virtual right hand 318 can be based on the positions identified for the left glove 128 and the right glove 130 in the virtual image control data 135. [00111] Once the current virtual image 310 has been generated by the image generator 338, the virtual image application 137 sends the current virtual image 310 to the data coordinator 336. The data coordinator 336 sends the current virtual image 310 to the display device 116 such that the current virtual image 310 can be displayed on the display device 116 in place of the previous virtual image 312. [00112] In this way, the virtual reality manager 106 and the virtual image application 137 can communicate with each other to form the sequence of virtual images 138 in Figure 1 for a plurality of times that elapse over a particular period of time. The virtual images in the sequence of virtual images 138 can be displayed on the display device 116 as the images are generated substantially in real time such that the different positions for the virtual left hand and the virtual right hand in the virtual images for the period of time simulate the movement of left hand 124 and right hand 126, respectively, of user 104 in Figure 1. [00113] In some illustrative examples, the head mounted system 108 can also include microphone 340 and / or speaker system 342.
User 104 can use microphone 340 to generate audio data for use by the virtual reality manager 106. Additionally, the virtual reality manager 106 can be configured to generate sounds using the speaker system 342. These sounds can allow the user 104 to be additionally immersed in the virtual environment 102. The speaker system 342 can take the form of, for example, headsets 344. [00114] Referring now to Figure 4, an illustration of a data coordinator in the form of a block diagram is revealed according to an illustrative embodiment. In Figure 4, an example of an implementation for data coordinator 336 in Figure 3 is revealed. [00115] In this illustrative example, data coordinator 336 is configured to receive input 402 for use in forming the current virtual image 310 for the current time 314. In response to receipt of input 402, data coordinator 336 generates output 404. Input 402 may include, for example, without limitation, user data 145, image-based position data 328, hand position data 133, head position data 120, current reference frame 330 and / or other appropriate data. The image-based position data 328, the hand position data 133, the head position data 120 and the current reference frame 330 can be data corresponding to the current time 314. Output 404 can be the current virtual image 310. [00116] As illustrated in this Figure, data coordinator 336 includes constraint identifier 406, data modulator 408, feedback controller 410, virtual image analyzer 411, control data generator 412 and image viewer 414. In this illustrative example, constraint identifier 406 identifies constraint set 416 to form virtual image control data 135 based, for example, without limitation, on user data 145. Constraint set 416 may include, for example, restrictions on positions and / or movement of left hand 124, right hand 126 and / or head 112 of user 104 in Figure 1. Constraint identifier 406 sends constraint set 416 to feedback controller 410. [00117] As revealed, data modulator 408 includes hand data modulator 418 and head data modulator 420. Hand data modulator 418 is configured to use both image-based position data 328 and hand position data 133 to form modified hand position data 422. Modified hand position data 422 can be formed by applying weights to image-based position data 328 and hand position data 133. These weights can be based, for example, on a movement speed of the left hand 124 and / or the right hand 126 of the user 104. [00118] In this illustrative example, image-based position data 328 can provide a more accurate position for a hand of user 104 than hand position data 133 generated over time using a sensor system on a glove. In particular, image-based position data 328 can provide more accurate data than hand position data 133 when the hand is moving slowly. Hand position data 133 can provide more accurate data than image-based position data 328 when the hand is moving rapidly. [00119] Consequently, when a hand of user 104 is moving fast enough, the hand data modulator 418 applies weights to the corresponding hand position data in the hand position data 133 that are greater than the weights applied to the image-based position data 328. On the other hand, when a hand of user 104 is moving slow enough, the hand data modulator 418 applies weights to the corresponding hand position data in the hand position data 133 that are smaller than the weights applied to the image-based position data 328.
The speed at which the weights are applied can vary, depending on the particular implementation. [00120] Additionally, the head data modulator 420 is configured to use both the head position data 120 and the current reference frame 330 to form the modified head position data 424. The head data modulator 420 can combine the head position data 120 and the current reference frame 330 and weight these two types of data to form the modified head position data 424. The current reference frame 330 can be given a greater weight than the head position data 120 when the user's head 112 is moving slowly. Head position data 120 may be given a greater weight than the current frame of reference 330 when the user's head 112 is moving rapidly. [00121] Data modulator 408 sends the modified hand position data 422 and the modified head position data 424 to the feedback controller 410. Additionally, data modulator 408 sends the modified hand position data 422 and the modified head position data 424 to the control data generator 412. The control data generator 412 is configured to use the modified hand position data 422 and the modified head position data 424 to form virtual image control data 135. [00122] For example, the control data generator 412 can use the filter set 425 to form virtual image control data 135. The filter set 425 can be configured to smooth the modified hand position data 422 and the modified head position data 424, remove unwanted discontinuities in modified hand position data 422 and modified head position data 424, and synchronize modified hand position data 422 and modified head position data 424 in relation to time. The control data generator 412 sends the virtual image control data 135 to the virtual image application 137. [00123] The image generator 338 in the virtual image application 137 is configured to generate the current virtual image 310 with the use of virtual image control data 135. In particular, the image generator 338 updates the previous virtual image 312 to form the current virtual image 310 of virtual environment 102 in Figure 1. Image generator 338 sends the current virtual image 310 to virtual image analyzer 411 and image viewer 414. [00124] In this illustrative example, the virtual image analyzer 411 is configured to analyze the current virtual image 310 to generate real data 427 based on the current virtual image 310. For example, the virtual image analyzer 411 can decompose the current virtual image 310 to generate real data 427 that identifies the real position for the head 112 of the user 104, the real position for the left hand 124 of the user 104 and / or the real position for the right hand 126 of the user 104 in Figure 1 based on the current virtual image 310. The virtual image analyzer 411 sends the real data 427 to the feedback controller 410. [00125] Feedback controller 410 is configured to use real data 427, constraint set 416, modified hand position data 422 and modified head position data 424 to form the finger position error 426, the relative hand position error 428 and the head position error 430. The finger position error 426 may be the difference between the finger positions of user 104 identified in the modified hand position data 422 and the finger positions simulated in the current virtual image 310.
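One way the speed-dependent weighting described in paragraphs [00117] through [00120] might look in code is sketched below. The speed thresholds and the linear blend are assumptions; the disclosure only states that glove-based data is weighted more heavily for fast motion and image-based data more heavily for slow motion.

def blend_hand_position(glove_position, image_position, hand_speed,
                        slow_speed=0.05, fast_speed=0.50):
    # glove_position, image_position: (x, y, z) estimates for the same time.
    # hand_speed: estimated hand speed in meters per second (illustrative units).
    # Below slow_speed the image-based estimate dominates; above fast_speed the
    # glove-based estimate dominates; in between the weights vary linearly.
    if hand_speed <= slow_speed:
        glove_weight = 0.0
    elif hand_speed >= fast_speed:
        glove_weight = 1.0
    else:
        glove_weight = (hand_speed - slow_speed) / (fast_speed - slow_speed)
    image_weight = 1.0 - glove_weight
    return tuple(glove_weight * g + image_weight * i
                 for g, i in zip(glove_position, image_position))

# Example: a slowly moving hand leans almost entirely on the image-based estimate.
print(blend_hand_position((0.20, 0.0, 0.5), (0.21, 0.0, 0.5), hand_speed=0.02))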
[00126] Additionally, the relative hand position error 428 may be the difference between the hand positions of user 104 identified in the modified hand position data 422 and the virtual hand positions in the current virtual image 310 in relation to the position of the head 112 of user 104 and / or the current reference frame 330. Head position error 430 may be the difference between the position of the head 112 of user 104 identified in the modified head position data 424 and the position of the head 112 of user 104 indicated based on the current virtual image 310. [00127] In some illustrative examples, the feedback controller 410 may send the finger position error 426, the relative hand position error 428 and the head position error 430 to the control data generator 412 to adjust the virtual image control data 135. Obviously, in some illustrative examples, the process of sending virtual image control data 135 to the image generator 338 and the feedback controller 410 using the current virtual image 310 to adjust the virtual image control data 135 can be repeated until virtual image control data 135 has a desired level of accuracy. In other words, this process may recur until the finger position error 426, the relative hand position error 428 and the head position error 430 are within the selected tolerances. [00128] The virtual image control data 135 that has been adjusted to the desired level of accuracy can then be used by the image viewer 414 to adjust the current virtual image 310. The image viewer 414 can then output the current virtual image 310 that has been adjusted. [00129] In this way, the virtual image control data 135 generated over time with the desired level of precision can be used to generate the sequence of virtual images 138 in Figure 1. When the virtual image control data 135 is generated with the desired level of precision, the sequence of virtual images 138 can provide the user 104 with a desired level of hand-eye coordination. In other words, the positions of the virtual left hands and the virtual right hands in these virtual images can have a desired level of precision. In addition, the movement of the head 112, left hand 124 and / or right hand 126 of user 104 can be simulated within the sequence of virtual images 138 with a desired level of precision. [00130] Referring now to Figure 5, an illustration of operating modes for a virtual reality system is revealed according to an illustrative modality. In this example illustration, modes 500 can be modes of operation for virtual reality system 100 in Figure 1. As revealed, modes 500 can include at least one of learning and calibration mode 502, static mode 504, dynamic mode 506 and transparent mode 508. Obviously, in other illustrative examples, modes 500 may include one or more other modes in addition to or in place of those modes. [00131] In learning and calibration mode 502, user 104 in Figure 1 can perform a real task while using at least one of the hand system 131 and head mounted system 108 in Figure 1. This real task can be, for example, a non-virtual task that is executed in reality and not in virtual reality. This non-virtual task can be, for example, without limitation, typing on a keyboard, playing a piano, playing a guitar, using different types of equipment, performing a medical procedure, performing a physical task or some other appropriate type of non-virtual task.
[00132] In the learning and calibration mode 502, the virtual reality manager 106 uses at least one of the hand position data 133 generated by the hand system 131 and the head position data 120 to calibrate the virtual reality system 100 in Figure 1. For example, virtual reality manager 106 can use the data collected while user 104 is performing the non-virtual task to correct inconsistencies in the data generated when user 104 performs a corresponding virtual task using the virtual reality system 100. [00133] In static mode 504, user 104 can use virtual reality system 100 to perform a virtual task in which only the hands of user 104 are used. For example, user 104 can use his or her left hand 124 and/or right hand 126 in Figure 1 to perform the virtual task. In static mode 504, all virtual images in the sequence of virtual images 138 displayed to user 104 can remain stationary despite any movement of the head 112 of user 104 in Figure 1. [00134] For example, the reference frame 146 in Figure 1 for all virtual images in the sequence of virtual images 138 can remain fixed and pointed towards the virtual device being controlled by the hand system 131. The positions of the virtual left hand 142, the virtual right hand 144, and/or other virtual components can change between the different virtual images in the sequence of virtual images 138 in Figure 1, while the reference frame 146 remains stationary. In an illustrative example, virtual reality manager 106 can be configured to filter out any head position data 120 generated by the sensor system 118 in the head-mounted system 108 in Figure 1 while user 104 performs the virtual task. [00135] In dynamic mode 506, both the hand position data 133 and the head position data 120 are used by virtual reality manager 106 to control the sequence of virtual images 138 displayed to user 104. The reference frame 146 for each image in the sequence of virtual images 138 can change based on the movement of the head 112 of user 104. [00136] In transparent mode 508, no virtual image is displayed on the display device 116 in Figure 1. Instead, user 104 may be allowed to see through glasses 114 in Figure 1. In other words, in transparent mode 508, the lenses of glasses 114 can be made transparent. [00137] The illustrations of virtual reality system 100 in Figure 1, left hand system 121 in Figure 2, head-mounted system 108 in Figure 3, data coordinator 336 in Figure 4, and modes 500 in Figure 5 are not intended to imply physical or architectural limitations on the way in which an illustrative embodiment can be implemented. Components other than or in place of those illustrated can be used. Some components may be unnecessary. In addition, the blocks are presented to illustrate some functional components. One or more of these blocks can be combined, divided, or combined and divided into different blocks when implemented in an illustrative embodiment. [00138] In some illustrative examples, virtual image application 137 can be implemented within virtual reality system 100 in Figure 1. For example, virtual image application 137 can be implemented within data processing system 105. In other illustrative examples, data manager 204 can be implemented separately from sensor system 202 in Figure 2. [00139] In some cases, other modes can be included in the modes 500 of operation for the virtual reality system 100 in addition to and/or in place of those described in Figure 5. In an illustrative example, the learning and calibration mode 502 can be separated into a learning mode and a calibration mode.
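How the operating modes determine which data streams drive the displayed images can be sketched as follows. The enumeration, function name, and return conventions below are assumptions made purely for illustration and are not part of the system described above.

from enum import Enum, auto

class Mode(Enum):
    LEARNING_AND_CALIBRATION = auto()
    STATIC = auto()
    DYNAMIC = auto()
    TRANSPARENT = auto()

def select_inputs(mode, hand_position_data, head_position_data):
    # Decide which position data drive the next virtual image for a given mode.
    # In static mode the head position data are filtered out so the reference
    # frame stays fixed; in dynamic mode both streams are used; in transparent
    # mode no virtual image is rendered at all.
    if mode is Mode.STATIC:
        return {"hands": hand_position_data, "head": None}
    if mode is Mode.DYNAMIC:
        return {"hands": hand_position_data, "head": head_position_data}
    if mode is Mode.TRANSPARENT:
        return None  # lenses made transparent, nothing to render
    # Learning and calibration: record both streams for later correction.
    return {"hands": hand_position_data, "head": head_position_data, "calibrate": True}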
[00140] Referring now to Figure 6, an illustration of a user using a virtual reality system is depicted in accordance with an illustrative embodiment. In this illustrative example, user 600 is using virtual reality system 601. User 600 is an example of user 104 in Figure 1. Additionally, virtual reality system 601 is an example of an implementation for virtual reality system 100 in Figure 1. [00141] As depicted, the virtual reality system 601 includes the head-mounted system 602, the left glove 604, and the right glove 606. The head-mounted system 602 is an example of an implementation for the head-mounted system 108 in Figures 1 and 3. As shown, the head-mounted system 602 includes glasses 608. Glasses 608 are an example of an implementation for glasses 114 in Figures 1 and 3. Additionally, left glove 604 and right glove 606 are examples of implementations for left glove 128 and right glove 130, respectively, in Figure 1. [00142] In this illustrative example, a virtual reality manager, such as virtual reality manager 106 in Figure 1, can visually present virtual image 610 on glasses 608 in front of the eyes of user 600. Virtual image 610 can be an example of an implementation for virtual image 140 in Figure 1. Additionally, virtual image 610 is an image of a virtual environment. The virtual reality manager is configured to use data generated by the left glove 604, the right glove 606, and the head-mounted system 602 in response to movement by user 600 to allow user 600 to control and interact with this virtual environment. [00143] Referring now to Figure 7, an illustration of a process for interacting with a virtual engineering environment is depicted in flowchart form in accordance with an illustrative embodiment. The process illustrated in Figure 7 can be implemented using virtual reality system 100 in Figure 1. In particular, this process can be implemented using virtual reality manager 106 in communication with virtual image application 137 in Figures 1 and 3. [00144] The process can start by receiving hand position data for at least one hand of a user from a hand system (operation 700). The hand position data can include position data for a left hand and/or a right hand of the user. The hand system may comprise a left glove for the user's left hand and/or a right glove for the user's right hand. The hand system can be, for example, the hand system 131 in Figure 1. [00145] The process can then receive head position data for the user's head from a head-mounted system (operation 702). The head-mounted system can be, for example, the head-mounted system 108 in Figures 1 and 3. In an illustrative example, the head-mounted system can take the form of glasses 114 in Figures 1 and 3. [00146] Subsequently, the process identifies image-based position data and a current frame of reference for a current time using a target image for the current time (operation 704). Operation 704 can be performed using, for example, the image processor 326 in Figure 3. The target image can be, for example, the target image 324 generated using the imaging system 322 in the sensor system 118 associated with the head-mounted system 108 in Figure 3. Target image 324 allows a position for left glove 128 and right glove 130 in target image 324 to be identified. [00147] The process then coordinates the hand position data and the head position data with the image-based position data and the current frame of reference to form virtual image control data (operation 706).
Operation 706 can be performed using, for example, the data coordinator 336 in Figures 3 and 4. Then, the process generates a current virtual image of a virtual environment using the virtual image control data (operation 708), with the process terminating thereafter. [00148] Referring now to Figure 8, an illustration of a process for interacting with a virtual engineering environment is depicted in flowchart form in accordance with an illustrative embodiment. The process illustrated in Figure 8 can be implemented using the virtual reality system 100 in Figure 1. Additionally, this process is a more detailed process than the process described in Figure 7. [00149] The process begins by receiving left hand position data and right hand position data from a left glove and a right glove, respectively (operation 800). The left hand position data identifies positions for a user's left hand over time. The right hand position data identifies positions for a user's right hand over time. [00150] Additionally, the process receives head position data from a sensor system in a head-mounted system (operation 802). The head position data identifies positions for the user's head over time. [00151] The process then receives a sequence of target images from an imaging system in the sensor system of the head-mounted system (operation 804). Subsequently, the process identifies image-based position data and a current frame of reference for a current time using a target image in the sequence of target images corresponding to the current time (operation 806). [00152] Next, the process synchronizes the left hand position data, the right hand position data, and the head position data with respect to time (operation 808). The process then coordinates the synchronized left hand position data, right hand position data, and head position data with the image-based position data and the current frame of reference to form virtual image control data (operation 810). [00153] The process sends the virtual image control data to a virtual image application using a cloud (operation 812). The virtual image application is configured to use the virtual image control data to generate a current virtual image of a virtual environment for the current time. The process then receives the current virtual image generated by the virtual image application (operation 814). [00154] The process displays the current virtual image on a display device associated with the head-mounted system (operation 816), with the process returning to operation 800. The current virtual image replaces a previously generated virtual image displayed on the display device. The user sees the current virtual image and can feel present in the virtual environment. [00155] When the process returns to operation 800 after executing operation 816 and repeats operations 802, 804, and 806, the current time used in operation 806 is a time after the current time for the current virtual image in operation 816. In other words, the process described in Figure 8 can be repeated to form a next virtual image. In this way, a sequence of virtual images ordered with respect to time can be generated and displayed on the display device. [00156] The flowcharts and block diagrams in the different disclosed embodiments illustrate the architecture, functionality, and operation of some possible implementations of devices and methods in an illustrative embodiment.
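The per-frame flow of operations 800 through 816 can be sketched as a simple loop. This is an illustrative outline only; the object names and method signatures standing in for the hand system, head-mounted system, data coordinator, virtual image application, and display device are assumptions rather than the interfaces described above.

def run_display_loop(hand_system, head_mounted_system, data_coordinator,
                     virtual_image_application, display_device):
    # One pass of this loop corresponds to one displayed frame (operations 800-816).
    while True:
        # Operations 800-804: gather the raw inputs for the current time.
        left_hand, right_hand = hand_system.read_hand_position_data()
        head = head_mounted_system.read_head_position_data()
        target_image = head_mounted_system.read_target_image()

        # Operation 806: derive image-based position data and the current
        # frame of reference from the target image.
        image_based, reference_frame = head_mounted_system.process_target_image(target_image)

        # Operations 808-810: synchronize and coordinate the data streams into
        # virtual image control data.
        control_data = data_coordinator.coordinate(left_hand, right_hand, head,
                                                   image_based, reference_frame)

        # Operations 812-816: send the control data to the virtual image
        # application (for example over a cloud), receive the current virtual
        # image, and display it, replacing the previously displayed image.
        current_virtual_image = virtual_image_application.generate(control_data)
        display_device.show(current_virtual_image)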
In this regard, each block in the flowcharts or block diagrams can represent a module, segment, function, and/or a portion of an operation or step. For example, one or more of the blocks can be implemented as program code, in hardware, or as a combination of program code and hardware. When implemented in hardware, the hardware can, for example, take the form of integrated circuits that are manufactured or configured to perform one or more operations in the flowcharts or block diagrams. [00157] In some alternative implementations of an illustrative embodiment, the function or functions noted in the blocks may occur out of the order noted in the Figures. For example, in some cases, two blocks shown in succession can be executed substantially concurrently, or the blocks can sometimes be executed in reverse order, depending on the functionality involved. In addition, other blocks can be added in addition to the blocks illustrated in a flowchart or block diagram. [00158] Turning now to Figure 9, an illustration of a data processing system is depicted in accordance with an illustrative embodiment. In this illustrative example, data processing system 900 can be used to implement data processing system 105 in Figure 1. In this illustrative example, data processing system 900 includes communications structure 902, which provides communications between processor unit 904, memory 906, persistent storage 908, communications unit 910, input/output (I/O) unit 912, and display 914. [00159] Processor unit 904 serves to execute instructions for software that can be loaded into memory 906. Processor unit 904 can be a number of processors, a multiprocessor core, or some other type of processor, depending on the particular implementation. A number, as used in the present invention in reference to an item, means one or more items. In addition, processor unit 904 can be implemented using a number of heterogeneous processors in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 904 can be a symmetric multiprocessor system that contains multiple processors of the same type. [00160] Memory 906 and persistent storage 908 are examples of storage devices 916. A storage device is any piece of hardware that is capable of storing information, such as, for example, without limitation, data, program code in functional form, and/or other appropriate information on a temporary basis and/or a permanent basis. Storage devices 916 can also be called computer-readable storage devices in these examples. Memory 906, in these examples, can be, for example, a random access memory or any other suitable volatile or non-volatile storage device. Persistent storage 908 can take many forms, depending on the particular implementation. [00161] For example, persistent storage 908 can contain one or more components or devices. For example, persistent storage 908 can be a hard disk, a flash memory, a rewritable optical disc, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 908 can also be removable. For example, a removable hard drive can be used for persistent storage 908. [00162] Communications unit 910, in these examples, provides communications with other data processing systems or devices. In these examples, communications unit 910 is a network interface card. Communications unit 910 can provide communications through the use of wireless or physical communications links.
[00163] Input/output unit 912 allows for input and output of data with other devices that can be connected to data processing system 900. For example, input/output unit 912 can provide a connection for user input via a keyboard, a mouse, and/or some other suitable input device. In addition, input/output unit 912 can send output to a printer. Display 914 provides a mechanism for displaying information to a user. [00164] Instructions for the operating system, applications, and/or programs can be located on storage devices 916, which are in communication with processor unit 904 through communications structure 902. In these illustrative examples, the instructions are in a functional form on persistent storage 908. These instructions can be loaded into memory 906 for execution by processor unit 904. The processes of the different embodiments can be performed by processor unit 904 using computer-implemented instructions, which can be located in a memory, such as memory 906. [00165] These instructions are called program code, computer-usable program code, or computer-readable program code that can be read and executed by a processor in processor unit 904. The program code in the different embodiments can be embodied on different physical or computer-readable storage media, such as memory 906 or persistent storage 908. [00166] Program code 918 is located in a functional form on computer-readable media 920 that is selectively removable and can be loaded onto or transferred to data processing system 900 for execution by processor unit 904. Program code 918 and computer-readable media 920 form computer program product 922 in these examples. In one example, computer-readable media 920 may be computer-readable storage media 924 or computer-readable signal media 926. [00167] Computer-readable storage media 924 may include, for example, an optical or magnetic disc that is inserted or placed into a drive or other device that is part of persistent storage 908 for transfer onto a storage device, such as a hard drive, that is part of persistent storage 908. Computer-readable storage media 924 can also take the form of persistent storage, such as a hard drive, a memory unit, or a flash memory, that is connected to data processing system 900. In some cases, computer-readable storage media 924 may not be removable from data processing system 900. [00168] In these examples, computer-readable storage media 924 is a tangible or physical storage device used to store program code 918 rather than a medium that propagates or transmits program code 918. Computer-readable storage media 924 are also called a computer-readable tangible storage device or a computer-readable physical storage device. In other words, computer-readable storage media 924 are media that can be touched by a person. [00169] Alternatively, program code 918 can be transferred to data processing system 900 using computer-readable signal media 926. Computer-readable signal media 926 can be, for example, a propagated data signal containing program code 918. For example, computer-readable signal media 926 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals can be transmitted over communications links, such as wireless communications links, fiber optic cable, coaxial cable, a wire, and/or any other suitable type of communications link.
In other words, the communications link and/or the connection can be physical or wireless in the illustrative examples. [00170] In some illustrative embodiments, program code 918 can be downloaded over a network to persistent storage 908 from another device or data processing system via computer-readable signal media 926 for use within data processing system 900. For example, program code stored on a computer-readable storage medium in a server data processing system can be downloaded over a network from the server to data processing system 900. The data processing system that provides program code 918 can be a server computer, a client computer, or some other device capable of storing and transmitting program code 918. [00171] The different components illustrated for data processing system 900 are not intended to provide architectural limitations on the way in which different embodiments can be implemented. The different illustrative embodiments can be implemented in a data processing system that includes components in addition to or in place of those illustrated for data processing system 900. Other components shown in Figure 9 can be varied from the illustrative examples shown. The different embodiments can be implemented using any hardware device or system capable of running program code. As one example, the data processing system may include organic components integrated with inorganic components and/or may be comprised entirely of organic components, excluding a human being. For example, a storage device can be comprised of an organic semiconductor. [00172] In another illustrative example, processor unit 904 can take the form of a hardware unit that has circuits that are manufactured or configured for a particular use. This type of hardware can perform operations without requiring program code to be loaded into memory from a storage device to be configured to perform the operations. [00173] For example, when processor unit 904 takes the form of a hardware unit, processor unit 904 can be a circuit system, an application-specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations. With a programmable logic device, the device is configured to perform the number of operations. The device can be reconfigured at a later time or can be permanently configured to perform the number of operations. Examples of programmable logic devices include, for example, a programmable logic array, a field-programmable logic array, a field-programmable gate array, and other suitable hardware devices. With this type of implementation, program code 918 can be omitted, because the processes for the different embodiments are implemented in a hardware unit. [00174] In yet another illustrative example, processor unit 904 can be implemented using a combination of processors found in computers and hardware units. Processor unit 904 can have a number of hardware units and a number of processors that are configured to run program code 918. With this depicted example, some of the processes can be implemented in the number of hardware units, while other processes can be implemented in the number of processors. [00175] In another example, a bus system can be used to implement communications structure 902 and can be comprised of one or more buses, such as a system bus or an input/output bus.
Of course, the bus system can be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system. [00176] Additionally, a communications unit can include a number of devices that transmit data, receive data, or transmit and receive data. A communications unit can be, for example, a modem or a network adapter, two network adapters, or some combination thereof. In addition, a memory can be, for example, memory 906 or a cache, such as found in an interface and memory controller hub that can be present in communications structure 902. [00177] Thus, the different illustrative embodiments provide a method and an apparatus for generating virtual images of a virtual environment. In an illustrative embodiment, a method for generating a virtual image of a virtual environment is provided. A virtual reality manager receives hand position data for at least one hand of a user from a hand system. The virtual reality manager receives head position data for the user's head from a head-mounted system. The virtual reality manager identifies image-based position data and a current frame of reference for a current time using a target image corresponding to the current time. The virtual reality manager generates virtual image control data for the current time using the hand position data, the head position data, the image-based position data, and the current frame of reference. The virtual image control data is configured for use by a virtual image application. [00178] The different illustrative embodiments provide a method and a system that allow a user to interact with a virtual environment. In particular, virtual reality system 100 in Figure 1 allows a user, such as user 104 in Figure 1, to interact with a virtual image application, such as virtual image application 137 in Figure 1. The user can interact with the virtual image application in a manner that affects the virtual environment managed using the virtual image application. [00179] The virtual reality system 100 can be used to perform virtual engineering. For example, a user can use virtual reality system 100 to interact with a virtual engineering environment. The virtual reality system 100 is configured to provide a desired level of hand-eye coordination such that the user can experience a desired level of immersion within the virtual engineering environment. [00180] In the Figures and in the text, in one aspect, a method for generating a virtual image 140 of a virtual environment 102 is disclosed, the method including: receiving, in a virtual reality manager 106, hand position data for at least one hand of a user 104 from a hand system 131; receiving, in the virtual reality manager 106, head position data 120 for a head 112 of user 104 from a head-mounted system 108; identifying, by the virtual reality manager 106, image-based position data 328 and a current frame of reference 330 for a current time 314 using a target image corresponding to the current time 314; and generating, by virtual reality manager 106, virtual image control data for the current time 314 using the hand position data 133, the head position data 120, the image-based position data 328, and the current frame of reference 330, where the virtual image control data 135 is configured for use by a virtual image application 137.
[00181] In a variant, the method additionally includes: sending, by virtual reality manager 106, the virtual image control data 135 to the virtual image application 137; and generating, by the virtual image application 137, a current virtual image 310 of the virtual environment 102 for the current time 314 using the virtual image control data 135 for the current time 314. [00182] In another variant of the method, the step of generating the current virtual image 310 of the virtual environment 102 for the current time 314 using the virtual image control data 135 for the current time 314 includes: [00183] placing a virtual left hand 142 and a virtual right hand 144 in the current virtual image 310 based on the virtual image control data 135, wherein the virtual left hand 142 is an image representing a left hand 124 of user 104 and the virtual right hand 144 is an image representing a right hand 126 of user 104. [00184] In yet another variant, the method additionally includes: displaying, by virtual reality manager 106, the current virtual image 310 of the virtual environment 102 to user 104 on a display device 116 associated with the head-mounted system 108, wherein the current virtual image 310 replaces an earlier virtual image 312 displayed on the display device 116. In one example, the step of generating the virtual image control data 135 for the current time 314 using the hand position data 133, the head position data 120, the image-based position data 328, and the current frame of reference 330 includes: synchronizing, by virtual reality manager 106, the hand position data 133 and the head position data 120 with respect to time. [00185] In yet another variant, the method additionally includes: generating left hand position data 134 using a left glove 128 configured to substantially conform to the left hand 124 of user 104; and generating right hand position data 136 using a right glove 130 configured to substantially conform to the right hand 126 of the user, wherein the left hand position data 134 and the right hand position data 136 form the hand position data 133. [00186] In one case, the step of generating the virtual image control data 135 for the current time 314 includes: identifying modified hand position data 422 using the left hand position data 134 in the hand position data 133, the right hand position data 136 in the hand position data 133, and the image-based position data 328; identifying the modified head position data 120 using the head position data 120 and the current reference frame 330; and generating the virtual image control data 135 using the modified hand position data 422, the modified head position data 120, and a set of constraints 416. In another case, the step of generating the virtual image control data 135 for the current time 314 further includes identifying the set of constraints 416 using user data 145, where the user data 145 is based on a set of user images 150.
[00187] In yet another case, the step of generating the virtual image control data 135 using the modified hand position data 422, the modified head position data 120, and the set of constraints 416 includes: identifying a finger position error 426, a relative hand position error 428, and a head position error 430 using a feedback controller 410, the modified hand position data 422, the modified head position data, and the set of constraints 416; and generating the virtual image control data 135 with a desired level of precision using the feedback controller 410, the finger position error 426, the relative hand position error 428, and the head position error 430. [00188] In one example of the method, the virtual environment 102 is a simulation of an engineering environment selected from one of a design environment, a manufacturing environment, a computer environment, a test environment, a data management environment, an inspection environment, and an operations environment. [00189] In one aspect, a virtual reality system is disclosed including: a head-mounted system 108 configured to be worn relative to a head 112 of a user 104; and a virtual reality manager 106 associated with the head-mounted system 108, wherein the virtual reality manager 106 is configured to: receive hand position data 133 for at least one hand of user 104 from a hand system 131; receive head position data 120 for the head 112 of the user from a sensor system 118 in the head-mounted system 108; identify image-based position data 328 and a current frame of reference 330 for a current time 314 using a target image corresponding to the current time 314; and generate virtual image control data 135 for the current time 314 using the hand position data 133, the head position data 120, the image-based position data 328, and the current reference frame 330, wherein the virtual image control data 135 is configured for use by a virtual image application 137. [00190] In a variant of the virtual reality system, virtual reality manager 106 is configured to send virtual image control data 135 to virtual image application 137 and receive a current virtual image 310 of a virtual environment 102 for the current time 314 from the virtual image application 137. In another variant, the virtual reality system additionally includes: a display device 116 associated with the head-mounted system 108, wherein the virtual reality manager 106 is additionally configured to display the current virtual image 310 of the virtual environment 102 to user 104 on the display device 116. In yet another variant of the virtual reality system, the current virtual image 310 includes a virtual left hand 142 and a virtual right hand 144, wherein the virtual left hand 142 is an image representing a left hand 124 of user 104 and the virtual right hand 144 is an image representing a right hand 126 of user 104.
[00191] In yet another variant of the virtual reality system, virtual reality manager 106 comprises: a data coordinator 336 configured to synchronize the hand position data 133 and the head position data 120 with respect to time, wherein the data coordinator 336 includes: a hand data modulator 418 configured to identify modified hand position data 422 using left hand position data 134 in the hand position data 133, right hand position data 136 in the hand position data 133, and the image-based position data 328; a head data modulator 420 configured to identify modified head position data 120 using the head position data 120 and the current reference frame 330; a control data generator 412 configured to generate the virtual image control data 135 using the modified hand position data 422, the modified head position data 120, and a set of constraints 416; and a constraint identifier 406 configured to identify the constraint set 416 using user data 145, where the user data 145 is based on a set of user images 150. [00192] In one case of the virtual reality system, the data coordinator 336 additionally includes: a feedback controller 410 configured to identify a finger position error 426, a relative hand position error 428, and a head position error 430 using the modified hand position data 422, the modified head position data 120, and the constraint set 416, where the control data generator 412 is configured to use the finger position error 426, the relative hand position error 428, and the head position error 430 to generate the virtual image control data 135. In yet another case, the virtual reality system additionally includes: [00193] the hand system 131, wherein the hand system 131 includes: a left glove 128 configured to generate left hand position data 134, where the left glove 128 is configured to substantially conform to a left hand 124 of user 104; and a right glove 130 configured to generate right hand position data 136, wherein the right glove 130 is configured to substantially conform to a right hand 126 of the user. [00194] In yet another case of the virtual reality system, the virtual image control data 135 is used to control a virtual image 140 of a virtual environment 102 in which the virtual environment 102 is a simulation of an engineering environment 103 selected from one of a design environment, a manufacturing environment, a computer environment, a test environment, a data management environment, an inspection environment, and an operations environment. [00195] In one aspect, a computer is disclosed that includes: a bus; a non-transient storage device connected to the bus, wherein the non-transient storage device includes program code; and a processor unit 904 connected to the bus, where processor unit 904 is configured to execute the program code to: receive hand position data 133 for at least one hand of a user 104 from a hand system 131; receive head position data 120 for a head 112 of user 104 from a head-mounted system 108; identify image-based position data 328 and a current frame of reference 330 for a current time 314 using a target image corresponding to the current time 314; and generate virtual image control data 135 for the current time 314 using the hand position data 133, the head position data 120, the image-based position data 328, and the current reference frame 330, where the virtual image control data 135 is configured for use by a virtual image application 137.
[00196] In a variant of the computer, the processor unit 904 is additionally configured to execute the program code to display a current virtual image 310 for the current time 314, received from the virtual image application 137, on a display device 116 associated with the head-mounted system 108. [00197] The description of the different illustrative embodiments has been presented for purposes of illustration and is not intended to be exhaustive or limited to the embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. In addition, different illustrative embodiments can provide different features compared to other illustrative embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
Claims (8) [0001] 1. Method for generating a virtual image (140) of a virtual environment (102), the method characterized by the fact that it comprises: receiving, in a virtual reality manager (106), hand position data for at least one hand of a user (104) from a hand system (131); receiving, in the virtual reality manager (106), head position data (120) for a head (112) of the user (104) from a head-mounted system (108); providing a target image by an imaging system (322) to identify, by the virtual reality manager (106), image-based position data (328) and a current frame of reference (330) for a current time (314) using the target image corresponding to the current time (314); and generating, by the virtual reality manager (106), virtual image control data for the current time (314) using the hand position data (133), the head position data (120), the image-based position data (328), and the current frame of reference (330), in which the virtual image control data (135) are configured for use by a virtual image application (137), and in which the method further comprises: repeatedly adjusting, by a feedback controller, the virtual image control data (135) until they have the desired level of accuracy, using a finger position error, a relative hand position error, and a head position error, the virtual image control data (135) having the desired level of precision when a sequence of virtual images of the virtual environment generated by the virtual image application represents a substantially real-time simulation of the user's movement and presence within the virtual environment; where the finger position error is a difference between finger positions identified in modified hand position data and simulated finger positions in a current virtual image, where the relative hand position error is a difference between hand positions identified in modified hand position data and virtual hand positions in the current virtual image, and where the head position error is a difference between the head position identified in modified head position data and the head position in the current virtual image; and where the step of generating the virtual image control data (135) for the current time (314) comprises: identifying the modified hand position data (422) using left hand position data (134) in the hand position data (133), right hand position data (136) in the hand position data (133), and the image-based position data (328); identifying the modified head position data (120) using the head position data (120) and the current frame of reference (330); generating the virtual image control data (135) using the modified hand position data (422), the modified head position data (120), and a set of constraints (416); and identifying the set of constraints (416) using user data (145), where the user data (145) is based on a set of user images (150). [0002] 2.
Method, according to claim 1, characterized by the fact that it further comprises: sending, by the virtual reality manager (106), the virtual image control data (135) to the virtual image application (137); and generating, by the virtual image application (137), a current virtual image (310) of the virtual environment (102) for the current time (314) using the virtual image control data (135) for the current time (314); where the step of generating the current virtual image (310) of the virtual environment (102) for the current time (314) using the virtual image control data (135) for the current time (314) includes: placing a virtual left hand (142) and a virtual right hand (144) in the current virtual image (310) based on the virtual image control data (135), where the virtual left hand (142) is an image representing a left hand (124) of the user (104) and the virtual right hand (144) is an image representing a right hand (126) of the user (104). [0003] 3. Method, according to claim 1 or 2, characterized by the fact that it further comprises: displaying, by the virtual reality manager (106), the current virtual image (310) of the virtual environment (102) to the user (104) on a display device (116) associated with the head-mounted system (108), in which the current virtual image (310) replaces an earlier virtual image (312) displayed on the display device (116); generating left hand position data (134) using a left glove (128) configured to substantially conform to a left hand (124) of the user (104); and generating right hand position data (136) using a right glove (130) configured to substantially conform to a right hand (126) of the user (104), where the left hand position data (134) and the right hand position data (136) form the hand position data (133), and in which the step of generating the virtual image control data (135) for the current time (314) using the hand position data (133), the head position data (120), the image-based position data (328), and the current frame of reference (330) comprises: synchronizing, by the virtual reality manager (106), the hand position data (133) and the head position data (120) with respect to time. [0004] 4. Method, according to claim 1, characterized by the fact that the step of generating the virtual image control data (135) using the modified hand position data (422), the modified head position data (120), and the set of constraints (416) comprises: identifying a finger position error (426), a relative hand position error (428), and a head position error (430) using a feedback controller (410), the modified hand position data (422), the modified head position data, and the set of constraints (416); and generating the virtual image control data (135) with the desired level of precision using the feedback controller (410), the finger position error (426), the relative hand position error (428), and the head position error (430); where the virtual environment (102) is a simulation of an engineering environment selected from one of a design environment, a manufacturing environment, a computer environment, a test environment, a data management environment, an inspection environment, and an operations environment. [0005] 5.
Virtual reality system characterized by the fact that it comprises: a head-mounted system (108) configured to be worn relative to a head (112) of a user (104); an imaging system (322) configured to provide a target image; and a virtual reality manager (106) associated with the head-mounted system (108), in which the virtual reality manager (106) is configured to: receive hand position data (133) for at least one hand of the user (104) from a hand system (131); receive head position data (120) for the head (112) of the user (104) from a sensor system (118) in the head-mounted system (108); identify image-based position data (328) and a current frame of reference (330) for a current time (314) using a target image corresponding to the current time (314); generate virtual image control data (135) for the current time (314) using the hand position data (133), the head position data (120), the image-based position data (328), and the current frame of reference (330), in which the virtual image control data (135) is configured for use by a virtual image application (137); where the virtual reality manager is further configured to: repeatedly adjust, through a feedback controller, the virtual image control data until they have the desired level of accuracy, using a finger position error, a relative hand position error, and a head position error, the virtual image control data (135) having the desired level of accuracy when a sequence of virtual images of the virtual environment generated by the virtual image application represents a substantially real-time simulation of the user's movement and presence within the virtual environment; where the finger position error is a difference between finger positions identified in modified hand position data and simulated finger positions in a current virtual image, where the relative hand position error is a difference between hand positions identified in modified hand position data and virtual hand positions in the current virtual image, and where the head position error is a difference between the head position identified in modified head position data and the head position in the current virtual image; and where generating the virtual image control data (135) for the current time (314) comprises: identifying the modified hand position data (422) using left hand position data (134) in the hand position data (133), right hand position data (136) in the hand position data (133), and the image-based position data (328); identifying the modified head position data (120) using the head position data (120) and the current frame of reference (330); generating the virtual image control data (135) using the modified hand position data (422), the modified head position data (120), and a set of constraints (416); and identifying the set of constraints (416) using user data (145), where the user data (145) is based on a set of user images (150). [0006] 6.
Virtual reality system, according to claim 5, characterized by the fact that it further comprises: a display device (116) associated with the head-mounted system (108), in which the virtual reality manager (106) is additionally configured to display a current virtual image (310) of the virtual environment (102) to the user (104) on the display device (116); wherein the virtual reality manager (106) is configured to send the virtual image control data (135) to the virtual image application (137) and receive the current virtual image (310) of the virtual environment (102) for the current time (314) from the virtual image application (137); where the current virtual image (310) includes a virtual left hand (142) and a virtual right hand (144), where the virtual left hand (142) is an image representing the left hand (124) of the user (104) and the virtual right hand (144) is an image representing the right hand (126) of the user (104). [0007] 7. Virtual reality system, according to claim 5 or 6, characterized by the fact that the virtual reality manager (106) comprises: a data coordinator (336) configured to synchronize the hand position data (133) and the head position data (120) with respect to time, in which the data coordinator (336) comprises: a hand data modulator (418) configured to identify modified hand position data (422) using left hand position data (134) in the hand position data (133), right hand position data (136) in the hand position data (133), and image-based position data (328); a head data modulator (420) configured to identify modified head position data (120) using the head position data (120) and the current frame of reference (330); a control data generator (412) configured to generate the virtual image control data (135) using the modified hand position data (422), the modified head position data (120), and a set of constraints (416); and a constraint identifier (406) configured to identify the constraint set (416) using user data (145), where the user data (145) is based on a set of user images (150); wherein the data coordinator (336) further comprises: a feedback controller (410) configured to identify a finger position error (426), a relative hand position error (428), and a head position error (430) using the modified hand position data (422), the modified head position data (120), and the constraint set (416), where the control data generator (412) is configured to use the finger position error (426), the relative hand position error (428), and the head position error (430) to generate the virtual image control data (135). [0008] 8. Virtual reality system, according to any one of claims 5 to 7, characterized by the fact that the virtual reality system further comprises: the hand system (131), in which the hand system (131) comprises: a left glove (128) configured to generate left hand position data (134), wherein the left glove (128) is configured to substantially conform to a left hand (124) of the user; and a right glove (130) configured to generate right hand position data (136), wherein the right glove (130) is configured to substantially conform to a right hand (126) of the user (104); wherein the virtual image control data (135) is used to control a virtual image (140) of a virtual environment (102) in which the virtual environment (102) is a simulation of an engineering environment (103) selected from one of a design environment, a manufacturing environment, a computer environment, a test environment, a data management environment, an inspection environment, and an operations environment.