METHOD AND APPARATUS FOR RECOGNITION OF GESTURE
Patent abstract:
USER INTERFACE, APPARATUS AND METHOD FOR GESTURE RECOGNITION. User interface, apparatus and method for gesture recognition, comprising: predicting one or more possible commands for the apparatus based on one or more sub-gestures previously performed by a user; and indicating the one or more possible commands.
Publication number: BR112013014287B1
Application number: R112013014287-1
Filing date: 2010-12-30
Publication date: 2020-12-29
Inventors: Wei Zhou; Jun Xu; Xiaojun Ma
Applicant: Interdigital Ce Patent Holdings
IPC main classification:
Patent description:
FIELD OF THE INVENTION
The present invention relates generally to gesture recognition and, more particularly, to a user interface, apparatus and method for gesture recognition in an electronic system.

BACKGROUND OF THE INVENTION
As the range of activities carried out with a computer increases, new and innovative ways of providing an interface between the user and the machine are often developed to offer a more natural user experience. For example, a touch screen allows a user to provide input to a computer without a mouse and/or keyboard, so that no desk area is required to operate the computer. Gesture recognition is also receiving more and more attention due to its potential use in sign language recognition, multimodal human-computer interaction, virtual reality and robot control. Gesture recognition is a rapidly developing area of computing that allows a device to recognize certain hand gestures made by the user, so that certain functions of the device can be performed based on the gesture. Gesture recognition systems based on computer vision have been proposed to facilitate a more "natural", efficient and effective user-machine interface. In computer vision, to increase the accuracy of gesture recognition, it is necessary to display the related video captured by the camera on the screen. Such video can help indicate to the user whether his gesture is likely to be recognized correctly and whether he needs to adjust his position. However, displaying the video captured by the camera usually has a negative impact on the user watching a program on the screen. It is therefore necessary to find a way to minimize the change to the program displayed on the screen while maintaining high recognition accuracy. On the other hand, more and more compound gestures (such as grab and drop) have recently been applied in UIs (user interfaces). These compound gestures usually include sub-gestures and are more difficult to recognize than a single gesture. The patent application US20100050133, "Compound Gesture Recognition", by H. Keith Nishihara et al., filed on August 22, 2008, proposes a method that includes multiple cameras and tries to detect and translate the different sub-gestures into different device inputs. However, the cost and placement of multiple cameras limit the application of this method in domestic settings. It is therefore important to study compound gesture recognition in user interface systems.

SUMMARY OF THE INVENTION
The invention relates to a user interface in a gesture recognition system, comprising: a display window adapted to indicate a sub-gesture of at least one gesture command, according to one or more sub-gestures previously performed by a user and received by the gesture recognition system. The invention also relates to an apparatus comprising: a gesture prediction unit configured to predict one or more possible commands for the apparatus based on one or more sub-gestures previously performed by a user; and a display configured to indicate the one or more possible commands. The invention also relates to a method for gesture recognition comprising: predicting one or more possible commands for the device based on one or more sub-gestures previously performed by a user; and indicating the one or more possible commands.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects, features and advantages of the present invention will become apparent from the following description of an embodiment in connection with the accompanying drawings:
Figure 1 is a block diagram showing an example of a gesture recognition system according to an embodiment of the invention;
Figure 2 shows a diagram of hand gestures used to explain the invention;
Figure 3 is a diagram showing examples of the user interface display window according to the embodiment of the invention;
Figure 4 is a diagram showing a user interface region on the display screen according to the embodiment;
Figure 5 is a flowchart showing a method of controlling the opacity of the display window;
Figure 6 is a flowchart showing a method for gesture recognition according to the embodiment of the invention.
It should be understood that the drawings are intended to illustrate the concepts of the disclosure and are not necessarily the only possible configuration for illustrating the disclosure.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
In the following detailed description, a user interface, an apparatus and a method for gesture recognition are described to provide a thorough understanding of the present invention. However, it will be recognized by the person skilled in the art that the present invention can be practiced without these specific details or with equivalents thereof. In other cases, well-known methods, procedures, components and circuits have not been described in detail so as not to unnecessarily distract attention from aspects of the present invention.
A user can provide simulated input to a computer, TV or other electronic device. It should be understood that the simulated input can be provided by a compound gesture, a single gesture, or even any body gesture performed by the user. For example, the user could provide gestures that include a predefined motion in a gesture recognition environment. The user provides the gesture input, for example, with one or both hands; with a wand, a stylus or a pointing stick; or with a variety of other devices with which the user can gesture. The simulated input could be, for example, simulated mouse input, used to establish a reference to the displayed visual content and to execute a command on the portions of the visual content to which the reference relates.
Figure 1 is a block diagram showing an example of a gesture recognition system 100 according to an embodiment of the invention. As shown in Figure 1, the gesture recognition system 100 includes a camera 101, a display screen 102, a screen 108-1, a screen 108-2, a display controller 104, a gesture predictor 105, a gesture recognition unit 106 and a gesture database 107. For example, camera 101 is mounted above display screen 102, and screens 108-1 and 108-2 are located on the left and right sides of display screen 102, respectively. The user standing in front of the display screen 102 can provide simulated input to the gesture recognition system 100 through an input object. In the embodiment, the input object is shown as a user's hand, so that the simulated input can be provided through hand gestures. It should be understood that using the hand to provide simulated input through hand gestures is just one example implementation of the gesture recognition system 100.
In addition, in the example where the user's hand serves as the input object for providing simulated input, the user's hand may wear a glove and/or fingertip or knuckle sensors, or it may be the user's bare hand.
In the embodiment of Figure 1, the camera 101 can rapidly take photographic images of the user's hand gesture, for example thirty times per second, and the images are provided to the gesture recognition unit 106 to identify the user's gesture. Gesture recognition has recently been receiving more and more attention due to its potential use in sign language recognition, multimodal human-computer interaction, virtual reality and robot control. Most prior-art gesture recognition methods match sequences of observed images against training samples or a model. The input sequence is classified as the class whose samples or model best fits it. Dynamic Time Warping (DTW), Continuous Dynamic Programming (CDP), the Hidden Markov Model (HMM) and the Conditional Random Field (CRF) are exemplary methods of this category in the prior art. HMM is the most widely used technique for gesture recognition. The detailed recognition method for sub-gestures will not be described here.
The gesture recognition unit 106, the gesture predictor 105, the display controller 104 and the gesture database 107 can be located, for example, inside a computer (not shown) or in embedded processors, in order to process the respective images associated with the input object and generate the control indication shown in a display window 103 of the display screen 102. According to the embodiment, inputs of a single gesture and of compound gestures performed by users can be identified. A compound gesture can be a gesture in which multiple sub-gestures are employed to provide multiple related device inputs. For example, a first sub-gesture can be a reference gesture that refers to a portion of the visual content, and a second sub-gesture can be an execution gesture, performed immediately after the first sub-gesture, which executes a command on the portion of the visual content to which the first sub-gesture refers. A single gesture includes only one sub-gesture and is executed immediately after the sub-gesture is identified.
Figure 2 shows exemplary hand gestures used to explain the invention. As shown in Figure 2, a compound gesture includes several sub-gestures (or subsequent gestures), depending on which function it represents. We call the first sub-gesture the main gesture and the last one the termination gesture. In a 3D UI (three-dimensional user interface), there are many functions that share the same first gesture. For example, a typical compound gesture is "grab and drop". In this case, a user can grab a scene from a TV show with a hand gesture and drop it onto a nearby picture frame or device screen by making a hand gesture that means DROP. Here, the definition of this compound gesture includes three portions (sub-gestures): grab, drop, and where to drop. For example, in the user's living room there are a TV set and two tablets placed on the left and right sides of the TV, respectively, as shown in Figure 1. These two tablets have already registered with the system and are connected with the gesture database 107. Thus, the compound "grab and drop" gestures include two types.
One type has the two sub-gestures "grab and drop left", as shown in Figure 2(b), which means that the screen content indicated by the user will be dropped onto the tablet on the left and transmitted to the left tablet 108-1 via database 107; the other type has "grab and drop right", as shown in Figure 2(a), which means that the screen content indicated by the user will be dropped onto the tablet on the right and transmitted to the right tablet 108-2 via database 107. These two types share the same first sub-gesture, "grab". Of course, if the second sub-gesture is still "grab", i.e. the same as the first "grab" gesture as shown in Figure 2(c), and the "grab" is then held for more than one second, this means that the compound gesture contains only a "grab" sub-gesture and the content on the screen will be stored or dropped locally.
Returning to Figure 1, the gesture predictor 105 of the gesture recognition system 100 is configured to predict one or more possible gesture commands for the device based on the user gesture or gestures previously recognized by the gesture recognition unit 106 and their sequence or order. To make the prediction, a compound gesture database 107 is needed, which is configured to store the predefined gestures together with their specific command functions. When the gesture images obtained by the camera 101 are recognized by the gesture recognition unit 106, the recognition result, for example a predefined sub-gesture, is fed to the gesture predictor 105. Then, by consulting the gesture database 107 based on the recognition result, the gesture predictor 105 predicts one or more possible gesture commands, and the next sub-gesture of the possible gesture commands is shown as an indication in a display window 103. For example, when the first sub-gesture "grab" is recognized, by consulting database 107 the predictor can conclude that there are three possible candidates for this compound gesture: "grab and drop left", "grab and drop right" and "grab only".
Database 107 also contains simple and compound gestures such as: when the main sub-gesture is "wave the right hand", the termination gestures can be, respectively, "wave the right hand", "wave both hands", "raise the right hand", or "stay still". For example, the main gesture means turning on the TV set. If the termination gesture is "wave the right hand", the TV set shows the set-top box program. If the termination gesture is "wave both hands", the TV set shows the program from a media server. If the termination gesture is "raise the right hand", the TV set displays the program from a DVD (digital video disc). If the termination gesture is "stay still", the TV set will not show any program.
Although the invention is explained by taking the compound gesture "grab and drop" and two-stage sub-gestures as an example, this should not be considered a limitation of the invention. According to the embodiment, the display window 103 showing a user interface of the gesture recognition system 100 is used to indicate the next sub-gesture of the possible command or commands obtained by the gesture predictor 105, together with information on how to perform the next possible complete command.
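To make the prediction step concrete, the following Python sketch illustrates one way a gesture database and a predictor could be organized. It is not taken from the patent; all names (GestureCommand, GESTURE_DATABASE, predict_commands, and the "hold" pseudo sub-gesture used to stand in for the one-second grab timeout) are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class GestureCommand:
    """A gesture command defined as an ordered sequence of sub-gestures."""
    name: str
    sub_gestures: tuple  # first element is the main gesture, last is the termination gesture
    action: str


# Hypothetical contents of gesture database 107 for the "grab and drop" example.
# Holding the grab for more than one second is modelled here as a "hold" pseudo
# sub-gesture, a simplification of the timeout described in the text.
GESTURE_DATABASE = (
    GestureCommand("grab and drop left",  ("grab", "drop_left"),  "send content to left tablet 108-1"),
    GestureCommand("grab and drop right", ("grab", "drop_right"), "send content to right tablet 108-2"),
    GestureCommand("grab only",           ("grab", "hold"),       "store content locally"),
)


def predict_commands(recognized, database=GESTURE_DATABASE):
    """Return the commands whose sub-gesture sequence starts with the
    sub-gestures recognized so far, in the same order."""
    prefix = tuple(recognized)
    return [cmd for cmd in database if cmd.sub_gestures[:len(prefix)] == prefix]


# After the first "grab" sub-gesture all three candidates remain;
# after "grab" followed by "drop_left" only one candidate is left.
assert len(predict_commands(["grab"])) == 3
assert [c.name for c in predict_commands(["grab", "drop_left"])] == ["grab and drop left"]
```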
Figure 3 is a diagram showing examples of the display window 103 according to the embodiment of the invention. Here, the size and location of the display window can be chosen by a person skilled in the art as required, and the window can cover the entire screen image on the display screen 102 or be transparent over the image. The display window 103 on the display screen 102 is controlled by the display controller 104. The display controller 104 provides guidance or instructions on how to perform the next sub-gesture for each compound gesture provided by the gesture predictor 105, according to the gestures predefined in the database 107, and these indications or instructions are shown in the display window 103 as suggestions together with information about the commands. For example, the display window 103 on the display screen 102 could highlight a region of the screen as a display window to help the user continue with the next sub-gestures. In this region, various suggestions, for example dotted lines with arrows or curved dotted lines, are used to show the next sub-gesture of the possible commands. The command information includes "grab and drop left" to guide the user to move the hand to the left, "grab and drop right" to guide the user to move the hand to the right, and "grab only" to guide the user to hold the grab gesture. In addition, an indication of the sub-gesture received by the gesture recognition system 100 is also shown at a location corresponding to the suggestions in the display window 103. The indication can be the image received by the system or any image representing the sub-gesture. Adobe Flash®, Microsoft Silverlight® and JavaFX® can all be used by the display controller to implement this type of application as indicated in the display window 103. Furthermore, the suggestions are not limited to those described above and can be implemented as any other indications, as needed by a person skilled in the art, provided that the suggestions can help users follow one of them to complete the gesture command.
Figure 4 is a diagram showing a region on the display screen 102 according to the embodiment. As shown in Figure 4, the opacity with which the above indications and instructions are displayed is a key parameter for helping the gesture recognition process gradually become clearer. For example, the alpha value in "RGBA" (Red Green Blue Alpha) is a blend value (0~1) that is used here to describe the opacity (0~1) of the region, so as to reflect the progress of gesture recognition and help the gesture recognition process gradually become clearer. For example, once a first grab sub-gesture has been recognized and the suggestions are shown in the display window, and the user then performs the compound gesture "grab and drop left" by following one of the suggestions, which is also recognized by the recognition unit, the suggestions for the other candidates, "grab and drop right" and "grab only", will disappear from the display window, as shown in Figure 4(a). At the same time, the opacity of the display window will decrease as the execution of the "grab and drop left" gesture progresses, as shown in Figure 4(b).
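One plausible way to drive the RGBA alpha value described above is to tie the window opacity to how many predicted commands are still possible, so that the window fades as the compound gesture is narrowed down and completed. This is only a sketch of the idea, not a formula given in the patent.

```python
def window_alpha(num_candidates, initial_candidates, finished=False):
    """Alpha blend value (0.0-1.0) for the display window.

    The window is fully opaque right after the initial prediction and its
    opacity is reduced as the set of possible commands shrinks; it becomes
    fully transparent once the gesture command has been completed."""
    if finished or initial_candidates <= 0:
        return 0.0
    return min(num_candidates / initial_candidates, 1.0)


# Three candidates right after "grab" -> fully visible; one candidate left
# while "grab and drop left" is in progress -> reduced opacity.
assert window_alpha(3, 3) == 1.0
assert round(window_alpha(1, 3), 2) == 0.33
assert window_alpha(1, 3, finished=True) == 0.0
```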
Figure 5 is a flowchart showing the method used by the display controller 104 to control the opacity of the display window, taking the compound "grab and drop" gesture as an example. In step 501, it is determined whether a grab gesture has been performed by the user, which means that the grab gesture has been recognized by the recognition unit. If the answer is no, the method goes to step 510 and the controller remains in standby. Otherwise, the alpha blend values of the direction lines or where-to-drop suggestions for all adjacent sub-gesture steps and for the current sub-gesture step are all set to 1 in step 502. This means that all information in the display window is displayed clearly. Then, in step 503, it is judged whether the grab gesture has been held for a specific time according to the result of the recognition unit. If the answer is affirmative, this means that "grab only" is being performed, and the alpha blend values of the direction lines or where-to-drop suggestions for all adjacent sub-gesture steps are set to 0 in step 506. This means that all adjacent sub-gesture suggestions disappear from the window. If the answer in step 503 is no, the method goes to step 505 to judge the direction of movement of the grab gesture. If the gesture moves in one direction according to the recognition result, the alpha blend values of the direction lines or where-to-drop suggestions for the other directions are all set to 0 in step 507. Then, if the drop gesture is executed according to the recognition result in step 508, the alpha blend value of the direction line or where-to-drop suggestion for the direction taken is also set to 0, or gradually decreased to 0, in step 509. On the other hand, if the "grab only" gesture is being performed and the drop or store step is carried out, the alpha blend value of its suggestion is likewise set to 0 or gradually decreased to 0.
Figure 6 is a flowchart showing a method for gesture recognition according to the embodiment of the invention. According to the embodiment of the invention, when the first sub-gesture is recognized based on the location of the hand and other characteristics of the hand, an estimate of which gesture commands may follow can be obtained based on knowledge of all the gesture definitions in the database. A window then appears on the display screen to show the gesture and suggestions for the estimated gesture commands. When the second sub-gesture is recognized, the number of estimated gesture commands, based on the result of recognizing the first and second sub-gestures, will change; typically it will be smaller than the number based on the first sub-gesture alone. In the same way as described in the paragraph above, the new estimation result is analyzed and suggestions on how to finish the next sub-gesture of the commands are given. In addition, if the number of estimation results decreases, the opacity of the window decreases as well. The change in the opacity of the window can be seen as another type of suggestion for identifying the compound gesture.
As shown in Figure 6, the user's gesture, as the first sub-gesture, is identified by the gesture recognition unit 106 in step 601. Then, in step 602, the predictor 105 predicts one or more possible commands for the system based on the sub-gesture or sub-gestures recognized in step 601, and the next sub-gesture of at least one possible command is indicated by a user interface in a display window in step 603. Then, when the next sub-gesture of one command is being executed, the suggestions for the others disappear from the user interface in step 604, and the opacity of the display window is decreased in step 605. Finally, when the user has finished the gesture command, the display window also disappears, in step 606.
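Read together, the two flowcharts amount to a loop that re-runs the prediction after every recognized sub-gesture, removes suggestions that no longer match, and lowers the window opacity as candidates drop out. The sketch below is hypothetical glue code around the helpers sketched earlier; recognize_next_sub_gesture, show_suggestions and set_window_alpha stand in for the camera/recognition pipeline and the display controller, which the patent does not expose as an API.

```python
def run_gesture_ui(recognize_next_sub_gesture, show_suggestions, set_window_alpha):
    """Simplified control loop corresponding to Figures 5 and 6."""
    recognized = []
    initial_count = None

    while True:
        sub_gesture = recognize_next_sub_gesture()   # e.g. "grab", "drop_left", "hold", or None
        if sub_gesture is None:                      # no further sub-gesture: hide the window (step 606)
            set_window_alpha(0.0)
            return None
        recognized.append(sub_gesture)

        candidates = predict_commands(recognized)    # steps 601-602
        if initial_count is None:
            initial_count = len(candidates)

        # Suggestions for commands that no longer match simply drop out of
        # `candidates`, so they disappear from the window (step 604).
        show_suggestions(candidates)

        # A command is complete when its full sub-gesture sequence has been performed.
        done = [c for c in candidates if list(c.sub_gestures) == recognized]
        set_window_alpha(window_alpha(len(candidates), initial_count, finished=bool(done)))  # step 605
        if done:
            return done[0]                           # the completed gesture command
```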
Although the embodiment is described based on the first and second sub-gestures, the recognition of further sub-gestures and the suggestions for their next command sub-gestures shown in the user interface are also applicable in the inventive embodiment. If no further sub-gesture is received or the gesture command has ended, the display window disappears from the screen. What has been described merely illustrates an embodiment of the invention, and it will therefore be appreciated that those skilled in the art will be able to devise numerous alternative arrangements that, although not explicitly described here, embody the principles of the invention and fall within its spirit and scope.
Claims:
Claims (9)
[0001] 1. Apparatus, comprising: a gesture prediction unit configured to predict one or more possible commands for the apparatus based on one or more sub-gestures, and on an order of the one or more sub-gestures, previously executed by a user and recognized by the apparatus; a display configured to display an indication of a next sub-gesture of the one or more possible commands in a user interface in a display window, CHARACTERIZED by the fact that the display window has an opacity, and after an initial prediction, the opacity of the display window is reduced when the number of the one or more possible commands decreases compared to the number of the one or more possible commands previously predicted.
[0002] 2. Apparatus according to claim 1, CHARACTERIZED by the fact that the display is also configured to indicate the next sub-gesture in the user interface by a suggestion together with information on how to perform the next gesture to complete the commands.
[0003] 3. Apparatus according to claim 1 or 2, CHARACTERIZED by the fact that the display is also configured to indicate the one or more sub-gestures recognized by the apparatus.
[0004] 4. Apparatus according to any one of claims 1 to 3, CHARACTERIZED by the fact that, when the next sub-gesture of one of the one or more possible commands is executed by the user and recognized by the apparatus, the display is also configured to make the next sub-gestures of the other possible commands disappear from the user interface.
[0005] 5. Apparatus according to any one of claims 1 to 4, CHARACTERIZED by the fact that the gesture prediction unit is configured to predict the one or more possible commands by using the one or more recognized sub-gestures and the order of the one or more sub-gestures to search a database, wherein the database comprises the gesture definitions of at least one gesture command, each gesture command comprising at least one sub-gesture in a predefined order.
[0006] 6. Method for gesture recognition on a device, comprising: predicting one or more possible commands for the device based on one or more sub-gestures, and on an order of the one or more sub-gestures, previously recognized by the device; indicating a next sub-gesture of the one or more possible commands by a user interface as an indication in a display window, CHARACTERIZED by the fact that the display window has an opacity, and after the initial prediction, the opacity of the display window is reduced when the number of the one or more possible commands decreases compared to the number of the one or more possible commands previously predicted.
[0007] 7. Method according to claim 6, CHARACTERIZED by the fact that the next sub-gesture is indicated by a suggestion shown in the user interface, and an indication of the one or more sub-gestures recognized by the device is also shown in the user interface.
[0008] 8. Method according to claim 6 or 7, CHARACTERIZED by the fact that the one or more possible commands are predicted by using the one or more recognized sub-gestures and the order of the one or more sub-gestures to search a database, wherein the database comprises the gesture definitions of at least one gesture command, each gesture command comprising at least one sub-gesture in a predefined order.
[0009] 9. Method according to claim 7, CHARACTERIZED by the fact that the suggestion is shown together with information on how to execute the next sub-gesture to complete the at least one command.
Similar technologies:
Publication number | Publication date | Patent title
BR112013014287B1|2020-12-29|METHOD AND APPARATUS FOR RECOGNITION OF GESTURE
JP6038898B2|2016-12-07|Edge gesture
JP5980913B2|2016-08-31|Edge gesture
KR102151286B1|2020-09-02|Multi-modal user expressions and user intensity as interactions with an application
CN106687889A|2017-05-17|Display-efficient text entry and editing
US9965039B2|2018-05-08|Device and method for displaying user interface of virtual input device based on motion recognition
EP3189397B1|2019-12-18|Information processing device, information processing method, and program
CN107003727B|2020-08-14|Electronic device for running multiple applications and method for controlling electronic device
US10275122B2|2019-04-30|Semantic card view
US20150234567A1|2015-08-20|Information processing apparatus, information processing method and computer program
US20180321739A1|2018-11-08|Electronic device and method for controlling display
JP2016192122A|2016-11-10|Information processing device, information processing method, and program
US10580382B2|2020-03-03|Power and processor management for a personal imaging system based on user interaction with a mobile device
KR20160101605A|2016-08-25|Gesture input processing method and electronic device supporting the same
CN106796810A|2017-05-31|On a user interface frame is selected from video
US9684445B2|2017-06-20|Mobile gesture reporting and replay with unresponsive gestures identification and analysis
US10528371B2|2020-01-07|Method and device for providing help guide
EP3479221A1|2019-05-08|Electronic device and information providing method thereof
US20190056858A1|2019-02-21|User interface modification
JP2015035120A5|2016-01-14|
US11093122B1|2021-08-17|Graphical user interface for displaying contextually relevant data
JP2012003723A|2012-01-05|Motion analyzer, motion analysis method and program
WO2021141746A1|2021-07-15|Systems and methods for performing a search based on selection of on-screen entities and real-world entities
Nishimoto2014|Multi-User Interface for Scalable Resolution Touch Walls
Patent family:
公开号 | 公开日 BR112013014287A2|2016-09-20| KR101811909B1|2018-01-25| JP2014501413A|2014-01-20| US20130283202A1|2013-10-24| EP2659336B1|2019-06-26| KR20140014101A|2014-02-05| WO2012088634A1|2012-07-05| EP2659336A4|2016-09-28| CN103380405A|2013-10-30| AU2010366331B2|2016-07-14| JP5885309B2|2016-03-15| AU2010366331A1|2013-07-04| EP2659336A1|2013-11-06|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
CA2323856A1|2000-10-18|2002-04-18|602531 British Columbia Ltd.|Method, system and media for entering data in a personal computing device
US7343566B1|2002-07-10|2008-03-11|Apple Inc.|Method and apparatus for displaying a window for a user interface
US7665041B2|2003-03-25|2010-02-16|Microsoft Corporation|Architecture for controlling a computer using hand gestures
US7466859B2|2004-12-30|2008-12-16|Motorola, Inc.|Candidate list enhancement for predictive text input in electronic devices
KR100687737B1|2005-03-19|2007-02-27|한국전자통신연구원|Apparatus and method for a virtual mouse based on two-hands gesture
JP4684745B2|2005-05-27|2011-05-18|三菱電機株式会社|User interface device and user interface method
JP4602166B2|2005-06-07|2010-12-22|富士通株式会社|Handwritten information input device.
WO2007052382A1|2005-11-02|2007-05-10|Matsushita Electric Industrial Co., Ltd.|Display-object penetrating apparatus
JP4267648B2|2006-08-25|2009-05-27|株式会社東芝|Interface device and method thereof
KR101304461B1|2006-12-04|2013-09-04|삼성전자주식회사|Method and apparatus of gesture-based user interface
US20090049413A1|2007-08-16|2009-02-19|Nokia Corporation|Apparatus and Method for Tagging Items
US20090100383A1|2007-10-16|2009-04-16|Microsoft Corporation|Predictive gesturing in graphical user interface
JP2010015238A|2008-07-01|2010-01-21|Sony Corp|Information processor and display method for auxiliary information
US8972902B2|2008-08-22|2015-03-03|Northrop Grumman Systems Corporation|Compound gesture recognition
TW201009650A|2008-08-28|2010-03-01|Acer Inc|Gesture guide system and method for controlling computer system by gesture
US8285499B2|2009-03-16|2012-10-09|Apple Inc.|Event recognition
US7983450B2|2009-03-16|2011-07-19|The Boeing Company|Method, apparatus and computer program product for recognizing a gesture
JP5256109B2|2009-04-23|2013-08-07|株式会社日立製作所|Display device
CN101706704B|2009-11-06|2011-05-25|谢达|Method for displaying user interface capable of automatically changing opacity
US8622742B2|2009-11-16|2014-01-07|Microsoft Corporation|Teaching gestures with offset contact silhouettes
JP2011204019A|2010-03-25|2011-10-13|Sony Corp|Gesture input device, gesture input method, and program
TWI514194B|2010-06-18|2015-12-21|Prime View Int Co Ltd|Electronic reader and displaying method thereof
JP5601045B2|2010-06-24|2014-10-08|ソニー株式会社|Gesture recognition device, gesture recognition method and program
US20120044179A1|2010-08-17|2012-02-23|Google, Inc.|Touch-based gesture detection for a touch-sensitive device
US8701050B1|2013-03-08|2014-04-15|Google Inc.|Gesture completion path display for gesture-based keyboards
JP5585505B2|2011-03-17|2014-09-10|セイコーエプソン株式会社|Image supply apparatus, image display system, image supply apparatus control method, image display apparatus, and program
KR101322465B1|2011-11-17|2013-10-28|삼성전자주식회사|Method and apparatus for taking a self camera recording
SE537553C2|2012-08-03|2015-06-09|Crunchfish Ab|Improved identification of a gesture
KR101984683B1|2012-10-10|2019-05-31|삼성전자주식회사|Multi display device and method for controlling thereof
JP6212918B2|2013-04-18|2017-10-18|オムロン株式会社|Game machine
US20150007117A1|2013-06-26|2015-01-01|Microsoft Corporation|Self-revealing symbolic gestures
US9740923B2|2014-01-15|2017-08-22|Lenovo Pte. Ltd.|Image gestures for edge input
CN103978487B|2014-05-06|2017-01-11|宁波易拓智谱机器人有限公司|Gesture-based control method for terminal position of universal robot
CN104615984B|2015-01-28|2018-02-02|广东工业大学|Gesture identification method based on user task
JP6355829B2|2015-04-17|2018-07-11|三菱電機株式会社|Gesture recognition device, gesture recognition method, and information processing device
US9967717B2|2015-09-01|2018-05-08|Ford Global Technologies, Llc|Efficient tracking of personal device locations
US9914418B2|2015-09-01|2018-03-13|Ford Global Technologies, Llc|In-vehicle control location
US10046637B2|2015-12-11|2018-08-14|Ford Global Technologies, Llc|In-vehicle component control user interface
WO2017104525A1|2015-12-17|2017-06-22|コニカミノルタ株式会社|Input device, electronic device, and head-mounted display
US10082877B2|2016-03-15|2018-09-25|Ford Global Technologies, Llc|Orientation-independent air gesture detection service for in-vehicle environments
US9584653B1|2016-04-10|2017-02-28|Philip Scott Lyren|Smartphone with user interface to externally localize telephone calls
US9914415B2|2016-04-25|2018-03-13|Ford Global Technologies, Llc|Connectionless communication with interior vehicle components
DE102016212240A1|2016-07-05|2018-01-11|Siemens Aktiengesellschaft|Method for interaction of an operator with a model of a technical system
CN108520228A|2018-03-30|2018-09-11|百度在线网络技术(北京)有限公司|Gesture matching process and device
CN112527093A|2019-09-18|2021-03-19|华为技术有限公司|Gesture input method and electronic equipment
CN110795015A|2019-09-25|2020-02-14|广州视源电子科技股份有限公司|Operation prompting method, device, equipment and storage medium
Legal status:
2019-01-08 | B06F | Objections, documents and/or translations needed after an examination request, according to [chapter 6.6 patent gazette]
2019-07-16 | B25G | Requested change of headquarters approved | Owner name: THOMSON LICENSING (FR)
2019-08-13 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2019-09-17 | B25A | Requested transfer of rights approved | Owner name: INTERDIGITAL CE PATENT HOLDINGS (FR)
2020-11-17 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2020-12-29 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 30/12/2010, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
PCT/CN2010/002206|WO2012088634A1|2010-12-30|2010-12-30|User interface, apparatus and method for gesture recognition