Patent abstract:
The general field of the invention is that of methods of using a man-machine interface device for an aircraft comprising at least one speech recognition unit (13), a display device (10) with a tactile interface (11), a graphical interface computer (12) and an electronic computing unit (14), the assembly being arranged to graphically display a plurality of commands, each command being classified into at least a first so-called critical category or a second so-called non-critical category, each non-critical command having a plurality of options, each option having a name, said names being assembled in a database (140) called a "lexicon". The method according to the invention comprises steps of recognition of the displayed commands, activation of the speech recognition unit, comparison between the tactile and voice information, and a validation step.
Publication number: FR3044436A1
Application number: FR1502480
Filing date: 2015-11-27
Publication date: 2017-06-02
Inventors: Francois Michel;Stephanie Lafon;Jean Baptiste Bernard
Applicant: Thales SA;
Main IPC class:
Patent description:

Method of using a human-machine interface device for an aircraft comprising a speech recognition unit
The field of the invention is that of man-machine interactions in the cockpit of an aircraft and more specifically that of systems comprising a voice-activated device and a tactile device.
In modern cockpits, the interactions between the pilot and the aircraft are made by means of different man-machine interfaces. The main ones involve interactions with the dashboard display devices that display the main navigation and control parameters necessary for the smooth running of the flight plan or the execution of the mission. For this purpose, tactile surfaces are increasingly being used which allow simple interactions with the display devices.
To further simplify the pilot's interactions with the onboard system, it is possible to use speech as a means of interaction via a voice recognition system.
Speech recognition has been studied experimentally in the avionics field. To ensure recognition performance compatible with use in a potentially noisy aeronautical environment, solutions based on a limited dictionary of commands and on prior training by the user have been implemented. In addition, these solutions require a push-to-talk switch, for example a physical button in the cockpit, to start and stop voice recognition.
It is also possible to use a touchpad to trigger voice recognition. Thus, application WO2010/144732, entitled "Touch anywhere to speak", describes a system for mobile electronic devices that triggers speech recognition by tactile interaction. This application is silent on the security aspects specific to the aeronautical field and does not propose solutions to improve the reliability of speech recognition in noisy environments.
Thus, current solutions require a physical push-to-talk switch, learning by the pilot of the list of commands available through voice recognition, and a system of acknowledgment of the result. In addition, speech recognition performance generally limits its use.
The method of using a man-machine interface device for an aircraft comprising a speech recognition unit according to the invention does not have these disadvantages. It makes it possible to:
- Limit tedious tactile interactions such as typing on a virtual keyboard, which generate errors or discomfort, especially during atmospheric turbulence;
- Provide a level of security compatible with aeronautical standards on the use of voice recognition;
- Limit the pilot's learning of the voice command dictionary by placing the voice command in a specific and limited context, thus considerably reducing the risk of errors.
It also provides simple management of critical commands and non-critical commands. By critical command is meant an order likely to endanger the safety of the aircraft; thus, starting or shutting down the engines is a critical command. By non-critical command is meant an order that has no significant impact on the safety of the flight or the aircraft; thus, a radio frequency change is not a critical command.
More specifically, the subject of the invention is a method of using a human-machine interface device for an aircraft comprising at least one speech recognition unit, a display device with a tactile interface, a graphical interface computer and an electronic computing unit, the assembly being arranged to graphically present a plurality of commands, each command being classified into at least a first so-called critical category or a second so-called non-critical category, each non-critical command having a plurality of options, each option having a name, said names being assembled in a database called a "lexicon", characterized in that:
- when the command is critical, the method of use comprises the following steps:
o Recognition of the critical command activated by a user by means of the tactile interface;
o Activation of the speech recognition unit according to said command;
o Comparison between the speech decoded by the speech recognition unit and the activated command;
o Validation of the activated command if the decoded speech corresponds to the activated command;
- when the command is non-critical, the method of use comprises the following steps:
o Recognition of the non-critical command activated by a user by means of the tactile interface;
o Activation of the speech recognition unit according to said command;
o Comparison between the speech decoded by the speech recognition unit and the names of the lexicon associated with the activated command;
o Selection of the name of the lexicon best corresponding to the decoded speech;
o Display of the option corresponding to the name of the lexicon.
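The two branches of the method above can be summarized as follows. This is a purely illustrative sketch (the names `Command` and `handle_command`, and the example lexicon, are hypothetical and not part of the patent); it assumes the decoded speech is already available as text:

```python
import difflib
from dataclasses import dataclass

@dataclass
class Command:
    name: str
    critical: bool

# Illustrative "lexicon" database: one set of option names per non-critical command.
LEXICONS = {"VHF frequency": ["118.700", "121.500", "127.350"]}

def handle_command(command, decoded_speech):
    """Return the validated command (critical) or selected option name
    (non-critical), or None if validation fails."""
    if command.critical:
        # Critical branch: the decoded speech must correspond to the command.
        return command.name if decoded_speech == command.name else None
    # Non-critical branch: select the lexicon name best matching the speech.
    names = LEXICONS[command.name]
    best = difflib.get_close_matches(decoded_speech, names, n=1, cutoff=0.0)
    return best[0] if best else None

print(handle_command(Command("stop of the left engine", True), "stop of the left engine"))
print(handle_command(Command("VHF frequency", False), "121.50"))
```

The `difflib` best-match step stands in for whichever similarity measure a real recognizer would use to pick the lexicon entry closest to the decoded speech.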
Advantageously, when the command is non-critical, the option corresponding to the name of the lexicon is automatically implemented.
Advantageously, the activation function of the speech recognition unit is only active for a limited time beginning at the instant of the recognition of the command activated by a user by means of the touch interface.
Advantageously, this duration is proportional to the size of the lexicon.
Advantageously, this duration is less than or equal to 10 seconds.

The invention will be better understood and other advantages will become apparent on reading the description which follows, given by way of non-limiting example and with reference to the appended FIG. 1, which represents the block diagram of a human-machine interface device for an aircraft according to the invention.
The method according to the invention is implemented in a human-machine interface device for an aircraft and, more specifically, in its electronic computing unit. By way of example, all the means of the human-machine interface device 1 are represented in FIG. 1. It comprises at least:
- a display device 10 with a tactile interface 11;
- a graphical interface computer 12;
- a speech recognition unit 13;
- an electronic computing unit 14. This unit is surrounded by a dotted line in FIG. 1.
The display device 10 is conventionally a liquid crystal flat screen; other technologies are possible. It presents information on the piloting, the navigation or the avionics system of the aircraft. The touch interface 11 is in the form of a transparent touchscreen disposed on the screen of the display device. This touchscreen is similar to the touchscreens implemented on "tablets" or "smartphones" intended for the general public; several technical solutions, well known to those skilled in the art, make it possible to produce this type of touchscreen.

The graphical interface computer 12 develops, from various data coming from the sensors or databases of the aircraft, the graphical information sent to the display device. This information includes a number of commands, each command having a number of possible options. For example, the "transmit frequency" command has a number of possible frequency options. The graphical interface computer 12 also retrieves the information from the touchscreen, which is transformed into command or validation instructions for the rest of the avionics system.

The speech recognition unit 13 conventionally comprises a microphone 130 and speech processing means for recognizing the words uttered by a user. Again, these different means are known to those skilled in the art. This unit is configurable in the sense that the lexicon of commands/words to be recognized can be specified at any time. The speech recognition unit is active only for a limited period of time beginning at the instant of recognition of the command activated by the user by means of the touch interface.
The triggering and stopping of voice recognition is therefore an intelligent mechanism:
- The voice command is triggered by a tactile interaction, allowing the pilot to designate the object to be modified;
- The recognition ends either at the end of the first tactile interaction, or as soon as a recognition according to the selected lexicon is detected. There is also a time limit that prevents speech recognition from remaining active indefinitely if, for example, the end of the touch interaction is not detected. The value of this duration depends on the lexicon selected and is generally less than or equal to 10 seconds.
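The time-limit rule above can be sketched as a simple clamped function. The per-entry budget and base duration below are arbitrary assumptions for illustration; only the 10-second ceiling comes from the description:

```python
def listening_window(lexicon_size, seconds_per_entry=0.5, base=2.0, cap=10.0):
    """Return the duration (in seconds) during which the recognizer stays active.

    The duration grows with the size of the selected lexicon but is capped
    so that recognition never remains active indefinitely.
    """
    return min(base + seconds_per_entry * lexicon_size, cap)

print(listening_window(3))    # small lexicon: short window (3.5 s)
print(listening_window(100))  # large lexicon: clamped to the 10 s ceiling
```

Any monotonically increasing, bounded function of the lexicon size would satisfy the same "proportional but capped" behavior claimed in the description.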
For non-critical commands, the computing unit 14 includes a number of databases called "Lexicons" 140. Each lexicon contains the words or names corresponding to the options of a particular command. Thus, the "Frequency" command lexicon only includes names of frequency codes or frequency values. FIG. 1 comprises, by way of non-limiting example, three bases called "Lexicon 1", "Lexicon 2" and "Lexicon 3". The electronic computing unit 14 performs the following specific tasks:
- "Arbiter" 141. This term covers a dual function. The first function 142 consists in activating/deactivating the voice recognition; it is symbolized by an on/off switch in FIG. 1. The purpose of this function is to activate voice recognition only when the user selects a command via the touchscreen, that is to say, at the right time. The second function 143 consists in specifying the correct lexicon of commands or words to be recognized according to the command. Clearly, the selected lexicon, which corresponds to Lexicon 1 in FIG. 1, corresponds to the options of a single command. This function 143 is symbolized by a multiple-choice switch in FIG. 1;
- "Controller" 144. This function verifies that the result of the voice command conforms to the lexicon selected by the arbiter 141 and transmits it to the graphical interface computer 12 if this conformity is acquired. The graphical interface then displays a request for confirmation or validation to the pilot.
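As a rough illustration of functions 141 to 144 (class and function names are hypothetical, not from the patent), the touch event both switches recognition on and selects the lexicon, and the conformity check forwards only results that belong to that lexicon:

```python
class Arbiter:
    """Sketch of the dual function 141: switch recognition on/off
    (function 142) and select the lexicon of the touched command
    (function 143)."""

    def __init__(self, lexicons):
        self.lexicons = lexicons      # the "Lexicon" databases 140
        self.active_lexicon = None

    def on_touch(self, command_name):
        # Function 143: pick the lexicon of the designated command.
        self.active_lexicon = self.lexicons.get(command_name)
        # Function 142: recognition is activated only if a lexicon exists.
        return self.active_lexicon is not None


def controller(decoded_word, lexicon):
    """Sketch of function 144: pass the result onward only if it
    conforms to the lexicon selected by the arbiter."""
    return decoded_word if decoded_word in lexicon else None


arbiter = Arbiter({"Frequency": ["118.700", "121.500", "127.350"]})
print(arbiter.on_touch("Frequency"))                   # recognition switched on
print(controller("121.500", arbiter.active_lexicon))   # conforming result forwarded
print(controller("banana", arbiter.active_lexicon))    # non-conforming: rejected
```

In the real device the forwarded result would go to the graphical interface computer 12 for display and pilot confirmation; here it is simply returned.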
As has been said, there are two types of commands, called critical and non-critical.
As a first example, to illustrate the operation of the human-machine interface according to the invention in the case of a critical command, it is assumed that a fire has broken out in the left engine and the pilot wishes to shut down this engine.
While pressing, on the touch interface, a displayed virtual button allowing the left engine to be stopped, the pilot must simultaneously pronounce "stop of the left engine" while continuing to press the stop button of the left engine. The action is validated by the system only if the phrase "stop of the left engine" is recognized by the speech recognition unit.
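This critical-command validation amounts to a redundancy check between the held touch press and the decoded phrase. A minimal sketch (the function name and case/whitespace normalization are illustrative assumptions):

```python
def validate_critical(button_held, decoded_phrase, expected_phrase):
    """A critical action is executed only if the touch press is maintained
    AND the decoded speech matches the activated command (the two
    channels are redundant)."""
    normalize = lambda s: " ".join(s.lower().split())
    return button_held and normalize(decoded_phrase) == normalize(expected_phrase)

print(validate_critical(True, "Stop of the Left Engine", "stop of the left engine"))   # True
print(validate_critical(True, "start the left engine", "stop of the left engine"))     # False
print(validate_critical(False, "stop of the left engine", "stop of the left engine"))  # False
```

Requiring both channels to agree means neither an accidental touch nor a stray utterance alone can trigger the engine shutdown.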
As a second example, to illustrate the operation of the human-machine interface according to the invention in the case of a non-critical command, it is assumed that the graphical interface displays a radio frequency and that the pilot wishes to modify this frequency.
On a cockpit display screen, the current value of said radio frequency for "VHF" communications is displayed. A touch on the touchscreen by the pilot on the position of the representation of this frequency triggers the voice recognition for a determined duration, as well as the selection of the lexicon for recognizing radio frequencies. This lexicon includes, for example, a set of particular values. Since the pilot has designated a frequency, he can naturally pronounce a new value for it, speech recognition analyzing it according to the lexicon restricted to possible frequencies. If the recognized word is part of the lexicon, then the controller 144 provides a text value that is displayed near the current value. The pilot can validate the new value or not by a second touch interaction. Validation can also be automatic when the new choice has no serious consequences.
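The frequency-change flow can be sketched as follows: spoken words are converted to a text value, and only values belonging to the restricted frequency lexicon are proposed for display near the current value. The digit table and function names are hypothetical; a real recognizer would output such a transcription directly:

```python
# Hypothetical mapping from spoken words to the characters of a frequency.
DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
          "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
          "decimal": ".", "point": "."}

def words_to_frequency(spoken):
    """Turn naturally spoken digits ('one two one decimal five') into text."""
    return "".join(DIGITS.get(word, "") for word in spoken.lower().split())

def propose_frequency(spoken, lexicon):
    """Return a value to display near the current one, but only if it
    belongs to the lexicon restricted to possible frequencies."""
    value = words_to_frequency(spoken)
    return value if value in lexicon else None

print(propose_frequency("one two one decimal five", {"118.7", "121.5", "127.35"}))  # 121.5
print(propose_frequency("nine nine nine", {"118.7", "121.5"}))  # None: not a valid frequency
```

The lexicon check is what gives the device its security margin: a misrecognized utterance that maps to no valid frequency is simply never shown to the pilot.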
The advantages of this man-machine interface are as follows.
The first advantage is the security of the device for both critical and non-critical commands. Security is an essential point of interfaces for aeronautical applications. First, speech recognition is restricted to a particular context, the recognition of a frequency in the previous example, which makes it possible to guarantee higher security than devices operating blind. In addition, the tactile information and the voice recognition are redundant. Finally, insofar as the time during which speech recognition is active is limited, spurious recognitions are avoided and the result of the voice command can be checked against the possible values.
The second advantage is the greater option capacity of the device. The combination of tactile and voice recognition allows the recognition of more commands while securing the use of voice recognition. Indeed, instead of a single lexicon of words to recognize, speech recognition is based on a plurality of lexicons. Each of these lexicons is small, but together they allow a large number of command options.
The third advantage is the great ergonomics of the device. Indeed, designating the object to be modified lets the pilot naturally know the nature of the voice command to issue, and thus decreases the learning required by the voice command. Moreover, the selection of the correct lexicon, as well as the speech recognition, are naturally triggered by a tactile interaction on an element of the man-machine interface of the cockpit. This device thus allows natural and efficient interaction between the pilot and the onboard system, since touch is used to designate the parameter to be modified and voice to give the new value.
The fourth advantage is the removal of the physical push-to-talk switch ("alternat"), that is to say a means of starting and stopping voice recognition, most often a mechanical control button. In the device according to the invention, switching on and off is done intelligently, only when voice recognition must be solicited.
Claims:
Claims (5)
1. A method of using a man-machine interface device for an aircraft comprising at least one speech recognition unit (13), a display device (10) with a tactile interface (11), a graphical interface computer (12) and an electronic computing unit (14), the assembly being arranged to graphically present a plurality of commands, each command being classified into at least a first so-called critical category or a second so-called non-critical category, each non-critical command having a plurality of options, each option having a name, said names being assembled in a database (140) called a "lexicon", characterized in that:
- when the command is critical, the method of use comprises the following steps:
o Recognition of the critical command activated by a user by means of the tactile interface;
o Activation of the speech recognition unit according to said command;
o Comparison between the speech decoded by the speech recognition unit and the activated command;
o Validation of the activated command if the decoded speech corresponds to the activated command;
- when the command is non-critical, the method of use comprises the following steps:
o Recognition of the non-critical command activated by a user by means of the tactile interface;
o Activation of the speech recognition unit according to said command;
o Comparison between the speech decoded by the speech recognition unit and the names of the lexicon associated with the activated command;
o Selection of the name of the lexicon best corresponding to the decoded speech;
o Display of the option corresponding to the name of the lexicon.
2. A method of using a human-machine interface device according to claim 1, characterized in that, when the command is non-critical, the option corresponding to the name of the lexicon is automatically implemented.
3. A method of using a human-machine interface device according to claim 1, characterized in that the activation function of the speech recognition unit is active only for a limited time starting from the instant of recognition of the command activated by a user using the touch interface.
4. A method of using a human-machine interface device according to claim 3, characterized in that this duration is proportional to the size of the lexicon.
5. A method of using a human-machine interface device according to one of claims 3 or 4, characterized in that this duration is less than or equal to 10 seconds.
Similar technologies:
Publication number | Publication date | Patent title
US10304448B2|2019-05-28|Environmentally aware dialog policies and response generation
US10909331B2|2021-02-02|Implicit identification of translation payload with neural machine translation
US10186254B2|2019-01-22|Context-based endpoint detection
US9575720B2|2017-02-21|Visual confirmation for a recognized voice-initiated action
EP3173924A1|2017-05-31|Method for using an aircraft human-machine interface device having a speech recognition unit
KR20170065563A|2017-06-13|Eye glaze for spoken language understanding in multi-modal conversational interactions
JP6457715B2|2019-01-23|Surface visible objects off screen
KR20190082294A|2019-07-09|Modality learning in mobile devices
US9524142B2|2016-12-20|System and method for providing, gesture control of audio information
JP2020502719A|2020-01-23|Hands-free navigation of touch operation system
US20160239259A1|2016-08-18|Learning intended user actions
US20170116184A1|2017-04-27|Dynamic user interface locale switching system
US9898654B2|2018-02-20|Translating procedural documentation into contextual visual and auditory guidance
FR3052889B1|2019-11-08|VIBRATION-INDUCED ERROR CORRECTION FOR TOUCH SCREEN DISPLAY IN AN AIRCRAFT
US10540451B2|2020-01-21|Assisted language learning
US20200302923A1|2020-09-24|Mitigation of client device latency in rendering of remotely generated automated assistant content
KR20180121254A|2018-11-07|Electronic device for ouputting graphical indication
US20140359433A1|2014-12-04|Text selection paragraph snapping
KR20180055638A|2018-05-25|Electronic device and method for controlling electronic device using speech recognition
US9983695B2|2018-05-29|Apparatus, method, and program product for setting a cursor position
US11157167B2|2021-10-26|Systems and methods for operating a mobile application using a communication tool
KR102011036B1|2019-08-14|Method and system for voice control of notifications
US20210082412A1|2021-03-18|Real-time feedback for efficient dialog processing
WO2020263411A1|2020-12-30|System and method for cooperative text recommendation acceptance in a user interface
EP3807748A1|2021-04-21|Customizing user interfaces of binary applications
Patent family:
Publication number | Publication date
EP3173924A1|2017-05-31|
FR3044436B1|2017-12-01|
CN106814909A|2017-06-09|
US20170154627A1|2017-06-01|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
FR2913279A1|2007-03-01|2008-09-05|Airbus France Sa|Flight information e.g. radio channel frequency, input assisting system for aircraft, has voice recognition device detecting flight information emitted by crew, and transcription and analyzing unit transcripting and analyzing information|
WO2010144732A2|2009-06-10|2010-12-16|Microsoft Corporation|Touch anywhere to speak|
EP2849169A1|2013-09-17|2015-03-18|Honeywell International Inc.|Messaging and data entry validation system and method for aircraft|
US6438519B1|2000-05-31|2002-08-20|Motorola, Inc.|Apparatus and method for rejecting out-of-class inputs for pattern classification|
US10201906B2|2008-06-24|2019-02-12|Richard J. Greenleaf|Flexible grip die-alignment arrangement|
DE112012006165T5|2012-03-30|2015-01-08|Intel Corporation|Touchscreen user interface with voice input|
WO2014109104A1|2013-01-08|2014-07-17|クラリオン株式会社|Voice recognition device, voice recognition program, and voice recognition method|
CN108594998A|2018-04-19|2018-09-28|深圳市瀚思通汽车电子有限公司|A kind of onboard navigation system and its gesture operation method|
FR3099749B1|2019-08-06|2021-08-27|Thales Sa|CONTROL PROCESS OF A SET OF AVIONIC SYSTEMS, COMPUTER PROGRAM PRODUCT AND ASSOCIATED SYSTEM|
CN110706691B|2019-10-12|2021-02-09|出门问问信息科技有限公司|Voice verification method and device, electronic equipment and computer readable storage medium|
Legal status:
2016-10-28| PLFP| Fee payment|Year of fee payment: 2 |
2017-06-02| PLSC| Publication of the preliminary search report|Effective date: 20170602 |
2017-10-26| PLFP| Fee payment|Year of fee payment: 3 |
2018-10-26| PLFP| Fee payment|Year of fee payment: 4 |
2019-10-29| PLFP| Fee payment|Year of fee payment: 5 |
2020-10-26| PLFP| Fee payment|Year of fee payment: 6 |
2021-11-09| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Patent title
FR1502480A| FR3044436B1|2015-11-27|2015-11-27|METHOD FOR USING A MAN-MACHINE INTERFACE DEVICE FOR AN AIRCRAFT HAVING A SPEECH RECOGNITION UNIT|
EP16199845.5A| EP3173924A1|2015-11-27|2016-11-21|Method for using an aircraft human-machine interface device having a speech recognition unit|
US15/360,888| US20170154627A1|2015-11-27|2016-11-23|Method for using a human-machine interface device for an aircraft comprising a speech recognition unit|
CN201611233206.4A| CN106814909A|2015-11-27|2016-11-25|Use the method for the human-computer interface device for aircraft including voice recognition unit|