Method and apparatus for controlling device
Abstract:
A method and apparatus for controlling a device are disclosed. The method includes: performing (S11) voice recognition on a received sound signal to obtain a voice recognition result; determining (S12) keywords using the voice recognition result; determining (S13) a target intelligent device having attribute information matched with the keywords from intelligent devices, where relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information; and controlling (S14) the target intelligent device to perform an operation indicated by the voice recognition result.
Publication number: EP3690877A1
Application number: EP19200596.5
Filing date: 2019-09-30
Publication date: 2020-08-05
Inventor: Fuxin LI
Applicant: Beijing Xiaomi Intelligent Technology Co Ltd
IPC main class: G10L15-00
Description:
[0001] The disclosure relates to the field of computer technology, and more particularly to a method and an apparatus for controlling a device. BACKGROUND [0002] Voice interaction frees the user's hands from manual operation. For example, the user can turn an intelligent lamp in the house on or off by voice control without operating it manually. Therefore, voice interaction is favored by more and more users, and users also expect a high intelligence level of the voice interaction. SUMMARY [0003] In order to overcome the problems in the related art, a method and an apparatus for controlling a device are provided in the disclosure. [0004] According to a first aspect of the embodiments of the disclosure, a method for controlling a device is provided, which includes the following operations. [0005] Voice recognition is performed on a received sound signal to obtain a voice recognition result. [0006] Keywords are determined using the voice recognition result. [0007] A target intelligent device having attribute information matched with the keywords is determined from intelligent devices. [0008] Relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information. [0009] The target intelligent device is controlled to perform an operation indicated by the voice recognition result. [0010] According to the embodiments of the disclosure, a target intelligent device to be controlled can be determined according to the keywords determined based on the sound signal and the relationships between the intelligent devices and the attribute information of the intelligent devices, and the target intelligent device is controlled to perform the operation indicated by the voice recognition result, thereby improving the intelligence level of the voice interaction. [0011] According to an exemplary embodiment, relationships between the intelligent devices and basic information of the intelligent devices may be constructed in advance. The basic information may include one or more of a name, an identifier, a type, a position and a characteristic of the intelligent device. The keywords may include a first keyword for characterizing an intention and a second keyword for characterizing basic information. [0012] The operation that the target intelligent device having attribute information matched with the keywords is determined from intelligent devices may include the following operations. [0013] The first keyword for characterizing the intention may be matched with attribute information corresponding to each of the intelligent devices. [0014] In response to attribute information of at least two intelligent devices being matched with the first keyword for characterizing the intention, the second keyword for characterizing basic information may be matched with the basic information of the at least two intelligent devices, and an intelligent device having the basic information matched with the second keyword for characterizing basic information may be determined as the target intelligent device. [0015] In response to attribute information of one intelligent device being matched with the first keyword for characterizing the intention, the second keyword for characterizing basic information may be matched with the basic information of the one intelligent device.
In response to the second keyword for characterizing basic information being matched with the basic information of the one intelligent device successfully, the one intelligent device may be determined as the target intelligent device. [0016] According to an exemplary embodiment, the operation that the target intelligent device having attribute information matched with the keywords is determined from the intelligent devices includes the following operations. [0017] Candidate intelligent devices having attribute information matched with the keywords may be determined from the intelligent devices. [0018] In response to there being an intelligent device out of the candidate intelligent devices which has performed an operation in a preset reference time period, the intelligent device may be determined as the target intelligent device. [0019] According to an exemplary embodiment, the method may further include the following operations. [0020] Historical control data may be acquired. The historical control data may include relationships between control instructions determined historically and intelligent devices operating in response to the control instructions. The control instruction may be any one of the voice recognition result and at least one keyword included in the voice recognition result. [0021] For each of the control instructions, an intelligent device with the highest probability that the intelligent device operates in response to the control instruction may be determined according to the historical control data. [0022] The operation that the target intelligent device having attribute information matched with the keywords is determined from the intelligent devices may include the following operations. [0023] An intelligent device having the highest probability that the intelligent device operates in response to the control instruction that corresponds to the voice recognition result may be determined. [0024] The determined intelligent device may be determined as the target intelligent device. [0025] According to an exemplary embodiment, the operation that the target intelligent device having attribute information matched with the keywords is determined from the intelligent devices may include the following operations. [0026] Candidate intelligent devices having attribute information matched with the keywords may be determined from the intelligent devices. [0027] For each of the candidate intelligent devices, a weight of a keyword matched with the attribute information of the candidate intelligent device may be determined. [0028] The candidate intelligent device having the attribute information matched with the keyword having the largest weight may be determined as the target intelligent device. [0029] According to an exemplary embodiment, the operation that the target intelligent device having attribute information matched with the keywords is determined from the intelligent devices may include the following operations. [0030] Candidate intelligent devices having attribute information matched with the keywords may be determined from the intelligent devices. [0031] For each of the candidate intelligent devices, the number of keywords matched with the attribute information of the candidate intelligent device may be determined. [0032] The candidate intelligent device having attribute information matched with the greatest number of keywords may be determined as the target intelligent device. 
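As an illustration of the keyword-count strategy just outlined, the following is a minimal Python sketch; it is not part of the patent, and the device names, attribute sets and helper function are assumptions introduced only for this example.

```python
# Illustrative sketch only (not from the patent): choose the candidate device
# whose attribute information matches the greatest number of keywords.
# All names below (pick_by_keyword_count, the device dict) are hypothetical.

def pick_by_keyword_count(keywords, device_attributes):
    """Return the device whose attribute information matches the most keywords."""
    best_device, best_count = None, 0
    for device, attributes in device_attributes.items():
        count = sum(1 for keyword in keywords if keyword in attributes)
        if count > best_count:
            best_device, best_count = device, count
    return best_device

devices = {
    "sweeping robot": {"sweep", "speed adjustment", "battery level query"},
    "air purifier": {"pm2.5 query", "humidity query", "purification mode"},
}
# "sweep" and "speed adjustment" both match attributes of the sweeping robot.
print(pick_by_keyword_count(["sweep", "speed adjustment"], devices))  # sweeping robot
```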
[0033] According to an exemplary embodiment, the keywords may include a first keyword for characterizing an intention and a third keyword for characterizing an application range. [0034] The operation that the target intelligent device having attribute information matched with the keywords is determined from the intelligent devices may include the following operation. [0035] A target intelligent device having attribute information matched with the first keyword for characterizing the intention may be determined from the intelligent devices. [0036] The operation that the target intelligent device is controlled to perform the operation indicated by the voice recognition result may include the following operation. [0037] The target intelligent device may be controlled to perform an operation indicated by the voice recognition result within the application range. [0038] According to a second aspect of the embodiments of the disclosure, an apparatus for controlling a device is provided, which includes a recognition result acquiring module, a keyword determining module, a first determining module and a control module. [0039] The recognition result acquiring module is configured to perform voice recognition on a received sound signal to obtain a voice recognition result. [0040] The keyword determining module is configured to determine keywords using the voice recognition result. [0041] The first determining module is configured to determine a target intelligent device having attribute information matched with the keywords from intelligent devices. [0042] Relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information. [0043] The control module is configured to control the target intelligent device to perform an operation indicated by the voice recognition result. [0044] The advantages and technical effects of the apparatus according to the disclosure correspond to those of the method presented above. [0045] According to an exemplary embodiment, relationships between the intelligent devices and basic information of the intelligent devices may be constructed in advance. The basic information may include one or more of a name, an identifier, a type, a position and a characteristic of the intelligent device, and the keywords may include a first keyword for characterizing an intention and a second keyword for characterizing basic information. [0046] The first determining module may include a first information matching sub-module, a first determining sub-module and a second determining sub-module. [0047] The first information matching sub-module may be configured to match the first keyword for characterizing the intention with attribute information corresponding to each of the intelligent devices. 
[0048] The first determining sub-module may be configured to, in response to the attribute information of at least two intelligent devices being matched successfully with the first keyword for characterizing the intention, match the second keyword for characterizing basic information with the basic information of each of the at least two intelligent devices, and determine the intelligent device having the basic information matched successfully with the second keyword for characterizing basic information as the target intelligent device. [0049] The second determining sub-module may be configured to, in response to the attribute information of one intelligent device being matched successfully with the first keyword for characterizing the intention, match the second keyword for characterizing basic information with the basic information of the one intelligent device, and determine the one intelligent device as the target intelligent device in response to the second keyword for characterizing basic information being matched successfully with the basic information of the one intelligent device. [0050] According to an exemplary embodiment, the first determining module may include a third determining sub-module and a fourth determining sub-module. [0051] The third determining sub-module may be configured to determine candidate intelligent devices having attribute information matched with the keywords from the intelligent devices. [0052] The fourth determining sub-module may be configured to, in response to there being an intelligent device out of the candidate intelligent devices which has performed an operation in a preset reference time period, determine the intelligent device as the target intelligent device. [0053] According to an exemplary embodiment, the apparatus may further include a data acquiring module and a second determining module. [0054] The data acquiring module may be configured to acquire historical control data. The historical control data includes relationships between control instructions determined historically and intelligent devices operating in response to the control instructions, and the control instruction is in any one of the following forms: the voice recognition result, or at least one keyword included in the voice recognition result. [0055] The second determining module may be configured to, for each of the control instructions, determine, based on the historical control data, an intelligent device having the highest probability that the intelligent device operates in response to the control instruction. [0056] The first determining module may include a fifth determining sub-module and a sixth determining sub-module. [0057] The fifth determining sub-module may be configured to determine the intelligent device having the highest probability that the intelligent device operates in response to the control instruction that corresponds to the voice recognition result. [0058] The sixth determining sub-module may be configured to determine the determined intelligent device as the target intelligent device. [0059] According to an exemplary embodiment, the first determining module may include a seventh determining sub-module, an eighth determining sub-module and a ninth determining sub-module. [0060] The seventh determining sub-module may be configured to determine candidate intelligent devices having attribute information matched with the keywords from the intelligent devices.
[0061] The eighth determining sub-module may be configured to, for each of the candidate intelligent devices, determine a weight of a keyword matched with the attribute information of the candidate intelligent device. [0062] The ninth determining sub-module may be configured to determine the candidate intelligent device having attribute information matched with the keyword having the largest weight as the target intelligent device. [0063] According to an exemplary embodiment, the first determining module may include a tenth determining sub-module, an eleventh determining sub-module and a twelfth determining sub-module. [0064] The tenth determining sub-module may be configured to determine candidate intelligent devices having attribute information matched with the keywords from the intelligent devices. [0065] The eleventh determining sub-module may be configured to, for each of the candidate intelligent devices, determine the number of keywords matched with the attribute information of the candidate intelligent device. [0066] The twelfth determining sub-module may be configured to determine the candidate intelligent device having attribute information matched with the greatest number of keywords as the target intelligent device. [0067] According to an exemplary embodiment, the keywords may include a first keyword for characterizing an intention and a third keyword for characterizing an application range. [0068] The first determining module may include a thirteenth determining sub-module. [0069] The thirteenth determining sub-module may be configured to determine a target intelligent device having attribute information matched with the first keyword for characterizing the intention from the intelligent devices. [0070] The control module may include a control sub-module. [0071] The control sub-module may be configured to control the target intelligent device to perform an operation indicated by the voice recognition result within the application range. [0072] According to a third aspect of the embodiments of the disclosure, an apparatus for controlling a device is provided, which includes: a processor; and a memory configured to store processor-executable instructions. [0073] The processor is configured to: perform voice recognition on a received sound signal to obtain a voice recognition result; determine one or more keywords using the voice recognition result; determine a target intelligent device having attribute information matched with the one or more keywords from intelligent devices, where relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information; and control the target intelligent device to perform an operation indicated by the voice recognition result. [0074] In one particular embodiment, the steps of the method for controlling a device are determined by computer program instructions. [0075] Consequently, according to a fourth aspect, the disclosure is also directed to a computer program for executing the steps of a method for controlling a device as described above when this program is performed by a computer. [0076] This program can use any programming language and take the form of source code, object code or a code intermediate between source code and object code, such as a partially compiled form, or any other desirable form.
[0077] The disclosure is also directed to a computer-readable information medium containing instructions of a computer program as described above. [0078] The information medium can be any entity or device capable of storing the program. For example, the medium can include storage means such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or magnetic storage means, for example a diskette (floppy disk) or a hard disk. [0079] Alternatively, the information medium can be an integrated circuit in which the program is incorporated, the circuit being adapted to perform the method in question or to be used in its execution. [0080] It should be understood that the above general descriptions and the following detailed descriptions are only exemplary and explanatory, and are not intended to limit the disclosure. The scope of the invention is defined by the claims. BRIEF DESCRIPTION OF DRAWINGS [0081] The accompanying drawings, as a part of the specification, are incorporated into the specification, and are used to illustrate the embodiments conforming to the disclosure, and interpret the principle of the disclosure in conjunction with the specification.
Fig. 1 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 2 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 3 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 4 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 5 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 6 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 7 is a flow chart of a method for controlling a device according to an exemplary embodiment.
Fig. 8 is a schematic diagram showing an application scenario of a method for controlling a device according to an exemplary embodiment.
Fig. 9 is a block diagram of an apparatus for controlling a device according to an exemplary embodiment.
Fig. 10 is a block diagram of an apparatus for controlling a device according to an exemplary embodiment.
Fig. 11 is a block diagram of an apparatus for controlling a device according to an exemplary embodiment.
DETAILED DESCRIPTION [0082] Exemplary embodiments are illustrated in detail here, and examples of the exemplary embodiments are illustrated in the accompanying drawings. In a case that the following description is given with reference to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations described in the following exemplary embodiments do not represent all implementations conforming to the disclosure. Instead, the implementations are merely examples of apparatuses and methods conforming to some aspects of the invention as defined in the appended claims. [0083] Fig. 1 is a flow chart of a method for controlling a device according to an exemplary embodiment. As shown in Fig. 1, the method may be applied to a control device, such as a mobile phone, a tablet computer, an intelligent loudspeaker box, or another control device capable of controlling an intelligent device, which is not limited in the disclosure. A method for controlling a device according to an embodiment of the disclosure includes the following S11 to S14.
[0084] At S11, voice recognition is performed on a received sound signal to obtain a voice recognition result. [0085] At S12, keywords are determined using the voice recognition result. [0086] At S13, a target intelligent device having attribute information matched with the keywords is determined from intelligent devices. [0087] Relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance. The attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information. [0088] At S14, the target intelligent device is controlled to perform an operation indicated by the voice recognition result. [0089] According to the embodiment of the disclosure, the target intelligent device having the attribute information matched with the keywords is determined from the intelligent devices when the sound signal is received, and the target intelligent device is controlled to perform the operation indicated by the voice recognition result, thereby improving the intelligence level of the voice interaction. [0090] The intelligent device may be an intelligent device which can be controlled by a control device, and may also be the control device itself (that is, if a control instruction is directed at the control device, the control device may execute the control instruction). For example, the intelligent device may include a variety of intelligent devices which are authorized to be controlled by the control device. In an exemplary application scenario, a user may authorize a control device (such as a mobile phone or an intelligent loudspeaker box) thereof to control multiple intelligent devices. For example, the user authorizes the intelligent loudspeaker box thereof to control a sweeping robot, an air purifier and a bedside lamp. In this application example, relationships between intelligent devices and attribute information of the intelligent devices are constructed in advance. The attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information. The attribute information may be operable attribute information of the intelligent device. That is, the operations provided by the intelligent device all may be taken as the attribute information of the intelligent device. The operation provided by the intelligent device may include an operation corresponding to a function and/or a property provided by the intelligent device (such as a sweeping operation provided by a sweeper), and may also include an operation of the intelligent device for guaranteeing its normal operation (such as a charging operation provided by the sweeper). [0091] Further, the constructed relationships, as a knowledge base, may be stored at the control device (such as a mobile phone or a notebook computer), and may also be stored at a server. The control device may acquire information on the relationships from the server corresponding to the control device when requiring the relationships. [0092] For example, relationships between sweeping robots and attribute information of the sweeping robots are constructed in advance by the intelligent loudspeaker box of the user, and the attribute information (i.e., the provided operations) of the sweeping robot may include, but is not limited to, sweeping, speed adjustment, battery level query and the like.
Relationships between the air purifiers and attribute information of the air purifiers are constructed in advance by the intelligent loudspeaker box of the user, and the attribute information (i.e., the provided operations) of the air purifier may include, but is not limited to, air pm2.5 query, air humidity query, purification mode switching and the like. Relationships between the bedside lamps and attribute information of the bedside lamps are constructed in advance by the intelligent loudspeaker box of the user, and the attribute information (i.e., the provided operations) of the bedside lamp may include, but is not limited to, a turn-on operation, a turn-off operation, luminance adjustment and the like. [0093] It should be noted that the same piece of attribute information of the intelligent device may be expressed in multiple expression manners, and the multiple expression manners are used to express the same piece of attribute information. Taking the sweeping robot as an example, sweeping, cleaning, clearing and the like may all belong to the attribute information of sweeping. How expressions in multiple manners are parsed into the corresponding attribute information is not limited in this embodiment of the disclosure. [0094] In this application example, the user may control one or more intelligent devices by uttering a voice. For example, a voice control instruction uttered by the user is "sweep the bedroom". Upon the reception of a sound signal, the intelligent loudspeaker box of the user may perform voice recognition on the sound signal to obtain a voice recognition result. The intelligent loudspeaker box of the user may determine keywords (sweep and bedroom) using the voice recognition result. The intelligent loudspeaker box of the user may determine, according to the keywords (sweep and bedroom), a target intelligent device having attribute information matched with the two keywords (sweep and bedroom) from the sweeping robot, the air purifier and the bedside lamp. In the relationships between intelligent devices and attribute information of the intelligent devices constructed in advance by the intelligent loudspeaker box of the user, the attribute information of the sweeping robot includes "sweep", and the attribute information of the other intelligent devices does not include "sweep". The intelligent loudspeaker box of the user may therefore determine that, among the sweeping robot, the air purifier and the bedside lamp controlled by the intelligent loudspeaker box, the to-be-controlled target intelligent device is the sweeping robot. The intelligent loudspeaker box of the user may control the sweeping robot to perform the operation indicated by the voice recognition result, for example, the intelligent loudspeaker box of the user controls the sweeping robot to sweep the bedroom. [0095] The control device may directly send a control instruction to the target intelligent device, to control the target intelligent device to perform the operation indicated by the voice recognition result. Alternatively, the control device may send a control instruction to a server, and the server sends the control instruction to the target intelligent device, to control the target intelligent device to perform the operation indicated by the voice recognition result.
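To make the flow just described concrete, here is a minimal sketch of the overall S11 to S14 pipeline, assuming a pre-built mapping from devices to attribute information. The speech-recognition step is stubbed out, and every name in the sketch (the knowledge-base dict, recognize_speech, extract_keywords, find_target_device, control) is a hypothetical placeholder, not part of the patent.

```python
# Hypothetical end-to-end sketch of S11-S14. recognize_speech() stands in for
# an ASR engine; ATTRIBUTE_KNOWLEDGE_BASE plays the role of the pre-constructed
# relationships between devices and their attribute information.

ATTRIBUTE_KNOWLEDGE_BASE = {
    "sweeping robot": {"sweep", "speed adjustment", "battery level query"},
    "air purifier": {"pm2.5 query", "humidity query", "purification mode"},
    "bedside lamp": {"turn on", "turn off", "luminance adjustment"},
}

def recognize_speech(sound_signal):          # S11 (stub for an ASR service)
    return "sweep the bedroom"

def extract_keywords(text):                  # S12 (naive word segmentation)
    return text.split()

def find_target_device(keywords):            # S13 (attribute matching)
    for device, attributes in ATTRIBUTE_KNOWLEDGE_BASE.items():
        if any(keyword in attributes for keyword in keywords):
            return device
    return None

def control(device, text):                   # S14 (send the control instruction)
    print(f"instructing {device}: {text}")

text = recognize_speech(b"...")
target = find_target_device(extract_keywords(text))
if target is not None:
    control(target, text)                    # -> instructing sweeping robot: sweep the bedroom
```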
[0096] In this way, even when the voice control instruction sent by the user does not explicitly name the device that is to execute it and the meaning of the control instruction is unclear, the control device can accurately recognize the control intention of the user, thereby improving the intelligence level of the voice interaction. [0097] In a possible implementation, the control device may be a mobile phone, a tablet computer, an intelligent loudspeaker box and the like. For convenience of explanation, the mobile phone is taken as an example of the control device below. [0098] As shown in Fig. 1, in S11, voice recognition is performed on a received sound signal to obtain a voice recognition result. [0099] For example, a user sends a voice control instruction, and a mobile phone of the user may perform voice recognition on the received sound signal to obtain a voice recognition result. For example, the voice recognition is performed on the received sound signal with automatic speech recognition (ASR) technology. For example, a text result corresponding to the sound signal is acquired. A manner of performing the voice recognition on the received sound signal to obtain the voice recognition result, a form of the voice recognition result and the like are not limited in the disclosure. [0100] As shown in Fig. 1, in S12, keywords are determined using the voice recognition result. [0101] For example, the mobile phone of the user may determine the keywords using the voice recognition result. For example, word segmentation processing may be performed on the voice recognition result (for example, the text result) to obtain the keywords using the voice recognition result. For example, the voice recognition result is "how is the indoor air quality". The mobile phone of the user may perform word segmentation on the voice recognition result. For example, the mobile phone may segment the voice recognition result into the three words "indoor", "air quality" and "how", and determine the three words as the keywords using the voice recognition result. The manner for determining the keywords using the voice recognition result, the number of keywords, the form and the type of the keywords and the like are not limited in the disclosure. [0102] In a possible implementation, the keywords may include one or more of a first keyword for characterizing an intention, a third keyword for characterizing an application range and a second keyword for characterizing basic information. [0103] For example, a voice control instruction sent by the user may include a word relevant to an operation intention, and the mobile phone of the user may determine and acquire the first keyword for characterizing the intention. For example, the voice control instruction is the instruction "sweep the bedroom". The keyword "sweep" is the first keyword for characterizing the intention. The voice control instruction sent by the user may include a word relevant to an application range of an operation, i.e., an execution range of the control instruction. The mobile phone of the user may determine and acquire the third keyword for characterizing an application range. For example, the voice control instruction is the instruction "sweep the bedroom". The keyword "bedroom" is the third keyword for characterizing an application range. [0104] In a possible implementation, the basic information of the intelligent device may include one or more of a name, an identifier, a type, a position, a characteristic and the like of the intelligent device.
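The classification of keywords into intention, basic-information and application-range keywords described above could be sketched as follows; the keyword lists and function name are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: tagging segmented words as intention keywords (first),
# basic-information keywords (second) or application-range keywords (third).

INTENTION_WORDS = {"sweep", "turn on", "turn off", "adjust to higher luminance"}
BASIC_INFO_WORDS = {"sweeping robot", "bedside lamp", "air purifier"}
RANGE_WORDS = {"bedroom", "living room", "kitchen"}

def classify_keywords(words):
    tagged = {"intention": [], "basic_info": [], "range": []}
    for word in words:
        if word in INTENTION_WORDS:
            tagged["intention"].append(word)
        elif word in BASIC_INFO_WORDS:
            tagged["basic_info"].append(word)
        elif word in RANGE_WORDS:
            tagged["range"].append(word)
    return tagged

# "the sweeping robot sweeps the bedroom" -> intention: sweep,
# basic information: sweeping robot, application range: bedroom
print(classify_keywords(["sweeping robot", "sweep", "bedroom"]))
```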
[0105] The intelligent device may be any device capable of being authorized to be controlled by the control device, and for example may include but be not limited to at least one of a sweeping robot, an air purifier, a lamp, an intelligent switch, an intelligent curtain, a washing machine, a television, an air conditioner, an intelligent socket and a loudspeaker box, which is not limited in the disclosure. [0106] The name of the intelligent device may be a generic name of the intelligent device, such as the sweeping robot, and may also be a name such as a nickname set by the user for the intelligent device. The identifier of the intelligent device may be identification information of the intelligent device, which may be set by the user for the intelligent device or may be the device's own identification information. The type of the intelligent device may be a type to which the intelligent device belongs. For example, the sweeping robot may be of a robot type and the like. The position of the intelligent device may be a position where the intelligent device is placed, which may be a relative position such as a bedroom and may also be an absolute position such as longitude and latitude. The characteristic of the intelligent device may be characteristic information of the intelligent device, which may be set by the user, such as a color, a shape and the like. [0107] For example, the voice control instruction sent by the user may include basic information of the intelligent device, and the mobile phone of the user may determine and acquire the second keyword for characterizing basic information. For example, the voice control instruction is the instruction "the sweeping robot sweeps the bedroom". The keyword "sweeping robot" may be the second keyword for characterizing basic information. The form and content of the basic information, the type of the keyword and the like are not limited in the disclosure. [0108] As shown in Fig. 1, in S13, a target intelligent device having attribute information matched with the keywords is determined from intelligent devices. [0109] Relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information. [0110] For example, the intelligent device may provide an operation, and the attribute information of the intelligent device may characterize the operation provided by the intelligent device. For example, the operation provided by the sweeping robot may include sweeping, and the attribute information of the sweeping robot may include "sweeping". The relationships between the intelligent devices and the attribute information of the intelligent devices are constructed in advance. For example, the intelligent devices include a sweeping robot, an air purifier and a bedside lamp. The relationships between the above three intelligent devices and the attribute information of the three intelligent devices may be constructed in advance. The operation provided by each of the intelligent devices may include various forms and contents. The forms and contents of the attribute information are not limited in the present invention. [0111] The relationships between the intelligent devices and the attribute information of the intelligent devices may be constructed in multiple forms.
For example, when the user authorizes the control device to control an intelligent device, attribute information of the intelligent device is determined based on information on the intelligent device (such as the device type and the content of the specification corresponding to the device), and a relationship between the intelligent device and the attribute information of the intelligent device is constructed. A determination manner, a determination time and the like of the relationships between the intelligent devices and the attribute information of the intelligent devices constructed in advance are not limited in the disclosure. [0112] The operation (S13) that a target intelligent device having attribute information matched with the keywords is determined from intelligent devices may be implemented in multiple manners. For example, a target intelligent device to be controlled may be determined from the intelligent devices according to a matching condition between the determined keywords and the attribute information corresponding to each of the intelligent devices. [0113] For example, a matching degree between the first keyword for characterizing the intention and the attribute information of each of the intelligent devices may be determined, and an intelligent device having the highest matching degree is determined as the target intelligent device. For example, the voice control instruction sent by the user is the instruction "sweep the bedroom". The determined first keyword for characterizing the intention includes "sweep". For each of the intelligent devices, a matching degree between the keyword "sweep" and the attribute information corresponding to the intelligent device is determined. For example, the attribute information of only the sweeping robot in the multiple intelligent devices includes "sweep", and thus the sweeping robot may be determined as the target intelligent device. In this way, the target intelligent device may be determined quickly according to the first keyword for characterizing the intention and the attribute information of each of the intelligent devices. [0114] In a possible implementation, candidate intelligent devices may also be determined according to the determined keywords and the attribute information of each of the intelligent devices, and the target intelligent device is determined from the candidate intelligent devices. For example, the intelligent devices having attribute information including one or more of the determined keywords may be determined as candidate intelligent devices. The target intelligent device is then determined from the candidate intelligent devices. For example, according to an execution condition of each of the candidate intelligent devices within a reference time period, an intelligent device which has performed an operation within the reference time period may be determined as the target intelligent device. Alternatively, for each of the candidate intelligent devices, a weight value of a keyword matched with the attribute information which corresponds to the candidate intelligent device may be determined, and the candidate intelligent device having the attribute information matched with the keyword having the largest weight value is determined as the target intelligent device. The rules for determining the candidate intelligent devices and determining the target intelligent device from the candidate intelligent devices and the like are not limited in the disclosure.
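One way to read the candidate-then-tie-break strategy in the preceding paragraph is sketched below: candidates are devices whose attribute information matches at least one keyword, and a candidate that performed an operation within the reference time period is preferred. The timestamps, the 60-second period and the function names are assumptions for illustration only, not part of the patent.

```python
import time

# Hypothetical sketch of candidate selection plus a recent-activity tie-break.
REFERENCE_PERIOD_SECONDS = 60  # e.g. a 1-minute reference time period

def candidates(keywords, device_attributes):
    """Devices whose attribute information matches at least one keyword."""
    return [device for device, attributes in device_attributes.items()
            if any(keyword in attributes for keyword in keywords)]

def pick_recently_active(candidate_devices, last_operation_time, now=None):
    """Prefer a candidate that performed an operation within the reference period."""
    now = now if now is not None else time.time()
    for device in candidate_devices:
        last = last_operation_time.get(device, float("-inf"))
        if now - last <= REFERENCE_PERIOD_SECONDS:
            return device
    return None

attributes = {"bedside lamp": {"adjust to a higher luminance"},
              "television": {"adjust to a higher luminance"}}
last_operations = {"bedside lamp": time.time() - 30}   # turned on 30 s ago
found = candidates(["adjust to a higher luminance"], attributes)
print(pick_recently_active(found, last_operations))    # -> bedside lamp
```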
[0115] Therefore, in a case that there are a large number of intelligent devices and matching is difficult, the target intelligent device may be determined flexibly and accurately. The manner and rule for determining the target intelligent device having the attribute information matched with the keywords are not limited in the disclosure. [0116] As shown in Fig. 1, in S14, the target intelligent device is controlled to perform an operation indicated by the voice recognition result. [0117] For example, the mobile phone of the user may control the target intelligent device to perform an operation indicated by the voice recognition result. For example, if the target intelligent device is determined to be the sweeping robot, the sweeping robot is controlled to perform the operation indicated by the voice recognition result, such as sweeping the bedroom. The mobile phone of the user may further send the determined control instruction instructing the sweeping robot to sweep the bedroom to a server, and the server sends the control instruction to the sweeping robot. Alternatively, the mobile phone of the user may directly send the control instruction to the sweeping robot (for example, the mobile phone of the user is authorized to control the sweeping robot to perform the sweeping operation) to enable the sweeping robot to perform the operation indicated by the voice recognition result. The manner of controlling the target intelligent device to perform the operation indicated by the voice recognition result is not limited in the disclosure. [0118] Fig. 2 is a flow chart of a method for controlling a device according to an exemplary embodiment. In a possible implementation, as shown in Fig. 2, the operation (S13) that the target intelligent device having attribute information matched with the keywords is determined from intelligent devices may include S1301 to S1303. [0119] At S1301, the first keyword for characterizing an intention is matched with attribute information corresponding to each of the intelligent devices. [0120] At S1302, in response to attribute information of at least two intelligent devices being matched successfully with the first keyword for characterizing an intention, the second keyword for characterizing basic information is matched with basic information of the at least two intelligent devices. An intelligent device having basic information successfully matched with the second keyword for characterizing basic information is determined as the target intelligent device. [0121] At S1303, in response to attribute information of only one intelligent device being successfully matched with the first keyword for characterizing an intention, the second keyword for characterizing basic information is matched with basic information of the intelligent device. If the second keyword for characterizing basic information is matched with the basic information of the intelligent device successfully, the intelligent device is determined as the target intelligent device. [0122] As mentioned above, the basic information of the intelligent device may include one or more of a name, an identifier, a type, a position and a characteristic of the intelligent device. The keywords include a first keyword for characterizing an intention and a second keyword for characterizing basic information, which are not repeated here. [0123] In a possible implementation, relationships between the intelligent devices and the basic information of the intelligent devices may be constructed in advance.
For example, the basic information of the sweeping robot may include a name such as "small hard worker" (a nickname set by the user), a type such as robot, and positions such as the living room and the bedroom. The relationships between the sweeping robot and the basic information of the sweeping robot may be constructed in advance. [0124] When the user authorizes the control device to control an intelligent device, the basic information of the intelligent device may be determined according to information on the intelligent device (such as the device model, the content of the specification corresponding to the device, and initial setting information), and a relationship between the intelligent device and the basic information of the intelligent device is constructed. A determination manner, a determination time and the like of the relationships between the intelligent devices and the basic information of the intelligent devices constructed in advance are not limited in the disclosure. [0125] For example, the user utters a voice control instruction "please help me to query a pm2.5 value in the bedroom". The mobile phone of the user determines that the first keyword for characterizing an intention may include "pm2.5 value query". The mobile phone of the user may match the first keyword for characterizing the intention with attribute information corresponding to each of the intelligent devices. For example, the intelligent devices that can be controlled by the mobile phone of the user include an air purifier and an intelligent loudspeaker box (for example, the attribute information corresponding to each of the air purifier and the intelligent loudspeaker box includes "pm2.5 value query"). The mobile phone of the user may determine that attribute information of two intelligent devices is matched successfully with the first keyword for characterizing the intention. [0126] In some optional embodiments, in response to attribute information of at least two intelligent devices being matched successfully with the first keyword for characterizing the intention, the second keyword for characterizing basic information is matched with basic information of the at least two intelligent devices, and an intelligent device having the basic information successfully matched with the second keyword for characterizing basic information is determined as the target intelligent device. [0127] For example, as mentioned above, in response to the attribute information of at least two intelligent devices (such as the air purifier and the intelligent loudspeaker box) being matched successfully with the first keyword for characterizing the intention, the second keyword for characterizing basic information determined by the mobile phone of the user may include the keyword "bedroom". The mobile phone of the user may match the keyword "bedroom" for characterizing the basic information with the basic information of the air purifier and the basic information of the intelligent loudspeaker box. For example, the basic information of the air purifier includes "bedroom" while the basic information of the intelligent loudspeaker box does not include "bedroom". The mobile phone of the user may determine the air purifier as an intelligent device matched successfully with the keyword "bedroom", and may determine the air purifier as the target intelligent device.
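A minimal sketch of this two-stage matching (S1301 to S1303) is given below, under the assumption that both attribute information and basic information are stored as simple sets of phrases; the data and function names are illustrative, not from the patent.

```python
# Hypothetical sketch of S1301-S1303: first match the intention keyword against
# attribute information; if several devices match, disambiguate with the
# basic-information keyword; if only one matches, confirm it with that keyword.

DEVICES = {
    "air purifier": {
        "attributes": {"pm2.5 value query", "purification mode"},
        "basic_info": {"bedroom"},
    },
    "intelligent loudspeaker box": {
        "attributes": {"pm2.5 value query", "play music"},
        "basic_info": {"living room"},
    },
}

def match_target(intention_keyword, basic_info_keyword):
    matched = [name for name, info in DEVICES.items()
               if intention_keyword in info["attributes"]]            # S1301
    if len(matched) >= 2:                                             # S1302
        matched = [name for name in matched
                   if basic_info_keyword in DEVICES[name]["basic_info"]]
    elif len(matched) == 1:                                           # S1303
        if basic_info_keyword not in DEVICES[matched[0]]["basic_info"]:
            matched = []
    return matched[0] if matched else None

# "please help me to query a pm2.5 value in the bedroom"
print(match_target("pm2.5 value query", "bedroom"))  # -> air purifier
```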
[0128] In this way, in a case that multiple intelligent devices having attribute information successfully matched with the first keyword for characterizing the intention are determined and acquired according to the first keyword for characterizing the intention and the attribute information of each of the intelligent devices, the target intelligent device may be determined accurately and efficiently in combination with the second keyword for characterizing basic information and the basic information of the above intelligent devices having attribute information successfully matched with the first keyword for characterizing the intention. A manner of matching the first keyword for characterizing the intention with the attribute information corresponding to each of the intelligent devices and a manner of matching the second keyword for characterizing basic information with the basic information of the at least two intelligent devices are not limited in the disclosure. [0129] In some optional embodiments, in response to attribute information of one intelligent device being matched successfully with the first keyword for characterizing the intention, the second keyword for characterizing basic information is matched with basic information of the one intelligent device. If the second keyword for characterizing basic information is matched with the basic information of the intelligent device successfully, the intelligent device is determined as the target intelligent device. [0130] For example, the user utters a voice control instruction "please help me to query a pm 2.5 value in the bedroom". A first keyword for characterizing an intention determined by the mobile phone of the user may include a keyword "pm2.5 value query". A second keyword for characterizing basic information may include a keyword "bedroom". The mobile phone of the user may match the first keyword for characterizing the intention with attribute information corresponding to each of the intelligent devices. For example, the mobile phone of the user may determine that an air purifier in the intelligent devices controlled by the mobile phone has attribute information matched successfully with the first keyword for characterizing the intention, and the mobile phone of the user may match the keyword (bedroom) for characterizing the basic information with the basic information of the air purifier. For example, if the basic information of the air purifier includes "bedroom", the keyword (bedroom) for characterizing the basic information is matched successfully with the basic information of the air purifier, and the air purifier may be determined as the target intelligent device. [0131] In this way, in response to one intelligent device having attribute information matched successfully with the first keyword for characterizing the intention being determined and acquired according to the first keyword for characterizing the intention and the attribute information of each of the intelligent devices, the target intelligent device may be determined more accurately in combination with the second keyword for characterizing basic information and the basic information of the above intelligent device having attribute information matched successfully with the first keyword for characterizing the intention. 
A manner of matching the first keyword for characterizing the intention with the attribute information corresponding to each of the intelligent devices and a manner of matching the second keyword for characterizing basic information with the basic information of the intelligent device are not limited by the disclosure. [0132] Fig. 3 is a flow chart of a method for controlling a device according to an exemplary embodiment. In a possible implementation, as shown in Fig. 3, the operation (S13) that the target intelligent device having attribute information matched with the keywords is determined from intelligent devices may include S1304 to S1305. [0133] At S1304, candidate intelligent devices having attribute information matched with the keywords are determined from the intelligent devices. [0134] At S1305, in response to there being an intelligent device out of the candidate intelligent devices which has performed an operation in a preset reference time period, the intelligent device is determined as the target intelligent device. [0135] The reference time period may be a time interval set in advance, e.g., a preset time period (1 minute, 5 minutes and the like) before a preset time moment, or a preset time period (1 minute, 5 minutes and the like) before a current time moment. The operation performed within the preset reference time period may be controlled by the user in various manners. For example, the user may manually perform the operation or control the operation through a voice control instruction. The operation may be any operation capable of being performed by the intelligent device. A duration of the reference time period, a manner for executing the operation and the like are not limited in the disclosure. [0136] For example, the mobile phone of the user may determine one or more intelligent devices having attribute information matched with the keywords, and determine the one or more intelligent devices as the candidate intelligent devices. For example, intelligent devices having attribute information including at least one keyword may be determined as candidate intelligent devices. Alternatively, a matching degree between the attribute information of each of the intelligent devices and the keywords may be further determined, and the candidate intelligent devices are determined according to the matching degrees. For example, an intelligent device having a matching degree greater than or equal to a matching threshold is determined as a candidate intelligent device. A manner of determining the candidate intelligent devices having the attribute information matched with the keywords, a manner for determining the matching degree, a value and a setting manner of the matching threshold and the like are not limited in the disclosure. [0137] The candidate intelligent devices may include multiple devices of the same type (such as multiple bedside lamps distributed in different bedrooms), and may also include multiple devices of different types (such as bedside lamps and televisions). For example, the mobile phone of the user receives a voice control instruction (adjust to a higher luminance) at 20:01. The mobile phone of the user may determine candidate intelligent devices from multiple intelligent devices (such as the bedside lamp, the sweeping robot and the television). For example, the determined candidate intelligent devices include the bedside lamp and the television (for example, attribute information of each of the bedside lamp and the television includes "adjust to a higher luminance").
The mobile phone of the user determines that the bedside lamp is turned on at 20:00 (for example, the user turns on the bedside lamp manually). The mobile phone of the user may determine, among the multiple candidate intelligent devices (such as the bedside lamp and the television), the bedside lamp that has performed an operation within the reference time period (for example, the reference time period is 1 min) as the target intelligent device. [0138] In some optional embodiments, the determined candidate intelligent devices are bedside lamps distributed in different bedrooms. If the candidate intelligent device that has performed an operation within the reference time period is determined to be the bedside lamp in the master bedroom, the mobile phone of the user may determine the bedside lamp in the master bedroom as the target intelligent device. [0139] In this way, the target intelligent device may be determined accurately, thereby improving the accuracy and the intelligence level of the voice interaction. [0140] Fig. 4 is a flow chart of a method for controlling a device according to an exemplary embodiment. In a possible implementation, as shown in Fig. 4, the method further includes S15 and S16. [0141] At S15, historical control data is acquired. The historical control data includes relationships between control instructions determined historically and intelligent devices operating in response to the control instructions, and the control instruction is in any one of the following forms: the voice recognition result, or at least one keyword included in the voice recognition result. [0142] At S16, for each of the control instructions, an intelligent device having the highest probability that the intelligent device operates in response to the control instruction is determined according to the historical control data. [0143] The operation (S13) that a target intelligent device having attribute information matched with the keywords is determined from intelligent devices may include S1306 and S1307. [0144] At S1306, an intelligent device having the highest probability that the intelligent device operates in response to a control instruction that corresponds to the voice recognition result is determined. [0145] At S1307, the determined intelligent device is determined as the target intelligent device. [0146] It should be noted that operations S15 and S16 may be executed before operations S11 and S12, and may also be executed after operations S11 and S12, as long as operations S15 and S16 are executed before S1306, which is not limited in the disclosure. [0147] A relationship between each control instruction sent historically and an intelligent device that is controlled to operate in response to the control instruction is recorded and stored, and the historical control data may include relationships between historical control instructions and intelligent devices operating in response to the historical control instructions. [0148] Further, the control instruction may be determined according to the voice recognition result. The control instruction may be in various forms. The voice recognition result may be directly taken as the control instruction, or at least one keyword recognized from the voice recognition result may be taken as the control instruction. For a case where at least one keyword is taken as the control instruction, the first keyword for characterizing the intention may be taken as the control instruction.
It is assumed that the voice recognition result is "the sweeper sweeps the bedroom" and that the voice recognition result is used to control the sweeper to sweep. In this case, the recognition result may be taken as the control instruction and stored in association with the sweeper. Alternatively, at least one keyword included in the voice recognition result (for example, the single keyword "sweep", or two keywords such as "sweeper sweeps" or "sweep bedroom") is taken as the control instruction and stored in association with the sweeper. [0149] The control device may send the above stored data to the server for data analysis, and may also directly perform data analysis on the above data to determine the relationships between control instructions and the intelligent devices operating in response to the control instructions. The historical control data containing the relationships may be stored in the control device, or may be stored in the server. The form and content of the historical control data, a determination manner, a storage manner and an acquisition manner of the historical control data and the like are not limited in the disclosure. [0150] For example, after the user authorizes the mobile phone to control the sweeping robot, the air purifier and the bedside lamp, the user may control these intelligent devices frequently via voice interaction in daily life. For example, after the user authorizes the mobile phone to control the intelligent devices, the user frequently sends a voice control instruction "adjust the bedside lamp to higher luminance". The mobile phone of the user obtains a historical voice recognition result (for example, "adjust the bedside lamp to higher luminance") by performing voice recognition on a sound signal. The keywords acquired from the historical voice recognition result may include the keyword "adjust to higher luminance" (the first keyword for characterizing the intention). The mobile phone of the user may further determine that a target intelligent device corresponding to the keyword is the bedside lamp and thus determine a relationship between the keyword "adjust to higher luminance" and the bedside lamp. The mobile phone of the user may send the determined relationship to the server or store it locally. [0151] In a possible implementation, for each of the control instructions, an intelligent device having the highest probability that the intelligent device operates in response to the control instruction may be determined according to the historical control data. [0152] For example, the mobile phone of the user or the server may store the acquired historical control data, process the historical control data, and perform statistics and analysis to determine, for each control instruction contained in the historical control data, the intelligent device having the highest probability of operating in response to that control instruction. For example, the keyword "adjust to higher luminance" appears ten times in the historical voice recognition results in the historical control data; nine occurrences correspond to the bedside lamp and one occurrence corresponds to the television (it may be determined that the frequency with which the keyword "adjust to higher luminance" corresponds to the bedside lamp is higher than the frequency with which it corresponds to the television).
The intelligent device (such as the bedside lamp) having the highest probability that the intelligent device operates in response to the keyword ("adjust to higher luminance") may be determined based on the historical control data. [0153] Therefore, for each of the control instructions, the intelligent device having the highest probability that the intelligent device operates in response to the control instruction can be determined accurately and comprehensively, and the operation habits of the user can be met to a great extent, so that the system keeps learning from the operation data of the user and thus improves its intelligence level. The manner of determining the intelligent device having the highest probability that the intelligent device operates in response to each keyword is not limited in the disclosure. [0154] In a possible implementation, the intelligent device having the highest probability that the intelligent device operates in response to the keyword may be determined, and the determined intelligent device is determined as the target intelligent device. [0155] For example, the mobile phone of the user may perform voice recognition on the received sound signal to obtain a voice recognition result (for example, "query the PM2.5 value"). The mobile phone of the user may acquire the historical control data. It is assumed that the mobile phone of the user may acquire, from the server, the historically stored relationships between the control instruction "query the PM2.5 value" and the intelligent devices. It is also assumed that the number of such control instructions received historically by the mobile phone of the user is twenty, that the intelligent device corresponding to nineteen of the twenty control instructions is the air purifier, and that the intelligent device corresponding to the remaining one is the intelligent speaker. The mobile phone of the user may determine, according to the historical control data, that the intelligent device having the highest probability of operating in response to the control instruction is the air purifier. The intelligent device having the highest probability of operating in response to the present keyword may thus be determined to be the air purifier, and the air purifier is determined as the target intelligent device. [0156] Further, the present control instruction may not be exactly consistent with any stored historical control instruction. In this case, the historical control instruction having the highest degree of matching with the present control instruction may be determined as the historical control instruction corresponding to the present control instruction, and the intelligent device having the highest probability of operating in response to that historical control instruction is determined as the intelligent device corresponding to the present control instruction. The matching manner is not repeated herein. [0157] Therefore, the target intelligent device to be controlled may be determined quickly and accurately in combination with the historical control data, thereby improving the intelligence level of the voice interaction even when the voice control instruction contains limited control information. [0158] Fig. 5 is a flow chart of a method for controlling a device according to an exemplary embodiment. In a possible implementation, as shown in Fig.
5, the operation (S13) that a target intelligent device having attribute information matched with the keywords is determined from the intelligent devices may include S1308 to S1310. [0159] At S1308, candidate intelligent devices having attribute information matched with the keywords are determined from the intelligent devices. [0160] At S1309, for each of the candidate intelligent devices, a weight of a keyword matched with the attribute information of the candidate intelligent device is determined. [0161] At S1310, the candidate intelligent device having the attribute information matched with a keyword having the largest weight is determined as the target intelligent device. [0162] The manner of determining the candidate intelligent devices having attribute information matched with the keywords is as mentioned above and is not repeated herein. [0163] In this embodiment, different calculation weights may be set for the keywords, for example according to characteristics such as the part of speech of each keyword. For example, a higher weight may be set for a verb. Alternatively, different calculation weights are set according to the types of the keywords. For example, a higher weight may be set for the first keyword for characterizing an intention. For each of the candidate intelligent devices, the weight of the keyword matched with the attribute information of the candidate intelligent device may be determined, and the candidate intelligent device having attribute information matched with the keyword having the largest weight is determined as the target intelligent device. [0164] For example, the mobile phone of the user performs recognition on a received sound signal to obtain a voice recognition result, and determines keywords using the voice recognition result, which include "sweep" and "mode". The keyword "sweep", a verb, is assigned a higher weight such as 60%, and the keyword "mode", a noun, is assigned a lower weight such as 40%. It is assumed that the candidate intelligent devices having attribute information matched with the keywords include a sweeping robot (for example, the attribute information of the sweeping robot is matched with the keyword "sweep") and an air purifier (for example, the attribute information of the air purifier is matched with the keyword "mode"). The mobile phone of the user may determine the sweeping robot, whose attribute information is matched with the keyword having the largest weight (namely "sweep"), as the target intelligent device. [0165] In such a way, the target intelligent device may be determined from the candidate intelligent devices quickly and accurately. The manner of setting the weight of each keyword is not limited in the disclosure (a combined code sketch of this weight-based selection and the count-based selection described next is given further below, after the steps of Fig. 7). [0166] Fig. 6 is a flow chart of a method for controlling a device according to an exemplary embodiment. In a possible implementation, as shown in Fig. 6, the operation (S13) that a target intelligent device having attribute information matched with the keywords is determined from intelligent devices may include S1311 to S1313. [0167] At S1311, candidate intelligent devices having attribute information matched with the keywords are determined from the intelligent devices. [0168] At S1312, for each of the candidate intelligent devices, the number of keywords matched with attribute information of the candidate intelligent device is determined.
[0169] At S1313, the candidate intelligent device having the attribute information which is matched with the greatest number of keywords is determined as the target intelligent device. [0170] The manner of determining the candidate intelligent devices having attribute information matched with the keywords is as mentioned above and will not be repeated herein. [0171] For example, the mobile phone of the user performs recognition on a received sound signal to obtain a voice recognition result, determines keywords using the voice recognition result, and determines multiple candidate intelligent devices according to the keywords. For example, an intelligent device having attribute information containing at least one keyword may be determined as a candidate intelligent device. The mobile phone of the user may determine the number of keywords matched with the attribute information of each candidate intelligent device, and determine the candidate intelligent device having the attribute information which is matched with the greatest number of keywords as the target intelligent device. [0172] For example, in a case where the attribute information corresponding to one candidate intelligent device is matched with all of the keywords and the attribute information corresponding to the other candidate intelligent devices is matched with only some of the keywords, the candidate intelligent device having the attribute information matched with the greatest number of keywords (in this case, all of the keywords) may be determined as the target intelligent device. Alternatively, in a case where multiple keywords are determined and the attribute information corresponding to each of the candidate intelligent devices is matched with only some of the keywords, the candidate intelligent device having the attribute information matched with the greatest number of keywords may be determined as the target intelligent device. [0173] In such a manner, the target intelligent device may be determined from the candidate intelligent devices quickly and accurately. The manner and the applicable rule of determining the target intelligent device from the intelligent devices are not limited in the disclosure. [0174] Fig. 7 is a flow chart of a method for controlling a device according to an exemplary embodiment. In a possible implementation, as shown in Fig. 7, the operation (S13) that a target intelligent device having attribute information matched with the keywords is determined from intelligent devices may include S1314. [0175] At S1314, a target intelligent device having attribute information matched with the first keyword for characterizing the intention is determined from the intelligent devices. [0176] As shown in Fig. 7, the operation (S14) that the target intelligent device is controlled to perform an operation indicated by the voice recognition result may include S1401. [0177] At S1401, the target intelligent device is controlled to perform the operation indicated by the voice recognition result within the application range. [0178] As mentioned above, the keywords may include the third keyword for characterizing the application range, and the target intelligent device having the attribute information matched with the first keyword for characterizing the intention may be determined, which is not repeated herein.
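For illustration only, the candidate selection of Figs. 5 and 6 and the application-range control of Fig. 7 can be sketched together as follows. The device attribute records, the keyword weights and the names pick_target and control are assumptions made for this sketch and are not part of the disclosed method; with unit weights, the weight-based rule of Fig. 5 reduces to the count-based rule of Fig. 6.

```python
# Hypothetical attribute records and keyword weights, for illustration only.
DEVICES = {
    "sweeping robot": {"sweep", "charge", "speed"},
    "air purifier": {"mode", "air humidity"},
}
KEYWORD_WEIGHTS = {"sweep": 0.6, "mode": 0.4}  # e.g. verbs weighted higher

def pick_target(keywords, devices=DEVICES, weights=KEYWORD_WEIGHTS, rule="weight"):
    """Pick the target from the candidates (devices matching >= 1 keyword).

    rule="weight": candidate matched with the largest-weight keyword (S1308-S1310).
    rule="count":  candidate matched with the greatest number of keywords (S1311-S1313).
    """
    best, best_score = None, -1.0
    for name, attributes in devices.items():
        matched = [k for k in keywords if k in attributes]
        if not matched:
            continue  # not a candidate intelligent device
        score = max(weights.get(k, 0.0) for k in matched) if rule == "weight" else len(matched)
        if score > best_score:
            best, best_score = name, score
    return best

def control(operation, intention_keyword, application_range=None):
    """Fig. 7 flow: the intention keyword selects the target device (S1314),
    and the operation is performed within the application range, if any (S1401)."""
    target = pick_target([intention_keyword])
    scope = f" within '{application_range}'" if application_range else ""
    print(f"{target}: {operation}{scope}")

print(pick_target(["sweep", "mode"]))  # -> sweeping robot (weight 0.6 > 0.4)
control("sweep", "sweep", application_range="master bedroom")
# -> sweeping robot: sweep within 'master bedroom'
```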
[0179] For example, the mobile phone of the user performs recognition on a received sound signal to obtain a voice recognition result, and determines keywords using the voice recognition result, which include, for example, a keyword "sweep" for characterizing the intention and a keyword "master bedroom" for characterizing the application range. The mobile phone of the user may determine the target intelligent device according to the keyword "sweep" for characterizing the intention and the attribute information corresponding to each intelligent device. For example, in a case where "sweep" included in the attribute information corresponding to the sweeping robot is matched with the keyword "sweep" for characterizing the intention, the target intelligent device having the attribute information matched with the first keyword for characterizing the intention is determined to be the sweeping robot. The mobile phone of the user may control the target intelligent device to perform the operation indicated by the voice recognition result within the application range. For example, in a case that the keyword "master bedroom" for characterizing the application range is included, the mobile phone of the user may control the sweeping robot to perform a sweeping operation within the range of the master bedroom. [0180] Therefore, the intelligence level of controlling the target intelligent device can be improved, and the operation indicated by the voice recognition result is performed within the application range, thereby ensuring the operation efficiency of the intelligent device. Application Example [0181] Hereinafter, an application example according to an embodiment of the disclosure is given in combination with an exemplary application scenario in which a user controls an intelligent device, to facilitate understanding of the flow of the method for controlling the device. It should be understood by those skilled in the art that the following application example is merely for the purpose of facilitating understanding of the embodiments of the disclosure and should not be regarded as limiting the embodiments of the invention, the scope of which is defined by the claims. [0182] Fig. 8 is a schematic diagram showing an application scenario of a method for controlling a device according to an exemplary embodiment. In this application example, a user authorizes a mobile phone to control a sweeping robot, an air purifier and the like (intelligent devices). In this application example, the mobile phone of the user may determine and store attribute information of the sweeping robot and the air purifier. For example, the attribute information of the sweeping robot may include sweeping, charging, a speed and the like, and the attribute information of the air purifier may include a mode, air humidity and the like. [0183] In this application example, the user utters a voice control instruction to the mobile phone in a room, such as "how much the air humidity is". The mobile phone of the user performs voice recognition on a received sound signal to obtain a voice recognition result (for example, the voice recognition result is the text content "how much the air humidity is"). The mobile phone of the user determines keywords using the voice recognition result. For example, the keywords include "air humidity" and "how much".
The mobile phone of the user determines a target intelligent device having attribute information matched with the keywords from the intelligent devices according to the keywords and the attribute information of the intelligent devices. For example, in a case where it is determined, according to the stored attribute information of each intelligent device, that the attribute information corresponding to the air purifier includes the information "air humidity", the mobile phone of the user may determine the air purifier as the target intelligent device to be controlled. [0184] In this application example, the mobile phone of the user may control the air purifier to perform the operation indicated by the voice recognition result. For example, as shown in Fig. 8, the mobile phone of the user may control an air purifier disposed in another room to start and to determine the air humidity. [0185] Fig. 9 is a block diagram of an apparatus for controlling a device according to an exemplary embodiment. Referring to Fig. 9, the apparatus includes a recognition result acquiring module 21, a keyword determining module 22, a first determining module 23 and a control module 24. [0186] The recognition result acquiring module 21 is configured to perform voice recognition on a received sound signal to obtain a voice recognition result. [0187] The keyword determining module 22 is configured to determine keywords using the voice recognition result. [0188] The first determining module 23 is configured to determine a target intelligent device having attribute information matched with the keywords from intelligent devices. [0189] Relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information. [0190] The control module 24 is configured to control the target intelligent device to perform an operation indicated by the voice recognition result. [0191] Fig. 10 is a block diagram of an apparatus for controlling a device according to an exemplary embodiment. Referring to Fig. 10, in a possible implementation, relationships between the intelligent devices and basic information of the intelligent devices are constructed in advance. The basic information includes one or more of the following information: a name, an identifier, a type, a position and a characteristic of the intelligent device. The keywords include a first keyword for characterizing an intention and a second keyword for characterizing basic information. [0192] The first determining module 23 includes a first information matching sub-module 2301, a first determining sub-module 2302 and a second determining sub-module 2303. [0193] The first information matching sub-module 2301 is configured to match the first keyword for characterizing the intention with attribute information corresponding to each of the intelligent devices. [0194] The first determining sub-module 2302 is configured to match, in response to attribute information of at least two intelligent devices being matched successfully with the first keyword for characterizing the intention, the second keyword for characterizing basic information with the basic information of the at least two intelligent devices, and determine the intelligent device having the basic information matched successfully with the second keyword for characterizing basic information as the target intelligent device.
[0195] The second determining sub-module 2303 is configured to match, in response to attribute information of one intelligent device being matched successfully with the first keyword for characterizing the intention, the second keyword for characterizing basic information with the basic information of the intelligent device, and determine, in response to the second keyword for characterizing basic information being matched successfully with the basic information of the intelligent device, the intelligent device as the target intelligent device. [0196] Referring to Fig. 10, in a possible implementation, the first determining module 23 includes a third determining sub-module 2304 and a fourth determining sub-module 2305. [0197] The third determining sub-module 2304 is configured to determine candidate intelligent devices having attribute information matched with the keywords from intelligent devices. [0198] The fourth determining sub-module 2305 is configured to determine, in response to there being an intelligent device out of the candidate intelligent devices which has performed an operation in a preset reference time period, the intelligent device as the target intelligent device. [0199] Referring to Fig. 10, in a possible implementation, the apparatus further includes a data acquiring module 25 and a second determining module 26. [0200] The data acquiring module 25 is configured to acquire historical control data. The historical control data includes relationships between control instructions determined historically and intelligent devices which operate in response to the control instructions. The control instruction is any one of: the voice recognition result, or at least one keyword included in the voice recognition result. [0201] The second determining module 26 is configured to, for each of the control instructions, determine, according to the historical control data, an intelligent device having the highest probability that the intelligent device operates in response to the control instruction. [0202] The first determining module 23 includes a fifth determining sub-module 2306 and a sixth determining sub-module 2307. [0203] The fifth determining sub-module 2306 is configured to determine an intelligent device having the highest probability that the intelligent device operates in response to a control instruction that corresponds to the voice recognition result. [0204] The sixth determining sub-module 2307 is configured to determine the determined intelligent device as the target intelligent device. [0205] Referring to Fig. 10, in a possible implementation, the first determining module 23 includes a seventh determining sub-module 2308, an eighth determining sub-module 2309 and a ninth determining sub-module 2310. [0206] The seventh determining sub-module 2308 is configured to determine candidate intelligent devices having attribute information matched with the keywords from intelligent devices. [0207] The eighth determining sub-module 2309 is configured to, for each of the candidate intelligent devices, determine a weight of a keyword matched with attribute information of the candidate intelligent device. [0208] The ninth determining sub-module 2310 is configured to determine a candidate intelligent device having attribute information matched with a keyword having the largest weight as the target intelligent device. [0209] Referring to Fig.
10, in a possible implementation, the first determining module 23 includes a tenth determining sub-module 2311, an eleventh determining sub-module 2312 and a twelfth determining sub-module 2313. [0210] The tenth determining sub-module 2311 is configured to determine candidate intelligent devices having attribute information matched with the keywords from intelligent devices. [0211] The eleventh determining sub-module 2312 is configured to, for each of the candidate intelligent devices, determine the number of keywords matched with attribute information of the candidate intelligent device. [0212] The twelfth determining sub-module 2313 is configured to determine the candidate intelligent device having the attribute information which is matched with the greatest number of keywords as the target intelligent device. [0213] Referring to Fig. 10, in a possible implementation, the keywords include a first keyword for characterizing an intention and a third keyword for characterizing an application range. [0214] The first determining module 23 includes a thirteenth determining sub-module 2314. [0215] The thirteenth determining sub-module 2314 is configured to determine a target intelligent device having attribute information matched with the first keyword for characterizing the intention from the intelligent devices. [0216] The control module 24 includes a control sub-module 2401. [0217] The control sub-module 2401 is configured to control the target intelligent device to perform an operation indicated by the voice recognition result within the application range. [0218] Regarding the devices in the above embodiments, the implementations for performing operations by individual modules have been described in detail in the method embodiments, which are not elaborated herein. [0219] Fig. 11 is a block diagram of an apparatus for controlling a device according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet, a medical device, fitness equipment, a personal digital assistant, and the like. [0220] Referring to Fig. 11, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816. [0221] The processing component 802 typically controls overall operations of the apparatus 800, such as the operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to perform instructions to perform all or a part of the steps in the above described methods. Moreover, the processing component 802 may include one or more modules which facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802. [0222] The memory 804 is configured to store various types of data to support the operation of the apparatus 800. Examples of such data include instructions for any applications or methods operated on the apparatus 800, contact data, telephone book data, messages, pictures, video and the like. 
The memory 804 may be implemented using any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk. [0223] The power component 806 provides power to various components of the apparatus 800. The power component 806 may include a power management system, one or more power sources, and other components associated with the generation, management, and distribution of power for the apparatus 800. [0224] The multimedia component 808 includes a screen providing an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touching, sliding, and gestures on the touch panel. The touch sensors may not only sense a boundary of a touching or sliding action, but also sense a period of time and a pressure associated with the touching or sliding operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and the rear camera may receive external multimedia data when the apparatus 800 is in an operation mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have a focal length and optical zoom capability. [0225] The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone ("MIC") configured to receive an external audio signal when the apparatus 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker to output audio signals. [0226] The I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module. The above peripheral interface module may be a keyboard, a click wheel, a button, and the like. The button may include, but is not limited to, a home button, a volume button, a starting button, and a locking button. [0227] The sensor component 814 includes one or more sensors configured to provide status assessments for the apparatus 800 in various aspects. For example, the sensor component 814 may detect an open/closed status of the apparatus 800 and relative positioning of components, for example, the display and the keypad of the apparatus 800. The sensor component 814 may also detect a change in position of the apparatus 800 or a component of the apparatus 800, contact or non-contact of a user with the apparatus 800, an orientation or an acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
The sensor component 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge-coupled device (CCD) image sensor, which is used in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor. [0228] The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 can access a wireless network based on a communication standard, such as wireless fidelity (WiFi), 2nd generation (2G) or 3rd generation (3G) mobile communication, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on a radio frequency identification (RFID) technology, an infrared data association (IrDA) technology, an ultra-wideband (UWB) technology, a Bluetooth (BT) technology, and other technologies. [0229] In exemplary embodiments, the apparatus 800 may be implemented with one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above described methods. [0230] In exemplary embodiments, a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, is further provided. The above instructions may be executable by the processor 820 in the apparatus 800, for performing the above-described methods. For example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disk-read only memory (CD-ROM), a magnetic tape, a floppy disc, an optical data storage device, and the like. [0231] Other embodiments of the disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. The disclosure is intended to cover any variations, uses or adaptations of the disclosure which follow the general principles thereof and include common general knowledge and conventional technical means in the technical field not disclosed in the disclosure. The specification and the embodiments are only considered as exemplary, and the scope of the invention is defined by the following claims. [0232] It should be understood that the present disclosure is not limited to the exact construction described above and illustrated in the accompanying drawings, and various modifications and changes can be made without departing from the scope thereof. The scope of the disclosure is only limited by the appended claims.
Claims (15) [0001] A method for controlling a device, characterized in that, the method comprises: performing (S11) voice recognition on a received sound signal to obtain a voice recognition result; determining (S12) one or more keywords using the voice recognition result; determining (S13) a target intelligent device having attribute information that is matched with the one or more keywords from intelligent devices, wherein relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information; and controlling (S14) the target intelligent device to perform an operation indicated by the voice recognition result. [0002] The method of claim 1, wherein relationships between the intelligent devices and basic information of the intelligent devices are constructed in advance, and the basic information comprises one or more of a name, an identifier, a type, a position and a characteristic of the intelligent device, and the one or more keywords comprise a first keyword for characterizing an intention and a second keyword for characterizing basic information; and wherein the determining (S13) the target intelligent device having attribute information matched with the one or more keywords from the intelligent devices comprises: matching (S1301) the first keyword for characterizing the intention with attribute information corresponding to each of the intelligent devices; in response to the attribute information of at least two intelligent devices being matched with the first keyword for characterizing the intention, matching (S1302) the second keyword for characterizing basic information with the basic information of each of the at least two intelligent devices, and determining (S1302) the intelligent device having the basic information matched with the second keyword for characterizing basic information as the target intelligent device; and in response to the attribute information of one intelligent device being matched with the first keyword for characterizing the intention, matching (S1303) the second keyword for characterizing basic information with the basic information of the one intelligent device, and determining (S1303) the one intelligent device as the target intelligent device in response to the second keyword for characterizing basic information being matched with the basic information of the one intelligent device. [0003] The method of claim 1, wherein the determining (S13) the target intelligent device having attribute information matched with the one or more keywords from the intelligent devices comprises: determining (S1304) candidate intelligent devices having attribute information matched with the one or more keywords from the intelligent devices; and in response to an intelligent device out of the candidate intelligent devices which has performed an operation existing in a preset reference time period, determining (S1305) the intelligent device as the target intelligent device. 
[0004] The method of claim 1, further comprising: acquiring historical control data (S15), wherein the historical control data comprises relationships between control instructions determined historically and intelligent devices operating in response to the control instructions, and wherein the control instruction comprises one of: the voice recognition result and at least one keyword comprised in the voice recognition result; and for each of the control instructions, determining (S16), based on the historical control data, an intelligent device having a highest probability that the intelligent device operates in response to the control instructions, and wherein the determining (S13) the target intelligent device having attribute information matched with the keywords from the intelligent devices comprises: determining (S1306) the intelligent device having a highest probability that the intelligent device operates in response to the control instruction that corresponds to the voice recognition result; and determining the determined intelligent device (S1307) as the target intelligent device. [0005] The method of claim 1, wherein the determining (S13) the target intelligent device having attribute information matched with the one or more keywords from the intelligent devices comprises: determining (S1308) candidate intelligent devices having attribute information matched with the one or more keywords from the intelligent devices; for each of the candidate intelligent devices, determining (S1309) a weight of a keyword matched with the attribute information of the candidate intelligent device; and determining (S1310) the candidate intelligent device having attribute information matched with the keyword having the largest weight as the target intelligent device. [0006] The method of claim 1, wherein the determining (S13) the target intelligent device having attribute information matched with the one or more keywords from the intelligent devices comprises: determining (S1311) candidate intelligent devices having attribute information matched with the one or more keywords from the intelligent devices; for each of the candidate intelligent devices, determining (S1312) a number of keywords matched with the attribute information of the candidate intelligent device; and determining (S1313) the candidate intelligent device having attribute information matched with a greatest number of keywords as the target intelligent device. [0007] The method of claim 1, wherein the one or more keywords comprises a first keyword for characterizing an intention and a third keyword for characterizing an application range,the determining (S13) the target intelligent device having attribute information matched with the one or more keywords from the intelligent devices comprises: determining (S1314) a target intelligent device having attribute information matched with the first keyword for characterizing the intention from the intelligent devices, and wherein the controlling (S14) the target intelligent device to perform the operation indicated by the voice recognition result comprises:controlling (S1401) the target intelligent device to perform an operation indicated by the voice recognition result within the application range. 
[0008] An apparatus for controlling a device, characterized in that, the apparatus comprises: a recognition result acquiring module (21), configured to perform voice recognition on a received sound signal to obtain a voice recognition result; a keyword determining module (22), configured to determine one or more keywords using the voice recognition result; a first determining module (23), configured to determine a target intelligent device having attribute information matched with the one or more keywords from intelligent devices, wherein relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information; and a control module (24), configured to control the target intelligent device to perform an operation indicated by the voice recognition result. [0009] The apparatus of claim 8, wherein relationships between the intelligent devices and basic information of the intelligent devices are constructed in advance, the basic information comprises one or more of a name, an identifier, a type, a position and a characteristic of the intelligent device, and the one or more keywords comprise a first keyword for characterizing an intention and a second keyword for characterizing basic information, andthe first determining module (23) comprises: a first information matching sub-module (2301), configured to match the first keyword for characterizing the intention with attribute information corresponding to each of the intelligent devices; a first determining sub-module (2302), configured to, in response to the attribute information of at least two intelligent devices being matched with the first keyword for characterizing the intention, match the second keyword for characterizing basic information with the basic information of each of the at least two intelligent devices, and determine the intelligent device having the basic information matched with the second keyword for characterizing basic information as the target intelligent device; and a second determining sub-module (2303), configured to, in response to the attribute information of one intelligent device being matched with the first keyword for characterizing the intention, match the second keyword for characterizing basic information with the basic information of the one intelligent device, and determine the one intelligent device as the target intelligent device in response to the second keyword for characterizing basic information being matched with the basic information of the one intelligent device. [0010] The apparatus of claim 8, wherein the first determining module comprises: a third determining sub-module (2304), configured to determine candidate intelligent devices having attribute information matched with the one or more keywords from the intelligent devices; and a fourth determining sub-module (2305), configured to, in response to an intelligent device out of the candidate intelligent devices which has performed an operation existing in a preset reference time period, determine the intelligent device as the target intelligent device. 
[0011] The apparatus of claim 8, wherein the apparatus further comprises: a data acquiring module (25), configured to acquire historical control data, wherein the historical control data comprises relationships between control instructions determined historically and intelligent devices operating in response to the control instructions, and wherein the control instruction comprises one of: the voice recognition result and at least one keyword comprised in the voice recognition result; and a second determining module (26), configured to, for each of the control instructions, determine, based on the historical control data, an intelligent device having a highest probability that the intelligent device operates in response to the control instruction, and wherein the first determining module (23) comprises: a fifth determining sub-module (2306), configured to determine the intelligent device having the highest probability that the intelligent device operates in response to the control instruction that corresponds to the voice recognition result; and a sixth determining sub-module (2307), configured to determine the determined intelligent device as the target intelligent device. [0012] The apparatus of claim 8, wherein the first determining module (23) comprises: a seventh determining sub-module (2308), configured to determine candidate intelligent devices having attribute information matched with the one or more keywords from the intelligent devices; an eighth determining sub-module (2309), configured to, for each of the candidate intelligent devices, determine a weight of a keyword matched with the attribute information of the candidate intelligent device; and a ninth determining sub-module (2310), configured to determine the candidate intelligent device having attribute information matched with the keyword having the largest weight as the target intelligent device. [0013] The apparatus of claim 8, wherein the first determining module (23) comprises: a tenth determining sub-module (2311), configured to determine candidate intelligent devices having attribute information matched with the one or more keywords from the intelligent devices; an eleventh determining sub-module (2312), configured to, for each of the candidate intelligent devices, determine a number of keywords matched with the attribute information of the candidate intelligent device; and a twelfth determining sub-module (2313), configured to determine the candidate intelligent device having attribute information matched with a greatest number of keywords as the target intelligent device, or wherein the one or more keywords comprises a keyword for characterizing an intention and a third keyword for characterizing an application range, the first determining module (23) comprises: a thirteenth determination sub-module (2314), configured to determine a target intelligent device having attribute information matched with the first keyword for characterizing the intention from the intelligent devices, and wherein the control module (24) comprises:a control sub-module (2401), configured to control the target intelligent device to perform an operation indicated by the voice recognition result within the application range. 
[0014] An apparatus for controlling a device, comprising: a processor (802); and a memory (804) configured to store processor-executable instructions, wherein the processor is configured to: perform voice recognition on a received sound signal to obtain a voice recognition result; determine one or more keywords using the voice recognition result; determine a target intelligent device having attribute information matched with the one or more keywords from intelligent devices, wherein relationships between the intelligent devices and attribute information of the intelligent devices are constructed in advance, and the attribute information characterizes an operation provided by the intelligent device corresponding to the attribute information; and control the target intelligent device to perform an operation indicated by the voice recognition result. [0015] A recording medium readable by a computer and having recorded thereon a computer program including instructions for executing the steps of a method for controlling a device according to any one of claims 1 to 7.