Patent Abstract:
UNPREDICTABLE VEHICLE NAVIGATION. The present invention relates to a system and method for controlling the movement of a vehicle (200). A current state (238) of the vehicle (200) is identified. The current state (238) comprises a current location of the vehicle (200). A next state (240) for the vehicle (200) is selected by a processing unit (1104). The next state (240) comprises a location near the vehicle (200). A value for an attribute of the next state (240) of the vehicle (200) is selected at random. The movement of the vehicle (200) is controlled to move the vehicle (200) from the current state (238) to the next state (240).
Publication Number: BR102013016268B1
Application Number: R102013016268-0
Filing Date: 2013-06-24
Publication Date: 2021-01-12
Inventors: Kimberly Mach; Robert P. Lutter; Trevor Lee Olson
Applicant: The Boeing Company
IPC Main Class:
Patent Description:

[0001] The present invention relates generally to systems and methods for controlling the movement of a vehicle. More particularly, the present disclosure relates to automatically controlling the movement of a vehicle in an unpredictable manner.
[0002] Manned and unmanned aerial vehicles can be used to perform a variety of military and other missions. For example, one type of mission that can be performed by an air vehicle is a surveillance mission. In that case, the air vehicle can be flown from a base location to a location near a point of interest. The air vehicle can then be flown in an area near the point of interest while surveillance equipment on board the air vehicle is trained on the point of interest. Surveillance equipment may include, for example, imaging systems and other on-board systems for obtaining information about the point of interest.
[0003] The point of interest that is the target of a surveillance mission or other mission by an air vehicle can be stationary or moving. For example, the point of interest can be a building, a fixed weapon system, or another stationary industrial, military, or other object of interest. Alternatively, the point of interest can be a moving individual, a vehicle, a convoy of vehicles, or another moving object of interest.
[0004] Various restrictions can limit the way in which an air vehicle can be controlled to carry out a mission in an area near a point of interest. These restrictions can be imposed, for example, by the capabilities of the air vehicle, by the capabilities of equipment on the air vehicle that is used to perform the mission, by weather conditions or other environmental conditions, by particular details of the mission to be performed, or by other factors or combinations of factors.
[0005] The performance capabilities of an air vehicle can limit the altitude and speed at which the air vehicle can be flown in an area near the point of interest during a mission. The capabilities of surveillance equipment or other equipment on board the air vehicle can dictate the maximum distance from the point of interest at which the air vehicle can be flown while carrying out a surveillance mission or another mission.
[0006] Topography, weather, or other environmental conditions can limit the area near a point of interest that can be used by an air vehicle to carry out a mission. For example, a storm, enemy radar deployments, or other conditions or combinations of conditions can create “no-fly zones” near the point of interest through which an air vehicle should not be flown. Time-on-station limitations, surveillance image quality requirements, or other particular requirements of a surveillance mission or other mission can further restrict how an air vehicle is flown in an area near a point of interest during the mission.
[0007] It may be desirable that an air vehicle not be detected by hostile forces as the air vehicle performs a mission in an area near a point of interest. The detection of an air vehicle performing a mission near the point of interest can allow hostile forces to take steps to prevent the mission. For example, in response to the detection of the presence of an air vehicle flying near the point of interest, objects of interest can be moved or hidden by hostile forces in an attempt to prevent the successful completion of a surveillance mission by the air vehicle.
[0008] The detection of the presence of an air vehicle performing a mission near a point of interest can lead to an attack on the air vehicle by hostile forces. Such an attack can force the air vehicle to abandon the mission. In the worst case, such an attack can lead to the loss of the air vehicle.
[0009] Current systems and methods for controlling the flight of an air vehicle may provide little, if any, assistance to a pilot who controls the movement of the air vehicle in a way that satisfies the various restrictions for carrying out a mission in an area near a point of interest. Currently, automated navigation systems can be used to control the flight of an air vehicle from one location to another. In the context of conducting a mission in an area near a point of interest, an automated navigation system can be used to automatically control the flight of the air vehicle from a base location to a location near the point of interest. However, current automated navigation systems are not adapted to control the movement of an air vehicle in an area near a point of interest to carry out a surveillance mission or another mission in the area. In particular, current systems and methods for controlling the movement of an air vehicle in an area near a point of interest may not reduce the likelihood that the air vehicle will be detected by hostile forces while carrying out a mission in the area.
[0010] Accordingly, it may be beneficial to have a method and apparatus that take into account one or more of the issues discussed above, as well as possibly other issues.
SUMMARY
[0011] An illustrative embodiment of the present disclosure provides a method for controlling the movement of a vehicle. A current state of the vehicle is identified. The current state comprises a current location of the vehicle. A next state for the vehicle is selected by a processing unit. The next state comprises a location near the vehicle. A value for an attribute of the next state of the vehicle is selected at random. The movement of the vehicle is controlled to move the vehicle from the current state to the next state.
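The summarized control loop — identify the current state, select a nearby next state, randomize one attribute of that state, and move — can be sketched as follows. This is an illustrative sketch only: the state representation, the altitude limits, and the uniform-random candidate selection are assumptions, not details fixed by the disclosure (a value-based selection rule is described later).

```python
import random

# Assumed altitude limits, in meters; the disclosure does not fix these values.
ALTITUDE_LIMITS = (500.0, 1500.0)

def step(current, next_possible_states):
    """One iteration of the summarized method: select a next state from the
    states near the current one, then randomly select a value for one of its
    attributes (here, altitude)."""
    next_state = dict(random.choice(next_possible_states))
    next_state["altitude"] = random.uniform(*ALTITUDE_LIMITS)
    return next_state

def control_movement(start, next_possible_states, num_steps):
    """Repeatedly move the vehicle from its current state to a next state,
    recording the resulting path."""
    path = [start]
    current = start
    for _ in range(num_steps):
        current = step(current, next_possible_states)
        path.append(current)
    return path
```

Each call to `step` both picks a new location and perturbs the altitude at random, so two passes near the same point need not share a motion state.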
[0012] Another illustrative embodiment provides an apparatus comprising a processing unit configured to identify a current state of a vehicle, select a next state for the vehicle, and randomly select a value for an attribute of the next state of the vehicle. The current state comprises a current location of the vehicle. The next state comprises a location near the vehicle.
[0013] Another illustrative embodiment provides another method for controlling the movement of a vehicle. A current state of the vehicle is identified. The current state comprises a current location of the vehicle. Next possible states for the vehicle are identified by a processing unit. State values for the next possible states are identified by the processing unit. The state values are a function of the number of times the vehicle has been in the next possible states. A next state for the vehicle is selected from the next possible states using the state values. The next state comprises a location near the vehicle.
[0014] In the figures and the text, in one aspect, a method is described for controlling movement of a vehicle 200, which includes identifying a current state 238 of vehicle 200, wherein current state 238 comprises a current location of vehicle 200; selecting, by a processing unit 1104, a next state 240 for vehicle 200, wherein next state 240 comprises a location near vehicle 200; randomly selecting a value for an attribute of the next state 240 of vehicle 200; and controlling the movement of vehicle 200 to move vehicle 200 from current state 238 to next state 240.
[0015] Alternatively, the method may include that the attribute of the next state 240 of vehicle 200 is selected from a next location for vehicle 200, an altitude 306 of vehicle 200, a speed 308 of vehicle 200, and an orientation 311 of vehicle 200.
[0016] Alternatively, the method may include that selecting the next state 240 for vehicle 200 includes identifying next possible states for vehicle 200; identifying state values 253 for the next possible states; and selecting the next state 240 from the next possible states using the state values 253.
[0017] Alternatively, the method may include that identifying state values 253 for the next possible states comprises identifying the number of times that vehicle 200 has been in the next possible states.
[0018] Alternatively, the method may include that selecting the next state 240 includes randomly selecting the next state 240 from a number of the next possible states having the best state values 253.
[0019] Alternatively, the method may additionally include changing the state value 253 for the one of the next possible states selected to be the next state 240, in response to the selection of the next state 240.
[0020] Alternatively, the method may additionally include that vehicle 200 is selected from an unmanned vehicle 214, a manned vehicle 212, an air vehicle 206, a land vehicle 208, a marine vehicle 210, and a submarine vehicle 211.
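The selection rule described in the aspects above — score each candidate next state by how many times the vehicle has already been in it, randomly select among the candidates with the best (lowest-visit) state value, and then update the selected state's value — can be illustrated as follows, assuming hashable state descriptors; the function name and data layout are illustrative, not part of the disclosure.

```python
import random

def select_next_state(next_possible_states, visit_counts):
    """Randomly select among the candidate states having the best state
    value, where a state's value is the number of times the vehicle has
    been in that state (fewer visits = better value). The selected state's
    value is then changed in response to the selection."""
    best_value = min(visit_counts.get(s, 0) for s in next_possible_states)
    best_states = [s for s in next_possible_states
                   if visit_counts.get(s, 0) == best_value]
    chosen = random.choice(best_states)
    visit_counts[chosen] = visit_counts.get(chosen, 0) + 1
    return chosen
```

Because ties among equally good candidates are broken at random, repeated calls avoid settling into a fixed cycle even on a small state set.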
[0021] In one aspect, an apparatus is described that includes a processing unit 1104 configured to identify a current state 238 of a vehicle 200, wherein current state 238 comprises a current location of vehicle 200; select a next state 240 for vehicle 200, wherein next state 240 comprises a location near vehicle 200; and randomly select a value for an attribute of the next state 240 of vehicle 200.
[0022] Alternatively, the apparatus may include that the attribute of the next state 240 of vehicle 200 is selected from a next location for vehicle 200, an altitude 306 of vehicle 200, a speed 308 of vehicle 200, and an orientation of vehicle 200.
[0023] Alternatively, the apparatus may include that processing unit 1104 is additionally configured to: identify next possible states for vehicle 200; identify state values 253 for the next possible states; and select the next state 240 from the next possible states using the state values 253.
[0024] Alternatively, the apparatus may include that processing unit 1104 is configured to identify state values 253 for the next possible states by identifying the number of times that vehicle 200 has been in the next possible states.
[0025] Alternatively, the apparatus may include that processing unit 1104 is configured to randomly select the next state 240 from a number of the next possible states having the best state values 253.
[0026] Alternatively, the apparatus may include that processing unit 1104 is additionally configured to change the state value 253 for the one of the next possible states selected to be the next state 240, in response to the selection of the next state 240.
[0027] Alternatively, the apparatus may include that vehicle 200 is selected from an unmanned vehicle 214, a manned vehicle 212, an air vehicle 206, a land vehicle 208, a marine vehicle 210, and a submarine vehicle 211.
[0028] In one aspect, a method is described for controlling movement of a vehicle 200, which includes identifying a current state 238 of vehicle 200, wherein current state 238 comprises a current location of vehicle 200; identifying, by a processing unit 1104, next possible states for vehicle 200; identifying, by the processing unit 1104, state values 253 for the next possible states, wherein the state values 253 are a function of the number of times that vehicle 200 has been in the next possible states; and selecting a next state 240 for vehicle 200 from the next possible states using the state values 253, wherein next state 240 comprises a location near vehicle 200.
[0029] Alternatively, the method may additionally include randomly selecting a value for an attribute of the next state 240 of vehicle 200, wherein the attribute is selected from a next location for vehicle 200, an altitude 306 of vehicle 200, a speed 308 of vehicle 200, and an orientation 311 of vehicle 200.
[0030] Alternatively, the method may additionally include automatically controlling the movement of vehicle 200 to move vehicle 200 from current state 238 to next state 240.
[0031] Alternatively, the method may include that selecting the next state 240 for vehicle 200 includes randomly selecting the next state 240 from a number of the next possible states having the best state values 253.
[0032] Alternatively, the method may additionally include changing the state value 253 for the one of the next possible states selected to be the next state 240, in response to the selection of the next state 240.
[0033] Alternatively, the method may additionally include that vehicle 200 is selected from an unmanned vehicle 214, a manned vehicle 212, an air vehicle 206, a land vehicle 208, a marine vehicle 210, and a submarine vehicle 211.
[0034] The features, functions, and benefits can be achieved independently in various embodiments of the present disclosure or can be combined in yet other embodiments in which further details can be seen with reference to the following description and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] The novel features believed to be characteristic of the illustrative embodiments are set forth in the appended claims. The illustrative embodiments, however, as well as a preferred mode of use, objectives, and further features thereof, will best be understood by reference to the following detailed description of illustrative embodiments of the present disclosure when read in conjunction with the accompanying drawings, wherein:
[0036] Figure 1 is an illustration of the unpredictable navigation of an unmanned aerial vehicle, in accordance with an illustrative embodiment;
[0037] Figure 2 is an illustration of a block diagram of a vehicle and an automatic navigation system, in accordance with an illustrative embodiment;
[0038] Figure 3 is an illustration of a block diagram of a vehicle state, in accordance with an illustrative embodiment;
[0039] Figure 4 is an illustration of a number of possible states for a vehicle, in accordance with an illustrative embodiment;
[0040] Figure 5 is an illustration of a function for identifying the value of a state of a vehicle, in accordance with an illustrative embodiment;
[0041] Figure 6 is an illustration of a flowchart of a process for controlling the state of a vehicle, in accordance with an illustrative embodiment;
[0042] Figure 7 is an illustration of a flowchart of a process for selecting the next state of a vehicle, in accordance with an illustrative embodiment;
[0043] Figure 8 is an illustration of a distribution of selected altitude values for the state of a vehicle, in accordance with an illustrative embodiment;
[0044] Figure 9 is an illustration of a distribution of selected speed values for the state of a vehicle, in accordance with an illustrative embodiment;
[0045] Figure 10 is an illustration of a simulated flight path for an aerial vehicle, in accordance with an illustrative embodiment; and
[0046] Figure 11 is an illustration of a block diagram of a data processing system, in accordance with an illustrative embodiment.
DETAILED DESCRIPTION
[0047] The different illustrative embodiments recognize and take into account a number of different considerations. “A number of,” as used in this document with reference to items, means one or more items. For example, “a number of different considerations” means one or more different considerations.
[0048] The different illustrative embodiments recognize and take into account that an air vehicle can be moved from a base location to a location near a point of interest to carry out a surveillance mission. The air vehicle can be placed on a static flight path around the point of interest after the vehicle reaches a location where surveillance sensors on the air vehicle can see the point of interest. The air vehicle can repeat the same flight path around the point of interest until sufficient information has been collected. The different illustrative embodiments recognize and take into account that an air vehicle that is flown in a repeated pattern around a point of interest is more likely to be detected by hostile forces. The use of a repeated flight path around a point of interest has resulted in the loss of air vehicles to hostile action in the past.
[0049] The different illustrative embodiments also recognize and take into account that a current procedure for reducing the likelihood that an air vehicle flying in an area near a point of interest will be detected by hostile forces is for the pilot of the air vehicle to manually vary the flight path of the vehicle near the point of interest. However, flying an air vehicle on a flight path that varies continuously and does not repeat places an operational load on the pilot and increases pilot fatigue. As a result, it may be difficult for the pilot to maintain a flight path for the air vehicle that does not pass repeatedly through or near the same point several times during the performance of a surveillance mission or another mission in an area near the point of interest.
[0050] The different illustrative embodiments also recognize and take into account that a pilot may also have to manually adjust the flight path of an air vehicle near a point of interest in order to avoid no-fly zones or to satisfy other restrictions. Manually adjusting an air vehicle's flight path to satisfy such restrictions can further increase pilot fatigue.
[0051] The different illustrative embodiments also recognize and take into account that current systems and methods for controlling the flight of an air vehicle with respect to a point of interest may assume that the point of interest is stationary. Current systems and methods for controlling the flight of an air vehicle with respect to a point of interest may not be adapted to adjust the flight path of the air vehicle in response to movement of the point of interest during the performance of a surveillance mission or another mission at the point of interest.
[0052] The different illustrative embodiments provide a system and method for controlling the movement of a vehicle, such as an air vehicle, in an area near a point of interest in a way that reduces the likelihood that the vehicle will be detected by hostile forces while still satisfying various mission restrictions. In accordance with an illustrative embodiment, a vehicle can be controlled to move in an unpredictable pattern in an area near the point of interest. An algorithm can be employed to automatically control the vehicle's movement in a way that minimizes the number of times the vehicle is in the same state during the performance of a mission in the area near the point of interest. For example, an air vehicle that is controlled to fly an unpredictable flight path can mimic natural objects, thereby enabling the air vehicle to hide in natural clutter. In accordance with an illustrative embodiment, the movement of the vehicle in the area near the point of interest can be automatically adjusted in response to movement of the point of interest during the performance of a mission by the vehicle at the point of interest.
[0053] Turning first to Figure 1, an illustration of the unpredictable navigation of an unmanned aerial vehicle is depicted, in accordance with an illustrative embodiment. In this illustrative example, unmanned aerial vehicle 100 is performing a surveillance mission or other mission with respect to point of interest 102. Point of interest 102 can be stationary or moving.
[0054] The movement of unmanned aerial vehicle 100 can be controlled by operator 104. Operator 104 can be a human pilot or other operator at remote location 106. Remote location 106 can be any location that is not on unmanned aerial vehicle 100. Operator 104 may use remote control system 108 at remote location 106 to control the movement of unmanned aerial vehicle 100. Remote control system 108 may be in communication with unmanned aerial vehicle 100 via communications link 109. Communications link 109 may include any communications link appropriate for providing signals between remote control system 108 and unmanned aerial vehicle 100 to control unmanned aerial vehicle 100. For example, without limitation, remote control system 108 can be in communication with unmanned aerial vehicle 100 through communications link 109 that includes satellite 110, or that uses any other appropriate communications link.
[0055] In accordance with an illustrative embodiment, operator 104 can engage and disengage an automatic navigation system to control the movement of unmanned aerial vehicle 100 in an area near point of interest 102. When the automatic navigation system is engaged, unmanned aerial vehicle 100 can be automatically controlled to move along an unpredictable flight path in the area near point of interest 102. When the automatic navigation system is disengaged, operator 104 can manually control the movement of unmanned aerial vehicle 100 via remote control system 108.
[0056] In accordance with an illustrative embodiment, unmanned aerial vehicle 100 can be controlled to follow flight path 114 while performing a mission in the area near point of interest 102. Flight path 114 can be non-repetitive and unpredictable. For example, flight path 114 can be determined by the automatic navigation system in such a way that the number of times that unmanned aerial vehicle 100 passes through the same point, or otherwise has the same state of motion, during the course of carrying out a mission at point of interest 102 is minimized. Unmanned aerial vehicle 100 following flight path 114 is less likely to be detected by hostile forces.
[0057] Flight path 114 for unmanned aerial vehicle 100 may further satisfy various restrictions that may be imposed on the mission that is performed by unmanned aerial vehicle 100 in the area near point of interest 102. For example, such restrictions may require that unmanned aerial vehicle 100 be no farther from point of interest 102 than the distance indicated by dashed line 116 and no closer to point of interest 102 than the distance indicated by dashed line 118 while the mission is performed by unmanned aerial vehicle 100. Accordingly, in this example, flight path 114 for unmanned aerial vehicle 100 can be automatically limited to the area in the range from point of interest 102 between dashed lines 116 and 118.
[0058] In this example, no-fly zone 120 has been identified in the area near point of interest 102. For example, without limitation, no-fly zone 120 may be an area under surveillance by enemy radar systems, an area across an international border, or any other area through which unmanned aerial vehicle 100 must not pass during the performance of the mission at point of interest 102. In accordance with an illustrative embodiment, flight path 114 for unmanned aerial vehicle 100 can be restricted automatically to avoid no-fly zone 120.
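The range band between dashed lines 116 and 118 and a no-fly zone such as no-fly zone 120 can be expressed together as a feasibility test on candidate locations. The sketch below is illustrative only: it models locations in a flat 2-D plane and treats no-fly zones as circles, neither of which the description prescribes.

```python
import math

def satisfies_restrictions(location, point_of_interest, r_min, r_max, no_fly_zones):
    """Return True if a candidate location lies within the permitted range
    band around the point of interest (between r_min and r_max, the
    distances corresponding to dashed lines 118 and 116) and outside every
    no-fly zone, each zone given as a (center, radius) pair."""
    d = math.dist(location, point_of_interest)
    if not (r_min <= d <= r_max):
        return False
    return all(math.dist(location, center) > radius
               for center, radius in no_fly_zones)
```

Candidate next states failing this test would simply be excluded before the next state is selected; because the zones are passed in as data, the test also accommodates zones that are established, changed, or deleted while the vehicle is on station.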
[0059] The various restrictions that can be used to determine flight path 114 for unmanned aerial vehicle 100 can be identified before unmanned aerial vehicle 100 is deployed to the area near point of interest 102. The various restrictions that can be used to determine flight path 114 for unmanned aerial vehicle 100 can also be established or changed during the time that unmanned aerial vehicle 100 is on station carrying out the mission at point of interest 102. In accordance with an illustrative embodiment, flight path 114 for unmanned aerial vehicle 100 can be automatically adjusted to satisfy any such restrictions that may be established or changed during the time that unmanned aerial vehicle 100 is on station performing a mission in the area near point of interest 102.
[0060] For example, such restrictions can be set or changed by operator 104 while unmanned aerial vehicle 100 is on station near point of interest 102. For example, without limitation, operator 104 can establish no-fly zone 120, change the extent of no-fly zone 120, or delete no-fly zone 120 while unmanned aerial vehicle 100 is in flight near point of interest 102.
[0061] As another example, such restrictions may change automatically during the time that unmanned aerial vehicle 100 is on station performing the mission at point of interest 102. For example, without limitation, no-fly zone 120 may change automatically over time. As an example, without limitation, no-fly zone 120 may be associated with a moving weather condition, a moving vehicle, or some other moving or otherwise changing object or condition. In that case, no-fly zone 120 may move or otherwise change in shape or size, or both, as the condition or object associated with no-fly zone 120 moves or otherwise changes during the time that unmanned aerial vehicle 100 is conducting a mission in the area near point of interest 102.
[0062] As will be described in more detail below, in accordance with an illustrative embodiment, flight path 114 can be determined automatically by identifying a current state of unmanned aerial vehicle 100 and then automatically selecting a next state for unmanned aerial vehicle 100 that best satisfies various criteria. The current state of unmanned aerial vehicle 100 may include a current location of unmanned aerial vehicle 100. The next state for unmanned aerial vehicle 100 may include a next location for unmanned aerial vehicle 100. Unmanned aerial vehicle 100 can then be controlled to move from the current state to the next state.
[0063] The criteria that are used to select the next state for unmanned aerial vehicle 100 may include, for example, minimizing the probability that unmanned aerial vehicle 100 is in the same state repeatedly during the performance of a mission at point of interest 102. Meeting these criteria results in flight path 114 for unmanned aerial vehicle 100 being non-repetitive and therefore unpredictable. An unpredictable flight path reduces the likelihood that unmanned aerial vehicle 100 will be detected by hostile forces during the performance of a mission in the area near point of interest 102.
[0064] Other criteria that can be used to select the next state for unmanned aerial vehicle 100 may include criteria related to various mission restrictions. Such other criteria may prevent the next state for unmanned aerial vehicle 100 from being selected as any state that would violate such restrictions.
[0065] In the example shown, unmanned aerial vehicle 100 is in a current state at point 122. The next state for unmanned aerial vehicle 100 is selected to be at point 124. Unmanned aerial vehicle 100 can be controlled automatically to move from point 122 to point 124. When unmanned aerial vehicle 100 moves to point 124, the state of unmanned aerial vehicle 100 at point 124 becomes the current state. The next state for unmanned aerial vehicle 100 can then be selected to be at point 126, and so on, to form flight path 114 for unmanned aerial vehicle 100. Flight path 114 does not pass repeatedly through or close to the same point and is therefore unpredictable. Flight path 114 also satisfies the restrictions of remaining in the area between dashed lines 116 and 118 and not entering no-fly zone 120.
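The point 122 to point 124 to point 126 stepping described above can be sketched as repeated least-visited selection over a neighborhood of candidate points. This is a minimal illustration on an assumed unit grid; the actual candidate generation and restriction handling are not fixed by the description, and `grid_neighbors` is a hypothetical stand-in for the real next-possible-state generator.

```python
import random

def grid_neighbors(point):
    """Candidate next locations: the four adjacent points on an assumed
    unit grid."""
    x, y = point
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def build_flight_path(start, num_steps):
    """Form a non-repetitive path by repeatedly stepping to a randomly
    chosen least-visited neighboring point, so each selected state becomes
    the current state for the following step."""
    visit_counts = {start: 1}
    path = [start]
    current = start
    for _ in range(num_steps):
        candidates = grid_neighbors(current)
        best = min(visit_counts.get(c, 0) for c in candidates)
        current = random.choice(
            [c for c in candidates if visit_counts.get(c, 0) == best])
        visit_counts[current] = visit_counts.get(current, 0) + 1
        path.append(current)
    return path
```

Because unvisited points always score best, the path tends to wander into fresh territory rather than retracing itself, which is the property that makes the resulting flight path hard to predict.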
[0066] Turning now to Figure 2, an illustration of a block diagram of a vehicle and an automatic navigation system is depicted, in accordance with an illustrative embodiment. Vehicle 200 can be any vehicle for carrying out a mission in an area near point of interest 202. In accordance with an illustrative embodiment, automatic navigation system 204 is configured to automatically move vehicle 200 in the area near point of interest 202 in an unpredictable manner that reduces the likelihood that vehicle 200 will be detected by hostile forces while still satisfying any restrictions needed to successfully complete the mission at point of interest 202.
[0067] Vehicle 200 can be air vehicle 206, land vehicle 208, marine vehicle 210, or submarine vehicle 211. Air vehicle 206 can be any vehicle configured to travel through the air. For example, air vehicle 206 may include a fixed-wing, rotary-wing, or lighter-than-air aircraft. Unmanned aerial vehicle 100 in Figure 1 is an example of an implementation of air vehicle 206. Land vehicle 208 can include any vehicle configured to travel on land. Marine vehicle 210 may include any vehicle configured to travel on the surface of the water. Submarine vehicle 211 can include any vehicle configured to travel under the surface of the water.
[0068] Vehicle 200 can be any vehicle configured to travel through any medium or through various combinations of media. For example, without limitation, vehicle 200 may be an amphibious vehicle configured to travel both on land and in water. As another example, vehicle 200 may be an aerospace vehicle configured to travel through air and space.
[0069] Vehicle 200 can also be manned vehicle 212 or unmanned vehicle 214. Manned vehicle 212 can include any vehicle that is controlled by a human operator located on vehicle 200. Unmanned vehicle 214 can include any vehicle that is controlled by an operator who is not located on vehicle 200. Unmanned aerial vehicle 100 in Figure 1 is an example of an implementation of unmanned vehicle 214.
[0070] Vehicle 200 may include multiple systems for operating vehicle 200 to perform a mission in the area near point of interest 202. For example, without limitation, vehicle 200 may include propulsion systems 216, control systems 218, communications systems 220, and mission systems 222.
[0071] Propulsion systems 216 may include any systems configured to move vehicle 200. Propulsion systems 216 may be appropriate for the type of vehicle 200 and the medium in which or through which vehicle 200 is to be moved. For example, without limitation, for air vehicle 206, propulsion systems 216 may include propellers, jet engines, or other systems suitable for moving vehicle 200 through the air.
[0072] Control systems 218 can include any systems for controlling the movement of vehicle 200. For example, without limitation, control systems 218 can be configured to control the direction of movement of vehicle 200, the speed of movement of vehicle 200, the orientation of vehicle 200, or other characteristics or combinations of movement characteristics of vehicle 200. Control systems 218 may be appropriate for the type of vehicle 200 and the various ways in which the movement of vehicle 200 can be controlled. For example, without limitation, for air vehicle 206, control systems 218 may include various movable flight control surfaces such as rudders, flaps, and ailerons.
[0073] In some cases, propulsion systems 216 and control systems 218 can be combined, in whole or in part. For example, without limitation, for a helicopter or other rotary-wing air vehicle, a main rotor can be used to propel the vehicle through the air and can be controlled to control the movement of the vehicle through the air. In this example, the main rotor is part of both propulsion systems 216 and control systems 218 for the rotary-wing air vehicle.
[0074] [0074] Communication systems 220 may include any systems configured to provide voice communication, data communication, or both to and from vehicle 200. Communication systems 220 may include systems configured to provide communications between vehicle 200 and any other vehicle or location. For example, without limitation, communication systems 220 may include systems for providing communications to and from vehicle 200 via satellite communication links or using any other communications devices, media, and protocols, or any combination of communications devices, media, and protocols.
[0075] [0075] Mission systems 222 can include a number of various systems in vehicle 200 that can be used to perform a mission at point of interest 202. For example, mission systems 222 can include surveillance systems 224, weapon systems 226, other mission systems 228, or various combinations of systems for carrying out various missions in an area close to point of interest 202.
[0076] [0076] Surveillance systems 224 can include any systems in vehicle 200 for carrying out surveillance, intelligence, or reconnaissance missions. For example, without limitation, surveillance systems 224 may include imaging systems that can be operated at any frequency to obtain images of point of interest 202. As another example, surveillance systems 224 may include radar systems, sonar surveillance systems, or passive systems for detecting radio frequency or other signals from point of interest 202.
[0077] [0077] Weapon systems 226 can include any systems that can be deployed or operated from vehicle 200 to inflict damage at point of interest 202. Other mission systems 228 may include other systems that may be particularly useful for certain missions at point of interest 202. For example, without limitation, for a search and rescue mission, other mission systems 228 may include communication systems 220 for maintaining radio communications with a downed crew or other group at point of interest 202 until additional help can arrive.
[0078] [0078] The particular mission systems 222 that may be in vehicle 200 may depend on the nature and requirements of the particular mission to be performed. The mission to be performed by vehicle 200 can have multiple objectives. In that case, multiple different types of mission systems 222 can be provided in vehicle 200 for the mission. For example, without limitation, the mission to be performed by vehicle 200 may be to obtain intelligence at point of interest 202 and, depending on the intelligence obtained, to inflict damage at point of interest 202. In this case, mission systems 222 on vehicle 200 can include both surveillance systems 224 and weapon systems 226.
[0079] [0079] The operation of vehicle 200 to perform a mission at a location close to point of interest 202 can be controlled manually by operator 230. For example, operator 230 can manually control propulsion systems 216 and control systems 218 to control the movement of vehicle 200 to and around the area near point of interest 202.
[0080] [0080] Mission systems 222 can operate automatically or in combination with operator 230. Operator 230 can manually control the operation of mission systems 222. For example, without limitation, operator 230 can manually activate and deactivate surveillance systems 224, deploy weapon systems 226, control other mission systems 228, or control various combinations of mission systems 222 in the course of conducting a mission in the area near point of interest 202.
[0081] [0081] In the case where vehicle 200 is unmanned vehicle 214, operator 230 can control the operation of vehicle 200 from remote control system 232. Remote control system 232 can be located in any location other than in vehicle 200. Remote control system 232 can be in communication with vehicle 200 via communication systems 220. Control operations performed by operator 230 in remote control system 232 can generate appropriate control signals for controlling the operation of vehicle 200. These control signals can be provided from remote control system 232 to propulsion systems 216, control systems 218, and mission systems 222 on vehicle 200, as appropriate, through communication systems 220.
[0082] [0082] In the case where vehicle 200 is manned vehicle 212, operator 230 can control the operation of vehicle 200 from onboard vehicle 200. In that case, vehicle 200 can include the appropriate control interfaces onboard vehicle 200 for operator 230 to control propulsion systems 216, control systems 218 and mission systems 222 in a manner appropriate for carrying out a mission in the area near point of interest 202.
[0083] [0083] According to an illustrative embodiment, automatic navigation system 204 can be configured to automatically control the movement of vehicle 200 in an area close to point of interest 202 to carry out a desired mission. Automatic navigation system 204 can be configured to control vehicle 200 to move in an unpredictable manner, so that the movement of vehicle 200 in the area near point of interest 202 is less likely to be detected by hostile forces. Automatic navigation system 204 can be further configured to control the movement of vehicle 200 in an area close to point of interest 202 in a way that satisfies various restrictions associated with the mission being performed.
[0084] [0084] According to an illustrative embodiment, operator 230 can engage automatic navigation 234 or disengage automatic navigation 236. For example, without limitation, signals indicating the decision of operator 230 to engage automatic navigation 234 or to disengage automatic navigation 236 can be provided to automatic navigation system 204 from remote control system 232 via communication systems 220 in vehicle 200. In response to receiving a signal to engage automatic navigation 234, automatic navigation system 204 starts to automatically control the movement of vehicle 200 in the area close to point of interest 202. In response to receiving a signal to disengage automatic navigation 236, automatic navigation system 204 may stop automatically controlling the movement of vehicle 200. Thus, in response to receiving the signal to disengage automatic navigation 236, control of the movement of vehicle 200 can be returned from automatic control to manual control by operator 230.
[0085] [0085] According to an illustrative embodiment, automatic navigation system 204 can control the movement of vehicle 200 by identifying current state 238 of vehicle 200, selecting next state 240 of vehicle 200 using an appropriate algorithm, and then automatically controlling the movement of vehicle 200 from current state 238 to next state 240. Current state 238 and next state 240 are examples of states 241 of vehicle 200. States 241 of vehicle 200 are defined by values of various attributes that describe vehicle 200 as vehicle 200 moves in the area near point of interest 202 to perform a mission. For example, without limitation, states 241 may include locations of vehicle 200 in the area near point of interest 202. Other attributes that may define states 241 of vehicle 200 may include, without limitation, speeds of vehicle 200, altitudes of vehicle 200, orientations of vehicle 200, or other attributes or various combinations of such attributes. In particular, attributes of vehicle 200 that can be related to the detectability of vehicle 200 as vehicle 200 moves in the area near point of interest 202 can be used to define states 241 of vehicle 200.
[0086] [0086] Current state 238 of vehicle 200 can be identified using state identification systems 242. State identification systems 242 can include any devices or systems for identifying the current values of various attributes of vehicle 200 as vehicle 200 moves in the area near point of interest 202. For example, without limitation, state identification systems 242 may include a satellite-based global positioning system or another system for identifying the current location of vehicle 200. State identification systems 242 may also include an altimeter for identifying the current altitude of vehicle 200. In addition, state identification systems 242 may include other systems and devices for identifying the current values of other attributes of vehicle 200 that define current state 238.
[0087] [0087] The values of the various attributes that define current state 238 of vehicle 200 can be provided from state identification systems 242 to automatic navigation system 204 for use in selecting next state 240 of vehicle 200. Some or all of the information identifying current state 238 of vehicle 200 can also be provided from state identification systems 242 to operator 230. For example, without limitation, such information can be provided from state identification systems 242 to remote control system 232 by means of communication systems 220. Such information can then be displayed or otherwise presented to operator 230 in remote control system 232.
[0088] [0088] Automatic navigation system 204 may include next state selector 244. Next state selector 244 is configured to identify next state 240 for vehicle 200. According to an illustrative embodiment, next state selector 244 is configured to select next state 240 from possible states 246 based on current state 238 of vehicle 200, as identified by state identification systems 242, and on next state restrictions 248.
[0089] [0089] Possible states 246 are states to which vehicle 200 can be moved. Possible states 246 can be defined by various attributes 250 of vehicle 200 and various possible values 252 for attributes 250. For example, without limitation, attributes 250 can include the location, speed, and altitude of vehicle 200. In this case, values 252 may include values that indicate various possible locations of vehicle 200 in the area near point of interest 202 and various possible speeds and altitudes for vehicle 200 in those locations. Possible states 246 can include other or additional attributes 250 for vehicle 200. Values 252 for attributes 250 of possible states 246 can be ranges of values.
[0090] [0090] Possible states 246 can be further defined by state values 253. State values 253 are values associated with each of possible states 246. State values 253 can indicate the desirability of selecting individual possible states 246 to be next state 240. As will be discussed in more detail below, state values 253 can be determined based on weights associated with attributes 250 of possible states 246. The weights can indicate the amount of time since vehicle 200 was in each of possible states 246 and reflect the number of times that vehicle 200 has had certain values 252 of attributes 250 associated with those possible states 246.
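As a concrete illustration of how one of possible states 246 might be represented, the following Python sketch pairs ranged attribute values with a state value and per-attribute weights. All names, units, and the choice of data structure are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class PossibleState:
    # Attribute values given as (low, high) ranges, as described for
    # values 252 of attributes 250 (units are illustrative assumptions).
    location: tuple   # ((lat_lo, lat_hi), (lon_lo, lon_hi))
    speed: tuple      # (lo, hi) in knots
    altitude: tuple   # (lo, hi) in feet
    # Desirability score (state values 253); here, lower is better.
    state_value: float = 0.0
    # Per-attribute weights used to recompute state_value after a visit.
    weights: dict = field(default_factory=dict)
```

A new state starts with a neutral value and empty weights, which are then adjusted each time the state is visited.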
[0091] [0091] According to an illustrative embodiment, next state selector 244 can identify next possible states for vehicle 200 from possible states 246. Such next possible states can be identified as the ones of possible states 246 to which vehicle 200 can be moved from current state 238, given the various next state restrictions 248. Next state restrictions 248 can include various restrictions on the movement of vehicle 200 due to the operational capabilities of vehicle 200 and as may be required for the performance of a particular mission in the area near point of interest 202.
[0092] [0092] Next state restrictions 248 may include requirements for maintaining vehicle 200 within a certain distance range of, or in another relationship with respect to, point of interest 202. Point of interest 202 may be immobile 262 or mobile 264. Point of interest location detector 266 can be used to identify the current location of point of interest 202. The location of point of interest 202 as identified by point of interest location detector 266 can be provided to next state selector 244 for use in selecting next state 240 in a way that satisfies the next state restrictions 248 related to the current position of point of interest 202. The location of point of interest 202 identified by point of interest location detector 266 can also be provided to operator 230. For example, the location of point of interest 202 can be provided to operator 230 in remote control system 232 via communication systems 220.
[0093] [0093] Next state restrictions 248 can be established before vehicle 200 undertakes a mission in an area near point of interest 202. Some or all of next state restrictions 248 can be changed while vehicle 200 is in operation in an area close to point of interest 202. For example, without limitation, next state restrictions 248 can be changed by operator 230 during operation of vehicle 200 in the area close to point of interest 202. Such changes to next state restrictions 248 can be provided by operator 230 from remote control system 232 to automatic navigation system 204 via communication systems 220. For example, without limitation, next state restrictions 248 may include a no-fly zone through which vehicle 200 must not be moved while carrying out a mission. In that case, the no-fly zone can be established, changed, or canceled by operator 230 while vehicle 200 is in the area performing a mission close to point of interest 202.
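One simple way to sketch next state restrictions 248 of this kind is as a predicate over a candidate state's location. Planar coordinates, circular no-fly zones, and all names here are illustrative assumptions, not details taken from the patent:

```python
def satisfies_restrictions(location, poi_location, max_range, no_fly_zones):
    # A candidate location must stay within max_range of the point of
    # interest and outside every no-fly zone, each given as (center, radius).
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    if dist(location, poi_location) > max_range:
        return False
    return all(dist(location, center) > radius
               for center, radius in no_fly_zones)
```

Because the zones are passed in as data, an operator-issued change to the restrictions amounts to calling the predicate with an updated list.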
[0094] [0094] As another example, some or all of next state restrictions 248 can be changed automatically while vehicle 200 is in operation in an area close to point of interest 202. For example, without limitation, next state restrictions 248 can include a no-fly zone that is associated with a bad weather area or another moving or otherwise changing condition that can be tracked automatically. In that case, the no-fly zone can be changed automatically while vehicle 200 is in operation in response to detected changes in the underlying condition on which the no-fly zone is based.
[0095] [0095] Next state selector 244 can identify one or more of possible states 246 that satisfy next state restrictions 248 and that have the best state values 253. In this case, the best state values 253 indicate the possible states 246 that are the best selections for next state 240. A list that identifies one or more of possible states 246 that satisfy next state restrictions 248 and that share the best state value 253 can be temporarily saved by next state selector 244 as list of best next states 254. The state value 253 of the possible states 246 included in list of best next states 254 can be temporarily stored by next state selector 244 as best next state value 256.
[0096] [0096] If list of best next states 254 identifies only one of possible states 246 with best next state value 256, that one of possible states 246 is selected by next state selector 244 to be next state 240. However, if list of best next states 254 identifies more than one of possible states 246 with the same best next state value 256, next state selector 244 can randomly select one of the possible states 246 identified in list of best next states 254 to be next state 240. Next state selector 244 can include randomizer 258 to randomly select one of the possible states 246 identified in list of best next states 254 to be next state 240. For example, without limitation, randomizer 258 can include a pseudorandom number generator.
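The tie-breaking behavior of randomizer 258 can be sketched as follows. This is a minimal illustration; the function and variable names are assumptions:

```python
import random

def select_next_state(best_next_states, rng=random):
    # If only one state carries the best next state value it is chosen
    # directly; otherwise the tie is broken at random (randomizer 258).
    if len(best_next_states) == 1:
        return best_next_states[0]
    return rng.choice(best_next_states)
```

It is the random tie-break among equally rated states that makes the resulting path hard to predict from outside.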
[0097] [0097] Values 252 for attributes 250 of possible states 246 can be identified as ranges of values. According to an illustrative embodiment, next state selector 244 can randomly select specific values for the attributes of next state 240 from the ranges of values 252 provided for attributes 250 of the one of possible states 246 selected to be next state 240. For example, without limitation, next state selector 244 can use randomizer 258 to select specific values for the attributes of next state 240 from the ranges of values 252.
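Realizing specific attribute values from ranges of values 252 might look like the sketch below. Uniform sampling is an assumption; the patent only requires random selection from the range:

```python
import random

def realize_attributes(attribute_ranges, rng=random):
    # Pick one specific value at random from each attribute's range,
    # e.g. {"altitude": (900, 1100)} -> {"altitude": 1012.4}.
    return {name: rng.uniform(low, high)
            for name, (low, high) in attribute_ranges.items()}
```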
[0098] [0098] After next state 240 is selected, appropriate control signals can be generated and supplied from automatic navigation system 204 to propulsion systems 216 and control systems 218 to automatically control the movement of vehicle 200 from current state 238 to next state 240. Alternatively, next state 240 can be provided as a recommendation from automatic navigation system 204 to operator 230. For example, without limitation, information identifying the values of the various attributes of next state 240 can be provided from automatic navigation system 204 to operator 230 by means of communication systems 220. In that case, operator 230 can manually control the movement of vehicle 200 from current state 238 to next state 240.
[0099] [0099] After next state 240 is identified, next state selector 244 can trigger state value updater 260 to update the one of state values 253 associated with the one of possible states 246 that has been selected to be next state 240. For example, state value updater 260 can be configured to update the one of state values 253 by changing the weights associated with the various attributes 250 of the one of possible states 246 that has been selected to be next state 240. In any case, the one of state values 253 associated with the one of possible states 246 that has been selected to be next state 240 can be updated in such a way that the one of possible states 246 that was selected to be next state 240 is less likely to be selected again by next state selector 244 to be next state 240. Therefore, state value updater 260 can update state values 253 of possible states 246 in such a way that automatic navigation system 204 controls the movement of vehicle 200 in a non-repetitive pattern.
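A minimal sketch of state value updater 260 follows, assuming that a higher state value means a less desirable state and that the update simply increments the weights of the attribute values just used. The increment amount and the dictionary layout are assumptions:

```python
def update_state_value(state, selected_attributes, increment=1.0):
    # Raise the weight of each attribute value that was just selected,
    # then recompute the state's value as the sum of its weights, so the
    # state becomes less likely to be chosen again (state values 253).
    for name in selected_attributes:
        state["weights"][name] = state["weights"].get(name, 0.0) + increment
    state["value"] = sum(state["weights"].values())
    return state["value"]
```

Repeated visits steadily worsen a state's value, which is what drives the selector toward states the vehicle has not occupied recently.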
[0100] [00100] Automatic navigation system 204 can be engaged and disengaged several times while vehicle 200 is carrying out a mission in an area close to point of interest 202. According to an illustrative embodiment, automatic navigation system 204 can control the movement of vehicle 200 after automatic navigation system 204 is re-engaged in such a way that any part of the movement pattern of vehicle 200 as controlled by automatic navigation system 204 before automatic navigation system 204 was disengaged is not likely to be repeated. For example, without limitation, for any particular mission in the area near point of interest 202, state values 253 that are determined after operator 230 takes action to engage automatic navigation 234 can be maintained at their current values when operator 230 takes action to disengage automatic navigation 236. The state values 253 that are maintained at their current values in this way can be used again by next state selector 244 and updated by state value updater 260, as described above, when operator 230 takes action to engage automatic navigation 234 again. As another example, operator 230 may be able to restore state values 253 of possible states 246 to initial values at any time.
[0101] [00101] The illustration in Figure 2 does not imply physical or architectural limitations on the way in which the different illustrative embodiments can be implemented. Components in addition to, in place of, or both in addition to and in place of those illustrated can be used. Some components may be unnecessary in some illustrative embodiments. In addition, the blocks are presented to illustrate some functional components. One or more of these blocks can be combined or divided into different blocks when deployed in different illustrative embodiments.
[0102] [00102] For example, various functions of automatic navigation system 204 and remote control system 232, as described in this document, can be implemented in various data processing systems. Such data processing systems can be located on vehicle 200, located off vehicle 200 but in communication with vehicle 200 via communication systems 220, or both. The various functions of automatic navigation system 204 and remote control system 232, as described in this document, can therefore be performed on vehicle 200, off vehicle 200, or both on vehicle 200 and off vehicle 200 in various combinations.
[0103] [00103] Turning now to Figure 3, an illustration of a block diagram of a state of a vehicle is depicted, in accordance with an illustrative embodiment. In this example, state 300 is an example of one of possible states 246 in Figure 2.
[0104] [00104] State 300 can be defined by a number of attributes 302. Attributes 302 can include various characteristics of a vehicle that can be related to the movement of the vehicle in an area close to a point of interest. For example, without limitation, attributes 302 can include location 304, altitude 306, speed 308, visit time 310, orientation 311, radar cross section 312, IR signature 314, acoustic signature 316, visual profile 318, other attributes 320, or various combinations of attributes.
[0105] [00105] Location 304 may be expressed in terms of latitude 322 and longitude 324, or in another appropriate manner. According to another example, location 304 can be represented as a displacement, for example, the distance and direction of the vehicle from the point of interest. Visit time 310 may include an indication of the last time that state 300 was the current state of the vehicle. Orientation 311 can refer to the orientation of the vehicle in two or three dimensional space. For example, without limitation, for an air vehicle, orientation 311 may include pitch, roll, and yaw. Radar cross section 312, IR signature 314, acoustic signature 316, visual profile 318, and other attributes 320 can be defined relative to an observer at or near the point of interest or in another location.
[0106] [00106] Some of attributes 302 can be functions of others of attributes 302. For example, without limitation, radar cross section 312, IR signature 314, acoustic signature 316, visual profile 318, and other attributes 320 can be functions of orientation 311 or of other attributes 302 or various combinations of attributes 302.
[0107] [00107] Attribute values 325 can be specified for each of attributes 302 associated with state 300. Attribute values 325 can be specific values, can be expressed as value ranges 326, or both. For example, without limitation, location 304 can be defined by a value range 326 that defines an area around a specific point location.
[0108] [00108] Attribute weights 327 can also be associated with attributes 302. Attribute weights 327 can indicate how many times each of attribute values 325 has been selected for attributes 302 of state 300 when state 300 is selected to be the next state for a vehicle. Attribute weights 327 can be updated each time state 300 is selected to be the next state for the vehicle.
[0109] [00109] Attribute weights 327 can be used to identify state value 328 for state 300. State value 328 indicates the desirability of selecting state 300 to be the next state for a vehicle. For example, without limitation, state value 328 can be changed when state 300 is selected to be the next state for the vehicle in a way that reduces the likelihood that state 300 will be selected again to be the next state for the vehicle. State value 328 can be used by an automatic navigation system, in accordance with an illustrative embodiment, to select the next state for the vehicle in such a way that the vehicle can be moved automatically in a non-repetitive pattern in an area close to a location of interest.
[0110] [00110] Turning now to Figure 4, an illustration of a number of possible states for a vehicle is depicted, in accordance with an illustrative embodiment. In this example, possible states 400 are an example of an implementation of possible states 246 in Figure 2.
[0111] [00111] In this example, possible states 400 represent possible locations for a vehicle in an area close to a point of interest. Possible states 400 can be identified by dividing the area near the point of interest into a two-dimensional grid. The size of each segment of the grid can be selected, for example, based on the operational capabilities of the vehicle. Each segment of the grid, therefore, represents a possible state for the vehicle in the area near the point of interest.
[0112] [00112] In this example, the attribute values for the location attributes of possible states 400 are a range of values around a location at the center of each segment of the grid. For example, shaded area 404 around center point 402 defines the attribute value for the location attribute of one of possible states 400. Therefore, when that one of possible states 400 is selected to be the next state for the vehicle, the specific value of the location attribute for the next state to which the vehicle will be moved can be selected at random from any location within shaded area 404.
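The random selection of a point within a grid segment might be sketched as follows, assuming square segments on a flat grid; the patent does not fix the segment shape, and the names are illustrative:

```python
import random

def random_location_in_segment(center, segment_size, rng=random):
    # Choose a point uniformly within the grid segment (shaded area 404)
    # surrounding its center (center point 402).
    x, y = center
    half = segment_size / 2.0
    return (x + rng.uniform(-half, half), y + rng.uniform(-half, half))
```

Even when the same segment is chosen twice, the realized waypoint differs, which further reduces the repeatability of the path.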
[0113] [00113] Turning now to Figure 5, an illustration of a function for identifying the value of a state of a vehicle is depicted, in accordance with an illustrative embodiment. In this example, function 500 is an example of an implementation of a function to identify state values 253 of possible states 246 in Figure 2.
[0114] [00114] According to an illustrative embodiment, the value of a state is used to identify the desirability of selecting that state as the next state for a vehicle. In this example, the value of a state can be a function of weights that are assigned to the various attributes of the state. For example, the state value can be determined as function 500 of sum 502 of weights 504, 506, and 508 associated with the altitude attributes of the state, sum 510 of weights 512, 514, and 516 associated with the speed attributes of the state, Dirac delta function 526 of weights 520, 522, and 524 associated with the time since the last time the vehicle was in the state, and functions 528 of weights associated with other attributes of the state. Weights 504, 506, 508, 512, 514, 516, 520, 522, and 524 can be selected and changed in such a way that the state value, as determined by function 500, changes each time the state is selected to be the next state for the vehicle, to indicate that the state is less desirable to be selected again to be the next state for the vehicle. Function 500 can be a function of more, fewer, or different weights and functions of those weights than those shown by way of example in Figure 5 and described by way of example in this document.
[0115] [00115] For example, without limitation, a lower state value may indicate that a state is more desirable to be selected as the next state for a vehicle. In this example, the values of the appropriate ones of weights 504, 506, 508, 512, 514, and 516 can be increased by a selected amount when the corresponding attributes associated with those weights are selected as attributes for the next state of the vehicle.
[0116] [00116] In this example, the value of Dirac delta function 526 can be the value of one of weights 520, 522, and 524 associated with the time since the last time the vehicle was in the state. Weight 520 can be given a first value and associated with a first range of times. For example, without limitation, weight 520 can be given a value of 5 or another appropriate value and can be associated with a range of times from approximately 1 to 10 seconds or another range of times. Weight 522 can be given a second value that is less than the first value of weight 520 and can be associated with a second range of times that is greater than the first range of times associated with weight 520. For example, without limitation, weight 522 can be given a value of 3 or another appropriate value and can be associated with a range of times from approximately 10 to 60 seconds or another range of times. Weight 524 can be given a third value that is less than the second value of weight 522 and can be associated with a third range of times that is greater than the second range of times associated with weight 522. For example, without limitation, weight 524 can be given a value of 1 or another appropriate value and associated with the range of times greater than 60 seconds or another range of times. In this case, if the time since the last time the vehicle was in the state is 11 seconds, then the value for Dirac delta function 526 used to determine the value of function 500 for the current time will be 3, the value of weight 522.
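Using the example weight values above (5, 3, and 1), the time-dependent term of function 500 can be sketched as a step function, and function 500 itself as a sum of weight terms. Boundary handling and the set of terms are assumptions; the patent gives only approximate ranges and an open-ended list of terms:

```python
def time_weight(seconds_since_visit):
    # Example values from the text: 5 for roughly 1-10 s since the last
    # visit, 3 for roughly 10-60 s, and 1 beyond 60 s.
    if seconds_since_visit <= 10:
        return 5
    if seconds_since_visit <= 60:
        return 3
    return 1

def state_value(altitude_weights, speed_weights, seconds_since_visit):
    # Function 500 sketched as the sum of the altitude weights, the
    # speed weights, and the time-based weight; higher means less desirable.
    return (sum(altitude_weights) + sum(speed_weights)
            + time_weight(seconds_since_visit))
```

A state visited 11 seconds ago thus contributes 3 through the time term, matching the worked example in the text.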
[0117] [00117] By selecting and adjusting weights 504, 506, 508, 512, 514, 516, 520, 522, and 524 in the manner described, the state value identified by function 500 is increased each time the state is selected to be the next state for the vehicle, thereby making it less likely that the state will be selected again to be the next state for the vehicle. Therefore, the state values identified using function 500 can be used to control the movement of a vehicle in an area close to a point of interest in an unpredictable manner in which the probability that the vehicle will be controlled to move into the same state repeatedly is reduced.
[0118] [00118] Turning now to Figure 6, an illustration of a flowchart of a process for controlling the state of a vehicle is depicted, in accordance with an illustrative embodiment. The process of Figure 6 can be implemented, for example, in automatic navigation system 204 in Figure 2.
[0119] [00119] The process begins by identifying the current state of a vehicle (operation 602). For example, operation 602 may include identifying the current location and other attributes of the vehicle. All next possible states for the vehicle are then identified (operation 604). For example, operation 604 may include identifying all of the next possible states for the vehicle from among all possible states for the vehicle. The next possible states may be a subset of all possible states to which the vehicle can be moved from the current state and that satisfy any next state restrictions.
[0120] [00120] A next state for the vehicle is selected from among the identified next possible states (operation 606). Operation 606 may include the use of state values for the next possible states that indicate the desirability of selecting the next possible states to be the next state for the vehicle. The attribute values for the state selected to be the next state can be provided as ranges of values. In this case, the specific attribute values for the next state can be selected at random from the provided value ranges (operation 608). The state value of the state selected to be the next state can then be updated (operation 610). For example, operation 610 may include changing the weights associated with the various attributes of the state selected to be the next state. The vehicle can then be automatically controlled to move from the current state to the selected next state (operation 612), with the process terminating thereafter.
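Operations 606 through 612 can be sketched as a single control step. All names and the dictionary representation of a state are illustrative assumptions, and a lower state value is assumed to be better:

```python
import random

def control_step(next_possible_states, rng=random):
    # Pick the best-valued reachable state, breaking ties at random,
    # realize its attribute ranges, and penalize it so it is less
    # likely to be chosen again.
    best = min(s["value"] for s in next_possible_states)
    ties = [s for s in next_possible_states if s["value"] == best]
    chosen = rng.choice(ties)                      # operation 606
    attributes = {name: rng.uniform(lo, hi)        # operation 608
                  for name, (lo, hi) in chosen["ranges"].items()}
    chosen["value"] += 1.0                         # operation 610
    return chosen, attributes    # move the vehicle: operation 612
```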
[0121] [00121] Turning now to Figure 7, an illustration of a flowchart of a process for selecting the next state of a vehicle is depicted, in accordance with an illustrative embodiment. In this example, the process illustrated in Figure 7 is an example of a process for performing operation 606 in Figure 6.
[0122] [00122] The process begins by selecting a next possible state from all of the next possible states that have been identified (operation 700). It can then be determined whether the next possible state is a dead end (operation 702). Operation 702 may include a look-ahead operation to determine whether the next possible state under consideration is a next state for the vehicle from which the vehicle cannot be moved to any subsequent state. Operation 702 may include looking one or more steps into the future to determine whether the next possible state under consideration may ultimately result in a dead end state for the vehicle from which the vehicle cannot be moved to any subsequent state.
[0123] [00123] If the next possible state under consideration is not a dead end, it can then be determined whether it satisfies the various next-state restrictions (operation 704). If it is determined that the next possible state under consideration satisfies the next-state restrictions, the state value of that state is identified (operation 706). The state value may indicate the desirability of selecting the next possible state under consideration as the next state for the vehicle.
[0124] [00124] It is then determined whether the identified state value of the next possible state under consideration is better than the current best state value (operation 708). If so, the best state value is reset to the identified value of the next possible state under consideration (operation 710), the list of best next states is cleared (operation 712), and the next possible state under consideration is added to the list of best next states (operation 714). It can then be determined whether there are any more next possible states to be considered (operation 716).
[0125] [00125] Again with reference to operation 708, if it is determined that the identified state value of the next possible state under consideration is not better than the current best value, it can be determined whether that state value is equal to the current best state value (operation 724). If it is, the process proceeds to operation 714, in which the next possible state under consideration is added to the list of best next states. It can then be determined whether there are any more next possible states to be considered (operation 716).
[0126] [00126] Returning to operations 702, 704, and 724, if it is determined that the next possible state under consideration is a dead end, does not satisfy the next-state restrictions, or has a state value that is neither better than nor equal to the current best state value, that state cannot be selected as the next state for the vehicle. In such cases, the process proceeds directly to operation 716 to determine whether there are any more next possible states to consider.
[0127] [00127] If it is determined in operation 716 that there are more next possible states to be considered, the process returns to operation 700 and another of the next possible states is selected for consideration. Otherwise, if it is determined that there are no more next possible states to be considered, it can be determined whether there is more than one state in the list of best next states (operation 718). If there is not more than one state in the list of best next states, the only state in the list is selected as the next state (operation 720), and the process ends thereafter. Again with reference to operation 718, if it is determined that there is more than one state in the list of best next states, the next state for the vehicle can be selected at random from among the states in the list of best next states (operation 722), and the process ends thereafter.
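Taken together, operations 700 through 724 amount to a single scan that keeps a running best value and a list of tied candidates, with a random tie-break at the end. The sketch below assumes simple callables for the dead-end test, the restrictions, and the state-value lookup; those interfaces are illustrative, not from the patent.

```python
import random

def select_next_state(possible_states, state_value, is_dead_end, restrictions):
    """Figure 7 sketch: scan every next possible state, track the best
    state value seen so far, and break ties at random (operation 722)."""
    best_value = None
    best_states = []                      # the list of best next states
    for state in possible_states:         # operations 700 / 716
        if is_dead_end(state):            # operation 702
            continue
        if not all(ok(state) for ok in restrictions):   # operation 704
            continue
        value = state_value(state)        # operation 706
        if best_value is None or value > best_value:    # operation 708
            best_value = value            # operation 710
            best_states = [state]         # operations 712 and 714
        elif value == best_value:         # operation 724
            best_states.append(state)     # operation 714
    if len(best_states) == 1:
        return best_states[0]             # operation 720
    return random.choice(best_states)     # operation 722
```

Because equally valued candidates are chosen from at random, repeated runs over the same inputs can yield different next states, which is the source of the unpredictability described above.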
[0128] [00128] Turning now to Figure 8, an illustration of a distribution of altitude values selected for the state of a vehicle is shown, in accordance with an illustrative embodiment. The illustration in Figure 8 shows the number of times various altitudes were selected for an air vehicle in a particular state by a simulation of an automatic navigation system, in accordance with an illustrative embodiment. In this example, the attribute value for the altitude attribute of the vehicle state is provided as a range of altitudes from 2134 m to 2438 m (7,000 ft to 8,000 ft), in approximately 1,000 increments of 0.3048 m (1 ft). The particular altitude for the vehicle in the state is selected at random from the provided altitude range. As a result, the specific altitude of the vehicle in the state is unpredictable.
[0129] [00129] Turning now to Figure 9, an illustration of a distribution of speed values selected for the state of a vehicle is shown, in accordance with an illustrative embodiment. The illustration in Figure 9 shows the number of times various speeds were selected for an air vehicle in a particular state by a simulation of an automatic navigation system, in accordance with an illustrative embodiment. In this example, the attribute value for the speed attribute of the vehicle state is provided as a range of speeds from 102 km/h to 120 km/h (55 to 65 knots), in increments of 1.85 km/h (1 knot). The particular speed for the vehicle in the state is selected at random from the provided speed range. As a result, the specific speed of the vehicle in the state is unpredictable.
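Both distributions can be reproduced by drawing from a range discretized in fixed increments, as in the Figures 8 and 9 paragraphs above (1 ft steps for altitude, 1 knot steps for speed). The helper below is a minimal sketch; the function name and the uniform distribution over increments are assumptions, since the patent only states that the value is "selected at random" from the range.

```python
import random

def draw_attribute(low, high, step):
    """Draw one attribute value at random from a range discretized in
    fixed increments (uniform over the increments -- an assumption)."""
    n_steps = int(round((high - low) / step))
    return low + step * random.randint(0, n_steps)

# Altitude: 7,000 ft to 8,000 ft in 1 ft increments (Figure 8).
altitude = draw_attribute(7000, 8000, 1)

# Speed: 55 to 65 knots in 1 knot increments (Figure 9).
speed = draw_attribute(55, 65, 1)
```

Repeated draws stay inside the mission's altitude and speed restrictions while the specific value on any given step remains unpredictable.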
[0130] [00130] Turning now to Figure 10, an illustration of a simulated flight path for an aerial vehicle is shown, in accordance with an illustrative embodiment. In this example, flight path 1000 illustrates a flight path for an air vehicle performing a mission in an area close to a point of interest, as generated by a simulation of an automatic navigation system in accordance with an illustrative embodiment.
[0131] [00131] It can be seen that the flight path 1000 is confined within the altitude and location restrictions that can be defined for the particular mission that is performed by the air vehicle. However, within these restrictions, the flight path is variable and unpredictable. An aircraft that follows flight path 1000 is less likely to be detected by hostile forces than an aircraft that moves on a more predictable and repetitive flight path around the point of interest.
[0132] [00132] Turning now to Figure 11, an illustration of a block diagram of a data processing system is shown, in accordance with an illustrative embodiment. In this example, data processing system 1100 is an example of a data processing system that may be used to implement automatic navigation system 204 in Figure 2.
[0133] [00133] In this illustrative example, data processing system 1100 includes communications structure 1102. Communications structure 1102 provides communications between processing unit 1104, memory 1106, persistent storage 1108, communications unit 1110, input/output (I/O) unit 1112, and display 1114. Memory 1106, persistent storage 1108, communications unit 1110, input/output (I/O) unit 1112, and display 1114 are examples of resources accessible by processing unit 1104 by means of communications structure 1102.
[0134] [00134] Processing unit 1104 serves to execute instructions for software that may be loaded into memory 1106. Processing unit 1104 may be a number of processors, a multi-processor core, or some other type of processor, depending on the particular implementation. In addition, processing unit 1104 may be implemented using a number of heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processing unit 1104 may be a symmetric multi-processor system containing multiple processors of the same type.
[0135] [00135] Memory 1106 and persistent storage 1108 are examples of storage devices 1116. A storage device is any piece of hardware that has the capacity to store information, such as, for example, without limitation, data, program code in functional form and other appropriate information on a temporary or a permanent basis. Storage devices 1116 can also be referred to as computer-readable storage devices in these examples. Memory 1106, in these examples, can be, for example, a random access memory or any other suitable volatile or non-volatile storage device. Persistent storage 1108 can take many forms, depending on the particular deployment.
[0136] [00136] For example, persistent storage 1108 may contain one or more components or devices. For example, persistent storage 1108 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 1108 may also be removable. For example, a removable hard drive may be used for persistent storage 1108.
[0137] [00137] The communications unit 1110, in these examples, provides communications with other data processing systems or devices. In these examples, the communications unit 1110 is a network interface card. The communications unit 1110 can provide communications through the use of either or both wireless and physical communications links.
[0138] [00138] Input/output unit 1112 allows for the input and output of data with other devices that may be connected to data processing system 1100. For example, input/output unit 1112 may provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device. Further, input/output unit 1112 may send output to a printer. Display 1114 provides a mechanism to display information to a user.
[0139] [00139] Instructions for the operating system, applications, and/or programs may be located in storage devices 1116, which are in communication with processing unit 1104 through communications structure 1102. In these illustrative examples, the instructions are in a functional form on persistent storage 1108. These instructions may be loaded into memory 1106 for execution by processing unit 1104. The processes of the different embodiments may be performed by processing unit 1104 using computer-implemented instructions, which may be located in a memory, such as memory 1106.
[0140] [00140] These instructions are referred to as program instructions, program code, computer-usable program code, or computer-readable program code that may be read and executed by a processor in processing unit 1104. The program code in the different embodiments may be embodied on different physical or computer-readable storage media, such as memory 1106 or persistent storage 1108.
[0141] [00141] Program code 1118 is located in a functional form in the computer-readable medium 1120 which is selectively removable and can be loaded into or transferred to the data processing system 1100 for execution by the processing unit 1104. The program code 1118 and computer-readable medium 1120 form computer program product 1122 in these examples. In one example, computer-readable medium 1120 can be computer-readable storage medium 1124 or computer-readable signal medium 1126.
[0142] [00142] Computer-readable storage medium 1124 may include, for example, an optical or magnetic disk that is inserted or placed into a drive or other device that is part of persistent storage 1108 for transfer onto a storage device, such as a hard drive, that is part of persistent storage 1108. Computer-readable storage medium 1124 also may take the form of a persistent storage, such as a hard drive, a flash drive, or a flash memory, that is connected to data processing system 1100. In some instances, computer-readable storage medium 1124 may not be removable from data processing system 1100.
[0143] [00143] In these examples, computer-readable storage medium 1124 is a physical or tangible storage device used to store program code 1118, rather than a medium that propagates or transmits program code 1118. Computer-readable storage medium 1124 is also referred to as a computer-readable tangible storage device or a computer-readable physical storage device. In other words, computer-readable storage medium 1124 is a medium that can be touched by a person.
[0144] [00144] Alternatively, program code 1118 may be transferred to data processing system 1100 using computer-readable signal medium 1126. Computer-readable signal medium 1126 may be, for example, a propagated data signal containing program code 1118. For example, computer-readable signal medium 1126 may be an electromagnetic signal, an optical signal, and/or any other suitable type of signal. These signals may be transmitted over communications links, such as wireless communications links, optical fiber cable, coaxial cable, a wire, and/or any other suitable type of communications link. In other words, the communications link and/or the connection may be physical or wireless in the illustrative examples.
[0145] [00145] In some illustrative embodiments, program code 1118 may be downloaded over a network to persistent storage 1108 from another device or data processing system through computer-readable signal medium 1126 for use within data processing system 1100. For instance, program code stored in a computer-readable storage medium in a server data processing system may be downloaded over a network from the server to data processing system 1100. The data processing system providing program code 1118 may be a server computer, a client computer, or some other device capable of storing and transmitting program code 1118.
[0146] [00146] The different components illustrated for data processing system 1100 are not meant to provide architectural limitations on the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a data processing system including components in addition to and/or in place of those illustrated for data processing system 1100. Other components shown in Figure 11 can be varied from the illustrative examples shown. The different embodiments may be implemented using any hardware device or system capable of running program code. As one example, data processing system 1100 may include organic components integrated with inorganic components and/or may be comprised entirely of organic components, excluding a human being. For example, a storage device may be comprised of an organic semiconductor.
[0147] [00147] In another illustrative example, processing unit 1104 may take the form of a hardware unit that has circuits that are manufactured or configured for a particular use. This type of hardware may perform operations without needing program code to be loaded into a memory from a storage device to be configured to perform the operations.
[0148] [00148] For example, when processing unit 1104 takes the form of a hardware unit, processing unit 1104 may be a circuit system, an application-specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations. With a programmable logic device, the device is configured to perform the number of operations. The device may be reconfigured at a later time or may be permanently configured to perform the number of operations. Examples of programmable logic devices include, for example, a programmable logic array, programmable array logic, a field-programmable logic array, a field-programmable gate array, and other suitable hardware devices. With this type of implementation, program code 1118 may be omitted, because the processes for the different embodiments are implemented in a hardware unit.
[0149] [00149] In yet another illustrative example, processing unit 1104 may be implemented using a combination of processors found in computers and hardware units. Processing unit 1104 may have a number of hardware units and a number of processors that are configured to run program code 1118. With this depicted example, some of the processes may be implemented in the number of hardware units, while other processes may be implemented in the number of processors.
[0150] [00150] In another example, a bus system may be used to implement communications structure 1102 and may be comprised of one or more buses, such as a system bus or an input/output bus. Of course, the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system.
[0151] [00151] Additionally, communications unit 1110 may include a number of devices that transmit data, receive data, or both transmit and receive data. Communications unit 1110 may be, for example, a modem or a network adapter, two network adapters, or some combination thereof. Further, a memory may be, for example, memory 1106, or a cache, such as that found in an interface and memory controller hub that may be present in communications structure 1102.
[0152] [00152] The flowcharts and block diagrams described in this document illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various illustrative embodiments. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function or functions. It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, the functions of two blocks shown in succession may be executed substantially concurrently, or the functions of the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
[0153] [00153] The description of the different illustrative embodiments has been presented for purposes of illustration and description and is not intended to be exhaustive or to limit the embodiments in the form described. Many modifications and variations will be apparent to those of ordinary skill in the art. Further, different illustrative embodiments may provide different benefits as compared to other illustrative embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments and their practical application, and to enable others of ordinary skill in the art to understand the description for various embodiments with various modifications as suited to the particular use contemplated.
Claims (14)
[0001]
Method for automatically controlling the movement of a vehicle (200) in an unpredictable way, characterized by the fact that it comprises: identifying a current state (238) of the vehicle (200), wherein the current state (238) comprises a current location of the vehicle (200); selecting, by a processing unit (1104), a next state (240) for the vehicle (200), wherein the next state (240) comprises a next location for the vehicle (200); randomly selecting a value for an attribute of the next state (240) of the vehicle (200); and controlling the movement of the vehicle (200) to move the vehicle (200) from the current state (238) to the next state (240); wherein selecting the next state (240) for the vehicle (200) comprises: identifying next possible states for the vehicle (200); identifying state values (253) for the next possible states; and selecting the next state (240) from the next possible states using the state values (253).
[0002]
Method according to claim 1, characterized by the fact that the attribute of the next state (240) of the vehicle (200) is selected from the next location for the vehicle (200), an altitude (306) of the vehicle (200), a speed (308) of the vehicle (200), and an orientation (311) of the vehicle (200).
[0003]
Method, according to claim 1 or 2, characterized by the fact that identifying the state values (253) for the next possible states comprises identifying the times when the vehicle (200) was in the next possible states.
[0004]
Method according to claim 1 or 2, characterized by the fact that selecting the next state (240) comprises: randomly selecting the next state (240) from a number of the next possible states having the best state values (253).
[0005]
Method according to claim 1 or 2, characterized by the fact that it additionally comprises: changing the state value (253) for the one of the next possible states selected to be the next state (240), in response to the selection of the next state (240).
[0006]
Method according to any one of claims 1 to 5, characterized in that the vehicle (200) is selected from an unmanned vehicle (214), a manned vehicle (212), an air vehicle (206) and a submarine vehicle (211).
[0007]
Method according to claim 1, characterized by the fact that the vehicle (200) is selected from a land vehicle (208) and a marine vehicle (210), wherein the attribute of the next state (240) of the vehicle (200) is selected from the next location for the vehicle (200), a speed (308) of the vehicle (200), and an orientation (311) of the vehicle (200).
[0008]
Apparatus for automatically controlling the movement of a vehicle (200) in an unpredictable way, characterized by the fact that it comprises: a processing unit (1104) configured to: identify a current state (238) of the vehicle (200), wherein the current state (238) comprises a current location of the vehicle (200); select a next state (240) for the vehicle (200), wherein the next state (240) comprises a next location for the vehicle (200); and randomly select a value for an attribute of the next state (240) of the vehicle (200); wherein the processing unit (1104) is additionally configured to: identify next possible states for the vehicle (200); identify state values (253) for the next possible states; and select the next state (240) from the next possible states using the state values (253).
[0009]
Apparatus according to claim 8, characterized by the fact that the attribute of the next state (240) of the vehicle (200) is selected from the next location for the vehicle (200), an altitude (306) of the vehicle (200), a speed (308) of the vehicle (200), and an orientation (311) of the vehicle (200).
[0010]
Apparatus according to claim 8 or 9, characterized by the fact that the processing unit (1104) is configured to identify the state values (253) for the next possible states by identifying the times when the vehicle (200) was in the next possible states.
[0011]
Apparatus according to claim 8 or 9, characterized in that the processing unit (1104) is configured to randomly select the next state (240) from a number of the next possible states having the best state values (253).
[0012]
Apparatus according to claim 8 or 9, characterized by the fact that the processing unit (1104) is additionally configured to change the state value (253) for the one of the next possible states selected to be the next state (240), in response to the selection of the next state (240).
[0013]
Apparatus according to any one of claims 8 to 12, characterized in that the vehicle (200) is selected from an unmanned vehicle (214), a manned vehicle (212), an air vehicle (206) and a submarine vehicle (211).
[0014]
Apparatus according to claim 8, characterized by the fact that the vehicle (200) is selected from a land vehicle (208) and a marine vehicle (210), wherein the attribute of the next state (240) of the vehicle (200) is selected from the next location for the vehicle (200), a speed (308) of the vehicle (200), and an orientation (311) of the vehicle (200).
Similar technologies:
Publication number | Publication date | Patent title
BR102013016268B1|2021-01-12|method and apparatus for automatically controlling the movement of a vehicle in an unpredictable way
JP6374706B2|2018-08-15|System and method for routing in a segregation management system
EP3505871B1|2021-12-22|Management system for unmanned aerial vehicles
BR102015029115B1|2021-07-06|method and apparatus for changing the trajectory of an aircraft for interval management
ES2738199T3|2020-01-20|Flight path management system for capturing solar energy for an aircraft
US10767996B2|2020-09-08|System and methods for reducing the map search space requirements in a vision-inertial navigation system
Atashgah et al.2012|An integrated virtual environment for feasibility studies and implementation of aerial MonoSLAM
Gašparović et al.2016|Unmanned Aerial Photogrammetric Systems in the Service of Engineering Geodesy
JP6767861B2|2020-10-14|Flight control method and unmanned aerial vehicle
WO2019047233A1|2019-03-14|System and method for supporting safe operation of operating object
Rushdi et al.2016|Development of a small-scale autonomous UAV for research and development
May et al.2010|An evolutionary algorithm for multiple waypoints planning with B-spline trajectory generation for Unmanned Aerial Vehicles |
Yang et al.2009|Fast on-ship route planning using improved sparse A-star algorithm for UAVs
Jantawong et al.2018|Automatic landing control based on GPS for fixed-wing aircraft
Elston et al.2010|Design and validation of a system for targeted observations of tornadic supercells using unmanned aircraft
Montella2019|An End-to-End Platform for Autonomous Dynamic Soaring in Wind Shear
Galbersanini2021|Preliminary design of an unmanned light aircraft for humanitarian aids airdrop
Rothkrantz2017|SURVEILLANCE AND PROTECTION BY DRONES
Ebrahimi-Oskoei2014|Swarm of UAVs: Search & Rescue Operationin Chaotic Ship Wakes
Lu et al.2021|Complementary Filter for UAV Control under Complex Flight
Beaudoin et al.2010|Towards symmetrization of asymmetric air dominance: the potential key role playing by home-made low cost Unmanned Aerial Systems
Gutiérrez et al.2012|UAS design requirements for operation in Colombian mountain environments
Carroll2011|Field Tests, Balloons, Aircraft, and Upcoming Missions
Jung et al.2013|Enabling Operational Autonomy for UAVs with Robustness
Patent family:
Publication number | Publication date
JP6411012B2|2018-10-24|
AU2013205428B2|2016-09-15|
AU2013205428A1|2014-01-16|
CA2813184C|2016-01-12|
KR20140000623A|2014-01-03|
JP2014041601A|2014-03-06|
US20130345919A1|2013-12-26|
CN103513654A|2014-01-15|
EP2680096A1|2014-01-01|
US8682521B2|2014-03-25|
CA2813184A1|2013-12-25|
US20140129055A1|2014-05-08|
BR102013016268A2|2015-08-18|
EP2680096B1|2019-04-24|
US9069376B2|2015-06-30|
CN103513654B|2018-03-06|
KR102026485B1|2019-09-27|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

US4024539A|1966-04-15|1977-05-17|General Electric Company|Method and apparatus for flight path control|
EP0381178A1|1989-02-02|1990-08-08|Honeywell Inc.|Method and apparatus for aircraft navigation|
JP3141349B2|1992-02-07|2001-03-05|ソニー株式会社|Mobile control system|
JPH08303995A|1995-05-09|1996-11-22|Mitsubishi Electric Corp|Method for guiding flying object|
JP2000180100A|1998-12-17|2000-06-30|Nec Corp|Unattended night reconnaissance airplane|
US7663333B2|2001-06-12|2010-02-16|Irobot Corporation|Method and system for multi-mode coverage for an autonomous robot|
US7231294B2|2003-10-23|2007-06-12|International Business Machines Corporation|Navigating a UAV|
EP1761741A2|2004-02-06|2007-03-14|Icosystem Corporation|Methods and systems for area search using a plurality of unmanned vehicles|
US7447593B2|2004-03-26|2008-11-04|Raytheon Company|System and method for adaptive path planning|
US7228227B2|2004-07-07|2007-06-05|The Boeing Company|Bezier curve flightpath guidance using moving waypoints|
US7512462B2|2004-11-16|2009-03-31|Northrop Grumman Corporation|Automatic contingency generator|
US9383750B2|2004-12-02|2016-07-05|Lockheed Martin Corporation|System for predictively managing communication attributes of unmanned vehicles|
US9235217B2|2005-10-03|2016-01-12|Sikorsky Aircraft Corporation|Automatic dual rotor speed control for helicopters|
JP2007179394A|2005-12-28|2007-07-12|Funai Electric Co Ltd|Self-propelled cleaner|
US7970532B2|2007-05-24|2011-06-28|Honeywell International Inc.|Flight path planning to reduce detection of an unmanned aerial vehicle|
US8483875B2|2007-06-05|2013-07-09|Koninklijke Philips Electronics N.V.|System as well as a method for controlling a self moving robot|
US8036821B2|2007-12-18|2011-10-11|Honeywell International Inc.|Methods and systems for diminishing the effects of an acoustic signature of vehicles|
EP2261762A3|2009-06-12|2014-11-26|Samsung Electronics Co., Ltd.|Robot cleaner and control method thereof|
US8682521B2|2012-06-25|2014-03-25|The Boeing Company|Unpredictable vehicle navigation|US8682521B2|2012-06-25|2014-03-25|The Boeing Company|Unpredictable vehicle navigation|
JP6329642B2|2013-12-10|2018-05-23|エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd|Sensor fusion|
ES2558732B2|2014-08-05|2016-11-14|Universidad De Alicante|System and method for autonomous flight planning|
CN109388150A|2014-09-05|2019-02-26|深圳市大疆创新科技有限公司|Multi-sensor environment map structuring|
WO2016033795A1|2014-09-05|2016-03-10|SZ DJI Technology Co., Ltd.|Velocity control for an unmanned aerial vehicle|
EP3399381A1|2014-09-05|2018-11-07|SZ DJI Technology Co., Ltd.|Context-based flight mode selection|
JP6509599B2|2015-03-17|2019-05-08|セコム株式会社|Flight robot control system and flight robot|
JP6399570B2|2015-03-31|2018-10-03|エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd|UAV operating method, program, and UAV flight system|
WO2016154940A1|2015-03-31|2016-10-06|SZ DJI Technology Co., Ltd.|Systems and methods for geo-fencing device identification and authentication|
JP6459014B2|2015-03-31|2019-01-30|エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd|Geo-fencing device|
WO2016154949A1|2015-03-31|2016-10-06|SZ DJI Technology Co., Ltd.|Authentication systems and methods for generating flight regulations|
JP6657030B2|2015-07-17|2020-03-04|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America|Unmanned aerial vehicle, flight control method, flight basic program and forced movement program|
WO2017013840A1|2015-07-17|2017-01-26|パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ|Unmanned flight vehicle, flight control method, flight basic program, and forced movement program|
US10831192B1|2016-09-20|2020-11-10|Piasecki Aircraft Corporation|Control system for an aircraft|
JP2020024475A|2016-12-14|2020-02-13|株式会社自律制御システム研究所|Unmanned aircraft flight management system and flight management method|
CN109808703B|2019-01-15|2021-07-20|北京百度网讯科技有限公司|Remote control method and device for automatic driving automobile|
CN111932949A|2020-07-10|2020-11-13|中国航空工业集团公司西安飞行自动控制研究所|Helicopter search and rescue method and flight management system embedded with search and rescue method|
Legal status:
2015-08-18| B03A| Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]|
2018-12-04| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-11-05| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-10-13| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-01-12| B16A| Patent or certificate of addition of invention granted|Free format text: PRAZO DE VALIDADE: 20 (VINTE) ANOS CONTADOS A PARTIR DE 24/06/2013, OBSERVADAS AS CONDICOES LEGAIS. |
Priority:
Application number | Filing date | Patent title
US13/532,537|2012-06-25|
US13/532,537|US8682521B2|2012-06-25|2012-06-25|Unpredictable vehicle navigation|