Patent abstract:
The invention relates to a robot (100) adapted to evolve in a defined environment, provided with a plurality of sensors and comprising: a. two monocular digital video cameras (161, 162); b. a vision device (170) comprising a lighting unit and an imaging unit; c. a LiDAR system (140) mounted on a panoramic platform (145); d. processing and control means comprising a computer and memory means including a map of the defined environment, able to acquire the information from the sensors.
Publication number: FR3039904A1
Application number: FR1557615
Filing date: 2015-08-07
Publication date: 2017-02-10
Inventor: Gamez David Marquez
Applicant: Institut de Recherche Technologique Jules Verne
IPC main classification:
Patent description:

The invention relates to a device and a method for detecting obstacles adapted to a mobile robot. The invention is more particularly, but not exclusively, suitable for controlling an autonomous robot operating in a congested environment, with obstacles that may move over time.
An autonomous mobile robot is localized in its environment according to a simultaneous mapping and localization method designated under the general acronym "SLAM", for "Simultaneous Localization And Mapping". The robot locates its position in an initial map, using proprioceptive sensors such as an odometer, by means of environmental sensors such as vision sensors, or by means of beacons. These sensors thus provide idiothetic information and allothetic information which, combined, allow the robot to know its exact position in a mapped environment and thus to predict its trajectory at the next instant, according to this position and said map. This principle works perfectly in a stable environment.

When the environment changes, through the appearance of obstacles not initially mapped or through the irruption into the robot's environment of moving objects such as another robot or an operator, the robot must, to take these changes into account, on the one hand detect the modifications of this environment and on the other hand integrate these modifications into its cartography, so as to modify its trajectory if necessary, or more generally its behavior, according to the detected modifications. In an industrial production environment such as the assembly of an aircraft structure or a naval structure, the robot's environment includes "traps" such as hatches, as well as objects or operators with which the robot absolutely must not come into contact, or even approach, for safety reasons. Also, the detection of obstacles and their integration into the cartography must be carried out at a sufficient distance from said obstacles. Certain obstacles such as hatches or wells, known as negative obstacles, may be crossed if they are covered with a grating and are impassable in other circumstances. Thus, the robot must be able to detect an obstacle remotely by means of its sensors, determine the contours of said obstacle and locate it relative to its own position, and determine whether or not it can cross this obstacle without changing its trajectory. In the case where the obstacle is moving, it must, in addition, estimate its trajectory and speed.

These tasks are not easy to implement. Indeed, while a stereoscopic camera, a radar, or a laser scanning system is able to detect and locate an object known in its environment, for example to use it as a landmark and derive allothetic positioning information from it, the individual identification of a priori unknown objects in a scene instantly visualized by such a device, which does not have the cognitive abilities of a higher living being, is a complex task, especially when several obstacles, not initially mapped, are likely to be simultaneously in the field observed by the robot at different distances.
The document WO 2014 152254 describes an on-board device for the detection of obstacles in an environment and a method implementing this device. The method comprises comparing a three-dimensional reference image of the environment, obtained from two images whose points of view are shifted, with a current three-dimensional image of this environment obtained by similar means. This information is superimposed so as to reveal the disparities and thus detect changes in the environment and the presence of foreign objects. This technique of the prior art requires heavy computer processing means and the need to obtain reference images of the environment. Thus, this method of the prior art remains limited to relatively stable environments, which do not correspond to the applications covered by the invention. The device described in this prior art uses information from distance sensors, obtained by a radar or by scanning with a laser, and video information obtained by a stereoscopic camera. These two types of information are combined to self-calibrate and make the detection of disparities more robust; thus, according to this prior art, these two types of sensors are used as two independent sources of information, which further justifies their use for these purposes of self-calibration.
Document US 2008 0009965 describes a method for dynamically sensing the concentric environment of a mobile robot. According to this method of the prior art, the mapping of the place of evolution of the robot is laid out according to a grid, and each tile of this grid is associated with a probability of occupation, that is to say a probability that said tile is occupied by an object with which the robot would be likely to collide. From the map of the environment and the distribution of the occupation of the grid, the robot calculates a collision-free path to move from one point to another. During its movement, the robot scans its concentric environment by means of a plurality of sensors and, according to the information from these sensors, updates the occupancy probability grid and consequently modifies its trajectory or its speed of movement. This theoretical principle however comes up against the ability of the robot to interpret the information delivered by the sensors, so that, according to this prior art, the autonomy of the robot is limited and it is assisted in this latter task by a human operator, able to interpret its environment from the visual information delivered by the robot.

Other obstacles pose detection problems for conventional vision or scanning systems, such as so-called negative obstacles, for example a hatch in a floor, or obstacles which, depending on the point of view, have a small dimension, such as cables, the edge of a wall, or a thin obstacle such as a table extending substantially in a plane parallel to the floor on which the robot is evolving.
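By way of illustration of the occupancy-probability grid described in this prior art, such a grid can be maintained with the standard log-odds update; the document does not specify the formulation, and the numeric values in the sketch below are illustrative:

```python
import numpy as np

def update_cell(grid_logodds, cell, hit, l_occ=0.85, l_free=-0.4):
    """Update the occupation probability of one grid tile after a sensor
    reading; 'hit' means the sensor saw the tile occupied. The log-odds
    form and the increments are assumptions, not taken from the document."""
    grid_logodds[cell] += l_occ if hit else l_free
    return grid_logodds

grid = np.zeros((100, 100))              # log-odds 0 everywhere = probability 0.5
update_cell(grid, (10, 12), hit=True)
p = 1.0 / (1.0 + np.exp(-grid[10, 12]))  # back to a probability: ~0.70
```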
Document US Pat. No. 8,688,275 describes a robot equipped with two systems for scanning the environment by a laser beam, the median scanning plane of one of the two systems not being parallel to the floor on which said robot is moving. The combination of information from these two environmental recognition devices makes it possible in particular to detect negative obstacles and obstacles extending in a plane parallel to the floor.
All of the above-mentioned prior art applies to relatively simple environments with a low number of obstacles present. On the other hand, the environment encountered by a robot in an industrial situation, in particular in an aircraft fuselage section during system installation operations, comprises a very large variety of obstacles and a high density of objects in a reduced volume, objects likely to change location in that volume. Thus, beyond the ability to detect different types of obstacles, evolution in such an environment requires that the robot be able to make the appropriate decisions, otherwise it simply remains motionless. The invention aims to overcome the disadvantages of the prior art and for this purpose concerns a robot adapted to evolve in a defined environment, provided with a plurality of sensors comprising: a. two monocular digital video cameras; b. a vision device combining a lighting unit and an imaging unit; c. a LiDAR system mounted on a panoramic platform; d. processing and control means comprising a computer and memory means comprising a map of the defined environment, able to acquire the information from the sensors.
Thus, the robot object of the invention combines the information from different sensors, each providing a different view of the same concentric environment, to detect the presence of obstacles and determine the nature of these obstacles, in order to define the behavior to adopt vis-à-vis said obstacles.
The term LiDAR is applied to an active sensor measuring the round-trip delay of a light beam emitted by a laser, to determine the position and distance of a target from the transmitter.
An example of a vision device combining a lighting unit and an imaging unit is described in WO2007 / 043036. This type of device is known commercially under the Carminé® brand and used in particular in Kinect® type cameras. The invention is advantageously implemented according to the embodiments and variants described below, which are to be considered individually and in any technically operative combination.
Advantageously, the angular resolution of the LiDAR of the robot object of the invention is less than or equal to 0.25°. Thus, said system is able to detect an obstacle of small thickness, for example a cable 8 mm in diameter, at a distance of 2 meters.
According to an advantageous embodiment, the plurality of sensors of the robot which is the subject of the invention comprises: e. a second LiDAR system pointed towards the ground.
Thus, said second LiDAR is more particularly specialized in the detection of negative obstacles and carries out its scanning of the concentric environment concurrently (in masked time), and possibly at a frequency different from that of the first LiDAR, which also makes it possible to specialize the first LiDAR for particular detection tasks.
Advantageously, the processing and control means of the robot object of the invention comprise in memory the three-dimensional characteristics of all the objects likely to be in the defined environment. Thus, the computer is able to associate the contours acquired during the observation of the concentric environment of the robot with specific objects and to deduce behaviors depending on the nature of these objects.
According to one embodiment of the robot object of the invention, the defined environment is an aircraft fuselage section in a phase of installation of the systems in said section. Thus, this environment changes as the systems are installed or being installed in said section. Advantageously, the defined environment whose mapping is stored in the memory of the processing and control means of the robot corresponds to a defined phase of the assembly operations, and the robot uses its environmental recognition capabilities to take account of the evolution of said environment compared to the initial mapping. The invention also relates to a method for controlling the evolution of a robot according to the invention in a defined congested environment, which method comprises the steps of: i. obtaining and storing, in the memory means of the processing and control means of the robot, a mapping of the defined environment in which the robot evolves; ii. obtaining and storing, in the memory of the processing and control means of the robot, a list of objects and the three-dimensional characteristics of said objects likely to be in the defined environment of the robot; iii. obtaining and storing in the memory means of the robot a list of tasks to be performed and the location of execution of said tasks in the defined environment; iv. scanning the concentric environment of the robot using the LiDAR systems; v. processing the data from the scan of step iv) so as to isolate regions of interest; vi. obtaining an image of the regions of interest by means of the cameras and the vision device; vii. classifying the obstacles present in the environment from the image obtained in step vi) according to their crossing characteristics by the robot; viii. adapting the trajectory and the behavior of the robot as a function of the information from step vii) and the task to be performed defined in step iii).
Thus, the robot object of the invention uses the information from the plurality of sensors to analyze its environment. More particularly, the LiDAR systems, insensitive to lighting and atmospheric conditions, allow a first detection and classification of likely obstacles, including at relatively large distances. This first classification makes it possible to concentrate the action of the vision sensors on the areas of interest, which are thus observed with a higher resolution. The combined use of the two types of vision means makes it possible to reduce the measurement noise and the amount of data to be processed, for a more precise classification of the obstacles.
According to one embodiment, step vii) of the method which is the subject of the invention comprises the sequences consisting in: vii.a) filtering the point cloud corresponding to the data coming from the vision device; vii.b) subsampling the point cloud obtained in sequence vii.a); vii.c) eliminating the aberrant singular points from the point cloud obtained in sequence vii.b); vii.d) determining the normal to the surface at each point of the cloud obtained in sequence vii.c); vii.e) classifying the normals obtained in sequence vii.d) into passable and impassable normals.
Thus, the different filtering sequences make the calculation faster and the detection of obstacles more robust, in particular with respect to "false obstacles", that is to say zones detected as obstacles but which are only measurement artifacts.
Advantageously, step vii) of the method which is the subject of the invention comprises sequences consisting in: vii.f) combining the images coming from the vision device and those coming from a monocular camera to obtain a precise outline of the objects present in said image; vii.g) identifying the objects present in the image by projecting the contours of said objects from the three-dimensional characteristics of said objects obtained in step ii).
Thus, at the end of sequence vii.g), the robot knows the nature of the obstacles present in its environment. It is thus possible for it to adapt its behavior according to said objects, for example by respecting at all times a minimum distance from a given type of object. The recognition of the objects present in the environment also makes it possible to associate with these objects spatial attributes which are not accessible during the measurement. For example, it makes it possible to associate with an object that would only be seen edge-on a length in the direction perpendicular to said edge. Thus, object recognition is also useful for tracking said objects and their relative trajectory between two successive measurements.
Thus, according to one embodiment, the method which is the subject of the invention comprises, before step viii), the steps of: ix. making a first acquisition according to steps i) to vii); x. making a second acquisition according to steps i) to vii); xi. determining the trajectory of the objects detected in the environment of the robot.
Thus, the method that is the subject of the invention enables the robot to obtain a dynamic vision of its environment.
Advantageously, the image acquisitions of the environment of steps iv) and vi) of the method that is the subject of the invention are carried out continuously and stored in a buffer memory zone before being processed. Thus, the robot has a frequently updated vision of its environment and combines if necessary several successive acquisitions of this environment to obtain a more robust representation.
According to a particular embodiment of the method which is the subject of the invention, the environment of the robot comprises a second robot sharing the same cartography of the environment, the two robots being able to exchange information, which method comprises a step consisting in: xii. transmitting to the second robot the information from step xi).
Thus, the robots present in the intervention environment exchange information, which enables them to anticipate events.
Advantageously, the method which is the subject of the invention comprises the steps of: xiii. performing a ground scan using a LiDAR and detecting a negative obstacle; xiv. replacing said negative obstacle in the cloud of points obtained during the scanning of step xiii) by a positive virtual obstacle whose contour reproduces that of the negative obstacle; xv. integrating said positive virtual obstacle into the cloud of points obtained in step iv).
Thus, the detection and avoidance of a negative obstacle follow the same process as for other obstacles. The invention is explained below according to its preferred embodiments, in no way limiting, and with reference to FIGS. 1 to 3, in which: - FIG. 1 shows, in a schematic profile view, an embodiment of a robot according to the invention in its environment; - FIG. 2 represents a logic diagram of an exemplary embodiment of the method that is the subject of the invention; - and FIG. 3 schematically shows, according to the same view as FIG. 1, the principle of detection of a negative obstacle by the robot and the method which are the subject of the invention.
In FIG. 1, according to an exemplary embodiment, the robot (100) object of the invention comprises a base (110) supporting motorization means (not shown), energy storage means such as a storage battery (not shown), as well as computer means (not shown) used to process information and drive said robot, including the motorization means. The motorization means control displacement means, for example wheels (120) according to this embodiment. Said displacement means comprise propulsive elements and directional elements so as to allow the robot (100) to follow a defined trajectory in its environment. The base supports mechanized tooling means; for example, according to this embodiment, the base supports an articulated arm (130) terminated by an effector (135). According to exemplary embodiments, said effector (135) is suitable for the installation of fasteners such as bolts or rivets, or it is suitable for producing a continuous or spot weld, or it comprises machining means such as drilling or boring means or means for milling, grinding or deburring, without these examples being limiting or exhaustive.
The robot (100) object of the invention comprises a plurality of sensors, the information from which, used alone or in combination, allows the robot to know its concentric environment. Thus, according to this embodiment, the robot object of the invention comprises a first LiDAR (140) mounted on a platform (145) for moving said LiDAR in panoramic movements. Said panoramic platform makes it possible to move the LiDAR along two axes, for example a horizontal panoramic axis and a vertical panoramic axis, corresponding to yaw and pitch movements with respect to the optical axis of the LiDAR. According to another embodiment, the panoramic platform makes it possible to move the LiDAR along a vertical panoramic axis (yaw) and around the optical axis of the sensor in a roll motion. The platform (145) includes angular position sensors, so that the platform orientation information is combined with the information from the LiDAR at each scan. A LiDAR consists of a device capable of emitting a laser beam and a sensor capable of detecting such a laser beam. Thus, the LiDAR emits a laser beam and receives on its sensor said beam reflected by a target, that is to say an object in the environment of the robot. The time elapsed between the emission and the reception makes it possible to measure the distance of the point on which this reflection takes place and thus to position said point with respect to the LiDAR. By way of non-limiting example, the LiDAR used by the robot which is the subject of the invention is of the Hokuyo UTM-30LX-EX type, making it possible to measure up to a distance of 30 meters, with a resolution on the order of a millimeter in depth and an angular resolution of 0.25°. The scanning performed by the LiDAR describes a plane sector of 270° in 25 ms. The panoramic movement, in particular vertical, makes it possible to obtain a three-dimensional image of the environment. Horizontal panning increases the field of view at short range. The resolution of this LiDAR makes it possible to detect the presence of a cable 8 mm in diameter at 2 meters from the robot. This performance is more particularly adapted to the use of the robot object of the invention in an aeronautical environment, in particular in a fuselage section. The skilled person adapts the resolution of said LiDAR according to the environment of the robot, from the following formula: d = 2D·sin(Θ/2), where D is the distance at which the obstacle must be detected, d is the visible dimension of the obstacle, for example the diameter of the cable, and Θ is the angular resolution of the LiDAR, this data being supplied by the manufacturer.
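By way of illustration, the following minimal Python sketch evaluates this formula with the figures quoted above (0.25° resolution, obstacle at 2 m); the function name is an assumption, not part of the patent:

```python
import math

def min_detectable_size(distance_m: float, angular_resolution_deg: float) -> float:
    """Visible obstacle dimension d resolvable at distance D for an angular
    resolution theta, following d = 2 * D * sin(theta / 2)."""
    theta = math.radians(angular_resolution_deg)
    return 2.0 * distance_m * math.sin(theta / 2.0)

# Figures quoted in the description: 0.25 deg resolution, obstacle at 2 m
d = min_detectable_size(2.0, 0.25)
print(f"{d * 1000:.1f} mm")  # ~8.7 mm, i.e. on the order of the 8 mm cable
```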
According to an exemplary embodiment, the robot comprises a second LiDAR (150) pointed towards the ground and able to detect obstacles, more particularly negative obstacles (holes), at ground level. By way of non-limiting example, the robot object of the invention uses for this second LiDAR a Hokuyo URG-04LX device. This equipment can scan a flat area between 60 mm and 4 meters from the robot, over a scan angle of 240°, in 100 ms.
Thus, the LiDAR systems make it possible to quickly obtain a representation of the environment, including at long range. Since they use the reflection of a laser beam that they themselves emit, these devices are not sensitive to the illumination conditions of objects and are insensitive to atmospheric conditions.
The robot object of the invention comprises two monocular digital cameras (161, 162) spaced apart from each other. According to this exemplary embodiment, the two cameras are identical and are mounted so that their optical axes are parallel and their focal planes lie in the same plane. Thus, the images of said cameras are usable independently of one another, or jointly so as to obtain a stereoscopic image of the environment, or to use said cameras as a range finder with respect to specific details of the environment of the robot (100). Advantageously, both cameras are color cameras.
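With the two cameras mounted with parallel axes and coplanar focal planes as described, the range-finder use reduces to the standard stereo relation Z = f·B/d, where d is the disparity of a detail between the two images. A minimal sketch follows; the focal length and baseline are illustrative values, not taken from the document:

```python
def stereo_depth(disparity_px: float, focal_px: float = 1400.0,
                 baseline_m: float = 0.20) -> float:
    """Depth of a detail seen by both parallel-axis cameras: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

z = stereo_depth(70.0)  # a 70-pixel disparity -> 4.0 m under these assumptions
```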
Finally, the robot object of the invention comprises a vision device (170) comprising imaging means and projection means in the infrared spectrum. By way of non-limiting example, the robot object of the invention uses for this purpose a Carminé® device. Such a device projects structured infrared light onto the environment, which allows it to obtain depth information for each elementary point of the image. The set of sensors of the robot object of the invention is placed in a modular manner in different locations on said robot, according to the nature of the obstacles encountered on the site in which the robot evolves, so as to optimize the detection of said obstacles.
The robot operates in a defined environment, for example an aircraft fuselage section, or a ship or submarine hull, which environment is the subject of a mapping recorded in the memory means of the robot, but in which non-mapped objects are likely to appear and collide with said robot. By way of example, these are static obstacles, such as thresholds or hatches (191), tools placed in the environment, in particular by operators, for example a tool trolley (192), or objects suspended from the ceiling, e.g. cables (193). To move in this environment without collision, the robot must detect these objects and define appropriate trajectories or behavior. For example, it will have to bypass a hatch (191), but it is possible to pass under a cable using an appropriate configuration of the manipulator arm (130).
To these static obstacles are added mobile obstacles (not shown) such as operators or other robots or cobots. Thus, the robot object of the invention must evaluate its environment quickly and make appropriate decisions, but as an autonomous robot it has only a limited computing capacity.
The principle of the method that is the subject of the invention consists in using the particularities of each sensor in order to focus the calculation means of the robot on the useful zones of the representation of the environment, which allows fast processing of the information and uses fewer computing resources.
Thus, the LiDARs (140, 150) give an image of the environment rich in depth information but with few details. Conversely, the image from the digital video cameras (161, 162) is rich in detail but poorer in depth information. The image from the vision device (170) is intermediate, but makes it possible to enrich the LiDAR representation in a given area.
In FIG. 2, according to initialization steps of the method that is the subject of the invention, a cartography of the environment of evolution of the robot is loaded (210) into the memory means of said robot. The robot object of the invention is mainly intended to evolve in a relatively confined space such as an aircraft fuselage section, for example to perform system installation operations in said section. These operations consist, for example, of installing supports in said section and performing drilling and shoring operations, or even assisting an operator in these operations. Also, as the systems are installed in the section, its mapping evolves. Thus, according to an exemplary implementation of the method of the invention, said mapping loaded into the memory means takes into account the progress of the work. According to exemplary embodiments, the cartography is a 3D cartography or a 2D cartography onto which the contour of the obstacles is projected.
According to another initialization step (215), all the objects that may be present in the environment of the robot, for example in the fuselage section, are recorded in the memory means. According to an exemplary implementation, each object is associated with an identification code or label, a three-dimensional representation of the object in a reference frame linked to it, and a security volume describing a polyhedral or ellipsoidal envelope surrounding the object at a distance from it and defining an area that must not be entered for safety reasons. The three-dimensional definition of the object is advantageously a simplified definition limited to its outer contours. Advantageously also, the information relating to each object includes color and hue information; thus, by choosing a suitable hue for each object, its presence is easily detected from a color image. Like the cartography, according to a particular embodiment, the list is updated according to the manufacturing phase concerned, so as to limit it to what is strictly necessary.
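The document does not impose a data structure for this object list; a minimal sketch of one such record is given below, with field names and values that are assumptions for illustration only:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class KnownObject:
    """One entry of the recorded object list (all field names are assumed)."""
    label: str                   # identification code of the object
    contour: np.ndarray          # (N, 3) simplified outer-contour vertices, object frame
    safety_envelope: np.ndarray  # (M, 3) vertices of the polyhedral keep-out envelope
    hue: float                   # reference hue used to spot the object in a color image

catalog = {
    "cable_8mm": KnownObject("cable_8mm", np.zeros((8, 3)), np.zeros((8, 3)), 0.12),
}
```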
Since the mapping of the intervention environment is recorded in the memory means of the robot, the latter is able to locate itself in this environment at any time using idiothetic information, for example the number of turns of the propulsion wheels and the orientation of the directional wheels, or a digital compass, and allothetic information, for example by triangulation of beacons placed in the intervention environment. These methods are known from the prior art and are not discussed further.
The tasks that the robot is required to perform in the environment and the location of these tasks in the environment are also recorded (216) in the memory means.
In order to determine its concentric environment, according to an exemplary embodiment, the robot simultaneously performs a scanning (221) of the environment by means of the first LiDAR, a scanning (222) of the environment by means of the second LiDAR when it comprises two, an image acquisition (223) by means of the vision device, and an image acquisition (224) by means of each digital video camera. These acquisitions are recorded (230) in a buffer memory and timestamped in order to be processed. According to a particular embodiment, in order to reduce the noise in the images, several successive acquisitions are averaged.
The first processing (240) consists in extracting areas of interest from the images from one or more LiDARs. These areas of interest are, for example, areas of the acquisition whose outlines differ from the map.
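A minimal sketch of such a map-differencing step follows, assuming both the scan and the stored map are available as point sets; the nearest-neighbour test and the threshold are assumptions, as the document does not detail the comparison:

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_rois(scan, carto, thresh=0.10):
    """Keep the scan points lying farther than 'thresh' from every mapped
    point; these unmapped returns form the areas of interest."""
    d, _ = cKDTree(carto).query(scan)   # distance to nearest mapped point
    return scan[d > thresh]

carto = np.random.rand(500, 3) * 5.0                 # stand-in for the stored map
scan = np.vstack([carto[:100], [[9.0, 9.0, 0.0]]])   # one unmapped return
rois = extract_rois(scan, carto)                     # -> [[9., 9., 0.]]
```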
During a second processing step (250), said areas of interest are examined via their representation in the acquisition performed by the vision device. According to a first processing sequence (251), the corresponding point cloud is filtered according to the measured conditions of reflection in the infrared. This first filtering sequence makes it possible to eliminate from the processing the points remote from the robot. According to a second processing sequence (252), the point cloud is downsampled. In order to reduce the number of points to be processed, the space is meshed according to an appropriate polyhedral mesh, for example a cubic mesh. According to an example of processing, all the points in the same mesh cell are replaced by a point positioned at the center of said cell and taking as value the average of the points, or any other appropriate weighting. This processing reduces the noise and the number of points to be processed. A third processing sequence (253) consists in eliminating from the point cloud the singular points considered aberrant. For this purpose, the measurement obtained at each point is compared to the measurements obtained on its closest neighbors, by calculating the square of the measurement difference between these points. Assuming that, for a given group of points, the distribution of the measure follows a χ² law, the points located at more than n standard deviations from the mean are eliminated. This technique avoids stochastic variations inside the point cloud, but it limits the ability to detect sharp edges. According to a fourth processing sequence (254), the orientation of the normal is calculated at each point of the point cloud. The orientation of the normals makes it possible, in the course of an analysis step (255), to determine the points and the zones presenting potential crossing difficulties. Thus, an area is passable when its normal is vertical. When the normal has a horizontal component, the zone presents crossing difficulties.
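The document stops at this functional description of sequences (251) to (255); the following NumPy/SciPy sketch shows one plausible implementation of the chain, with illustrative thresholds and neighbourhood sizes (none of these values come from the patent):

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_range(points, max_range=4.0):
    """(251) drop points too far from the sensor (threshold is illustrative)."""
    return points[np.linalg.norm(points, axis=1) <= max_range]

def voxel_downsample(points, cell=0.05):
    """(252) cubic mesh: replace all points of one cell by their average."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.empty((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

def remove_outliers(points, k=8, n_sigma=2.0):
    """(253) reject points whose mean squared distance to their k nearest
    neighbours deviates from the average by more than n_sigma std devs."""
    dists, _ = cKDTree(points).query(points, k=k + 1)  # column 0: point itself
    score = (dists[:, 1:] ** 2).mean(axis=1)
    keep = np.abs(score - score.mean()) <= n_sigma * score.std()
    return points[keep]

def estimate_normals(points, k=8):
    """(254) normal at each point = least-variance axis of its neighbourhood."""
    _, idx = cKDTree(points).query(points, k=k + 1)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        patch = points[nb] - points[nb].mean(axis=0)
        _, _, vt = np.linalg.svd(patch, full_matrices=False)
        normals[i] = vt[-1]                 # direction of least variance
    return normals

def passable(normals, max_tilt_deg=10.0):
    """(255) a zone is passable when its normal is (near-)vertical."""
    return np.abs(normals[:, 2]) >= np.cos(np.radians(max_tilt_deg))

# chained exactly as sequences vii.a) to vii.e):
cloud = np.random.rand(2000, 3) * 4.0
cloud = remove_outliers(voxel_downsample(filter_range(cloud)))
mask = passable(estimate_normals(cloud))
```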
During an identification step (260), the image from one or both digital video cameras is superimposed on the image obtained previously. The high resolution of these images makes it possible, in particular, to identify the contours or the hue of the objects likely to be found in the zones presenting crossing difficulties and, by combining this information with the information relating to the normals to the surfaces, to identify and label the objects in the environment. Thus, from this information, the robot is able to decide (270) on an appropriate avoidance sequence.
According to an embodiment taking into account the displacement of the objects in the environment, the preceding steps (221...260) are repeated and the acquisitions are correlated (265) so as to determine a speed of movement and a trajectory of the objects present in the environment of the robot. Thus, the decision (270) of an avoidance sequence takes this information into account.
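As a minimal illustration of this correlation (265), the position of a labelled object in two timestamped acquisitions yields a velocity vector, from which its position at a later instant can be extrapolated (names and figures below are illustrative):

```python
import numpy as np

def object_velocity(pos_t0, pos_t1, t0, t1):
    """Velocity of an object tracked between two timestamped acquisitions."""
    return (np.asarray(pos_t1) - np.asarray(pos_t0)) / (t1 - t0)

v = object_velocity([1.0, 2.0, 0.0], [1.2, 2.0, 0.0], t0=0.0, t1=0.5)  # 0.4 m/s on x
predicted = np.asarray([1.2, 2.0, 0.0]) + v * 1.0  # expected position 1 s later
```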
In FIG. 3, according to an exemplary implementation, the LiDAR or the second LiDAR (150) of the robot object of the invention is pointed or pointable towards the ground in order to detect a negative obstacle (191). The acquisition made on this occasion constitutes a second cloud of points. In order to process this point cloud in a manner similar to the first, the negative obstacle is replaced, for processing, by a positive virtual obstacle (391) having horizontal normals (390) on its lateral faces. According to an exemplary implementation, said virtual obstacle (391) reproduces the contour of the real negative obstacle (191), but with a predefined side wall height, for example 1 meter, so that it is immediately recognizable. Said virtual obstacle is integrated into the observed scene and processed with the remainder of the acquisition from the fourth step of the method object of the invention.
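A minimal sketch of this substitution follows: the detected ground contour of the hole is extruded upward into a point-cloud "wall" whose lateral faces, once passed through the normal-estimation sequence, yield horizontal normals. The 1 m height comes from the text; the sampling step and names are assumptions:

```python
import numpy as np

def virtual_obstacle(contour_xy, wall_height=1.0, dz=0.05):
    """Extrude the ground contour of a negative obstacle (hole) into a
    positive virtual obstacle: contour_xy is an (N, 2) polygon at floor
    level; the result is a point-cloud wall of height wall_height."""
    levels = np.arange(0.0, wall_height + dz, dz)
    return np.concatenate(
        [np.column_stack([contour_xy, np.full(len(contour_xy), z)]) for z in levels]
    )

hole = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # 1 m square hatch
wall = virtual_obstacle(hole)  # appended to the step iv) cloud before processing
```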
The above description shows that the invention achieves the intended objectives; in particular, the invention allows an autonomous robot to locate itself accurately in a changing and congested environment. Said robot is thus advantageously used for assembly operations and system installations, in particular in an aircraft section or in a naval structure. The autonomous robot object of the invention is particularly suitable for interventions in environments that are difficult to access or in particular or hazardous environmental conditions, such as in the presence of gas, heat, cold or ionizing radiation. According to other modes of implementation, the robot object of the invention comprises means specifically adapted for repair operations on a hull or fuselage, said operations comprising additive manufacturing operations. For this purpose, the robot object of the invention comprises according to this embodiment: - an additive manufacturing effector carried by the articulated arm, for example a molten powder spraying nozzle or a thermoplastic wire extrusion nozzle; - means for adjusting the position of said effector according to the information from the plurality of sensors, in particular by recognizing the part to be assembled or repaired.
Patent claims:
Claims (12)
1. Robot (100) adapted to evolve in a defined environment, provided with a plurality of sensors, characterized in that it comprises: a. two monocular digital video cameras (161, 162); b. a vision device (170) comprising a lighting unit and an imaging unit; c. a LiDAR system (140) mounted on a panoramic platform (145); d. processing and control means comprising a computer and memory means including a map of the defined environment, able to acquire the information from the sensors.
2. The robot of claim 1, wherein the angular resolution of the LiDAR system (140) is less than or equal to 0.25°.
3. The robot of claim 1, wherein the plurality of sensors comprises: e. a second LiDAR system (150) pointed towards the ground.
4. Robot according to claim 1, wherein the processing and control means comprise in memory the three-dimensional characteristics of all objects likely to be in the defined environment.
5. Robot according to claim 4, wherein the defined environment is an aircraft fuselage section in a phase of installation of the systems in said section.
6. A method for controlling the evolution of a robot according to claim 1 in a defined congested environment, characterized in that it comprises the steps of: i. obtaining (210) and storing, in the memory means of the processing and control means of the robot, a mapping of the defined environment in which the robot evolves; ii. obtaining (215) and storing, in the memory of the processing and control means of the robot, a list of objects and the three-dimensional characteristics of said objects likely to be in the defined environment of the robot; iii. obtaining (216) and storing in the memory means of the robot a list of tasks to be performed and the location of execution of said tasks in the defined environment; iv. performing (221) a scan of the concentric environment of the robot using the LiDAR systems; v. processing (240) the data from the scan of step iv) so as to isolate regions of interest; vi. obtaining an image of the regions of interest by means of the cameras and the vision device; vii. classifying (255) the obstacles present in the environment from the image obtained in step vi) as a function of their crossing characteristics by the robot; viii. adapting (270) the trajectory and the behavior of the robot as a function of the information from step vii) and the task to be performed defined in step iii).
7. The method of claim 6, wherein step vii) comprises the sequences consisting in: vii.a) filtering (251) the point cloud corresponding to the data from the vision device; vii.b) downsampling (252) the point cloud obtained in sequence vii.a); vii.c) eliminating (253) the aberrant singular points from the point cloud obtained in sequence vii.b); vii.d) determining (254) the normal to the surface at each point of the cloud obtained in sequence vii.c); vii.e) classifying (255) the normals obtained in sequence vii.d) into passable and impassable normals.
8. The method of claim 6, wherein step vii) comprises sequences consisting in: vii.f) combining (260) the images from the vision device and those from a monocular camera to obtain a precise outline of the objects present in said image; vii.g) identifying the objects present in the image by projecting the contours of said objects from the three-dimensional characteristics of said objects obtained in step ii).
9. The method of claim 6, comprising, before step viii), the steps of: ix. making a first acquisition according to steps i) to vii); x. making a second acquisition according to steps i) to vii); xi. determining (265) the trajectory of the objects detected in the environment of the robot.
10. The method of claim 6, wherein the image acquisitions of the environment of steps iv) and vi) are performed continuously and stored in a buffer area before being processed.
11. The method of claim 6, wherein the robot environment comprises a second robot sharing the same mapping of the environment, the two robots being able to exchange information, and which comprises a step consisting in: xii. transmitting to the second robot the information from step xi).
12. The method of claim 6, comprising the steps of: xiii. performing a ground scan using a LiDAR (140, 150) and detecting a negative obstacle (191); xiv. replacing said negative obstacle (191) in the cloud of points obtained during the scanning of step xiii) by a positive virtual obstacle whose contour reproduces that of the negative obstacle (191); xv. integrating said positive virtual obstacle (391) into the cloud of points obtained in step iv).
Similar technologies:
Publication number | Publication date | Title
CA2940824C|2020-09-15|Variable resolution light radar system
US10776639B2|2020-09-15|Detecting objects based on reflectivity fingerprints
US11226413B2|2022-01-18|Apparatus for acquiring 3-dimensional maps of a scene
US10884110B2|2021-01-05|Calibration of laser and vision sensors
Barry et al.2015|Pushbroom stereo for high-speed navigation in cluttered environments
JP2015535337A|2015-12-10|Laser scanner with dynamic adjustment of angular scan speed
Nieuwenhuisen et al.2013|Multimodal obstacle detection and collision avoidance for micro aerial vehicles
FR3039904B1|2019-06-14|DEVICE AND METHOD FOR DETECTING OBSTACLES ADAPTED TO A MOBILE ROBOT
US20190293765A1|2019-09-26|Multi-channel lidar sensor module
EP2824418A1|2015-01-14|Surround sensing system
US20210018611A1|2021-01-21|Object detection system and method
Khan et al.2017|Stereovision-based real-time obstacle detection scheme for unmanned ground vehicle with steering wheel drive mechanism
WO2015185749A1|2015-12-10|Device for detecting an obstacle by means of intersecting planes and detection method using such a device
Filisetti et al.2018|Developments and applications of underwater LiDAR systems in support of marine science
KR101888170B1|2018-08-13|Method and device for deleting noise in detecting obstacle by unmanned surface vessel
KR102068688B1|2020-01-21|location recognition system of multi-robot
EP3757943A1|2020-12-30|Method and device for passive telemetry by image processing and use of three-dimensional models
US20210080550A1|2021-03-18|Systems and Methods for Modifying LIDAR Field of View
Guerrero-Bañales et al.2021|Use of LiDAR for Negative Obstacle Detection: A Thorough Review
EP3227716A1|2017-10-11|Method of determining a trajectory mapping by passive pathway of a mobile source by a method of inverse triangulation
Patent family:
Publication number | Publication date
WO2017025521A1|2017-02-16|
EP3332299A1|2018-06-13|
FR3039904B1|2019-06-14|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Title
US6151539A|1997-11-03|2000-11-21|Volkswagen Ag|Autonomous vehicle arrangement and method for controlling an autonomous vehicle|
US20140136414A1|2006-03-17|2014-05-15|Raj Abhyanker|Autonomous neighborhood vehicle commerce network and community|
US20070280528A1|2006-06-02|2007-12-06|Carl Wellington|System and method for generating a terrain model for autonomous navigation in vegetation|
US20080027591A1|2006-07-14|2008-01-31|Scott Lenser|Method and system for controlling a remote vehicle|
US20120182392A1|2010-05-20|2012-07-19|Irobot Corporation|Mobile Human Interface Robot|
WO2014152254A2|2013-03-15|2014-09-25|Carnegie Robotics Llc|Methods, systems, and apparatus for multi-sensory stereo vision for robotics|
WO2019101651A1|2017-11-27|2019-05-31|Robert Bosch Gmbh|Method and device for operating a mobile system|
US9463574B2|2012-03-01|2016-10-11|Irobot Corporation|Mobile inspection robot|
CN113126640A|2019-12-31|2021-07-16|Beijing Sankuai Online Technology Co., Ltd.|Obstacle detection method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium|
CN111988524A|2020-08-21|2020-11-24|广东电网有限责任公司清远供电局|Unmanned aerial vehicle and camera collaborative obstacle avoidance method, server and storage medium|
Legal status:
2016-08-30| PLFP| Fee payment|Year of fee payment: 2 |
2017-02-10| PLSC| Search report ready|Effective date: 20170210 |
2017-08-31| PLFP| Fee payment|Year of fee payment: 3 |
2018-08-31| PLFP| Fee payment|Year of fee payment: 4 |
2019-08-29| PLFP| Fee payment|Year of fee payment: 5 |
2020-08-31| PLFP| Fee payment|Year of fee payment: 6 |
2021-06-14| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Title
FR1557615A|FR3039904B1|2015-08-07|2015-08-07|DEVICE AND METHOD FOR DETECTING OBSTACLES ADAPTED TO A MOBILE ROBOT|
FR1557615|2015-08-07|FR1557615A| FR3039904B1|2015-08-07|2015-08-07|DEVICE AND METHOD FOR DETECTING OBSTACLES ADAPTED TO A MOBILE ROBOT|
EP16750807.6A| EP3332299A1|2015-08-07|2016-08-08|Device and method for detecting obstacles suitable for a mobile robot|
PCT/EP2016/068916| WO2017025521A1|2015-08-07|2016-08-08|Device and method for detecting obstacles suitable for a mobile robot|