Patent abstract:
The drone includes a camera (14), an inertial unit (46) measuring the attitude angles of the drone, and an extractor module (52) delivering image data from a reduced-size, movable capture zone that is dynamically displaced in the direction opposite to the angle variations measured by the inertial unit. The module analyzes the image data of the useful zone to assign to each datum a weighting coefficient representative of a probability of belonging to the sky, and dynamically defines a segmentation boundary (F) of the useful zone between sky and ground based on these weighting coefficients. Two distinct groups of regions of interest (ROIs) are defined, one for the sky zone and one for the ground zone, and the dynamic exposure control means are driven according to the image data of the ROIs of one of these groups, in particular by excluding the ROIs of the sky zone.
Publication number: FR3041135A1
Application number: FR1558421
Filing date: 2015-09-10
Publication date: 2017-03-17
Inventors: Benoit Pochon; Axel Balley; Henri Seydoux
Applicant: Parrot SA
IPC main class:
Patent description:

The invention relates to the processing of digital images captured by a camera on board a mobile device, in particular a motorized flying machine such as a drone. The invention applies advantageously to the images collected by the front camera of a rotary-wing drone such as a quadricopter. The AR.Drone 2.0 and the Bebop Drone of Parrot SA, Paris, France are typical examples of such quadricopters. They are equipped with a series of sensors (accelerometers, 3-axis gyrometers, altimeters), a front camera capturing an image of the scene towards which the drone is directed, and a vertical-view camera capturing an image of the terrain overflown. They are provided with multiple rotors driven by respective motors that can be controlled in a differentiated manner in order to pilot the drone in attitude and speed. Various aspects of these drones are described in particular in WO 2010/061099 A2, EP 2 364 757 A1, EP 2 613 213 A1 and EP 2 613 214 A1 (Parrot SA).
The front-facing video camera can be used for piloting in "immersive mode", that is to say where the user uses the image from the camera in the same way as if he were himself on board the drone. It can also be used to capture sequences of images of a scene towards which the drone is moving. The user can thus use the drone in the same way as a camera or camcorder that, instead of being held in the hand, is carried by the drone. The images collected can be recorded and broadcast, posted on websites hosting video sequences, sent to other Internet users, shared on social networks, etc.
As these images are intended to be recorded and communicated, it is desirable that they exhibit the fewest possible defects, in particular defects caused by the behavior of the drone which, in practice, introduces various unwanted artifacts into the final image presented to the user.
These defects may be tolerable in an "immersive piloting" configuration. On the other hand, if the drone is used in the manner of a mobile video camera to capture sequences that will be recorded and played back later, these defects are extremely troublesome, and it is therefore desirable to reduce them to a minimum.
In the case of the aforementioned Bebop Drone, the camera is equipped with a fisheye-type hemispherical-field lens covering a field of about 180°, of which only part of the captured field is used, this part corresponding approximately to the angular sector captured by a conventional camera.
To do this, a particular window (hereinafter the "capture zone") is selected in the overall hemispherical image formed on the surface of the sensor. This window is mobile in rotation and in translation, and is permanently displaced according to the movements of the drone determined by the inertial unit, in the direction opposite to these movements. The image collected by the fisheye lens certainly undergoes the same oscillation and rotation movements as that of a conventional camera, but the displacement of the image zone is servo-controlled so as to compensate for these movements and thus produce an image that is stabilized with respect to the movements of the drone. The image of the capture zone, or more exactly of a useful part of it (hereinafter the "useful zone"), is then subjected to reprojection processing to compensate for the geometric distortions introduced by the fisheye lens: straightening the straight lines curved by the lens, restoring a uniform magnification between the center and the periphery of the image, etc. The final image obtained (the "rectified useful zone") is then delivered to the user for on-screen viewing, recording, etc. The result is a "virtual camera": a particular zone (the capture zone) is extracted from the total captured scene and dynamically displaced, in rotation and in translation, within the initial image in the direction opposite to the movements of the drone, so as to cancel the oscillations that would otherwise be observed in the final image presented to the user; an image-rectification process is then applied to deliver a representation of the scene free of geometric distortions and the like.
This technique is described in application FR 14 56302 of July 2, 2014 for "rotary-wing drone equipped with a video camera delivering stabilized image sequences", in the name of the Applicant.
The present invention aims at eliminating a particular problem which appears during certain maneuvers of the drone.
This problem concerns the control of the auto-exposure (AE) algorithm, which analyzes the brightness of different regions of the image and automatically chooses for the sensor a pair {exposure time, gain} allowing any scene to be captured with the same target brightness. This choice is made from an analysis of a reduced-definition version of the image (for example 64 x 48 pixels), hereinafter the "thumbnail", obtained by sub-sampling or decimation, from which brightness histograms and possibly other parameters are extracted; these various starting data are referred to hereinafter by the general term "statistics" of the image.
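As an illustration of how such statistics might be derived, here is a minimal Python sketch, assuming block-average decimation and a purely mean-based proportional AE policy; the function names, the target value of 118 and the exposure-first/gain-second ordering are assumptions for the sketch, not Parrot's implementation:

```python
import numpy as np

def imagette_stats(luma, w=64, h=48):
    """Decimate a full-resolution luminance plane to a 64 x 48 thumbnail
    by block averaging, then build its 256-bin brightness histogram."""
    H, W = luma.shape
    blocks = luma[:H - H % h, :W - W % w].reshape(h, H // h, w, W // w)
    thumbnail = blocks.mean(axis=(1, 3))
    hist, _ = np.histogram(thumbnail, bins=256, range=(0, 255))
    return thumbnail, hist

def ae_step(mean_luma, exposure_s, gain, target=118.0,
            exp_max=1 / 60, gain_max=8.0):
    """One proportional AE step: rescale the light product
    exposure x gain so the mean brightness moves toward the target,
    lengthening exposure before raising gain (an assumed policy)."""
    wanted = exposure_s * gain * target / max(mean_luma, 1e-3)
    new_exposure = min(wanted, exp_max)               # exposure first
    new_gain = min(wanted / new_exposure, gain_max)   # then analog gain
    return new_exposure, new_gain

# Example on a synthetic 1920 x 1080 frame:
frame = np.random.default_rng(0).integers(0, 256, (1080, 1920))
thumb, hist = imagette_stats(frame)
print(ae_step(thumb.mean(), exposure_s=1 / 120, gain=1.0))
```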
In the aforementioned case of a capture zone extracted from the overall image collected by the sensor, it is the content of the image from the capture zone that produces the statistics.
But, as explained above, this capture zone is larger than the final useful zone that will be presented to the user, so that the auto-exposure algorithm can make decisions based on elements of the scene that the user does not see, that is to say elements located inside the capture zone but outside the useful zone.
However, the scene that should be correctly exposed is the one seen by the user (the useful zone), and not the capture zone, which differs from it.
In the typical case of an image comprising a part of sky and a part of ground, the proportion between sky and ground will vary according to the inclination of the camera, which itself varies permanently according to the attitude of the drone. Indeed, if the drone moves from a hovering attitude to an inclined, nose-down attitude (this tilting produces a forward linear displacement), then the camera, lowered towards the ground (because it is fixed to the body of the drone), will capture a much larger proportion of ground. As the ground is darker, the servo-control of the auto-exposure algorithm will tend to compensate for this variation in brightness by an increase in exposure time and/or gain.
However, because of the displacement of the capture zone in the initial image and the reprojection performed to extract the useful zone, the user will always see the same scene. But this scene will have been temporarily overexposed because of the corrective action of the auto-exposure, an overexposure that will disappear when the drone returns to its initial attitude, and this without the contours of the image seen by the user having changed.
In addition, the limited dynamic range of the sensor does not make it possible to correctly expose both the sky and the ground in all situations.
HDR (high dynamic range) imaging techniques are known, which consist of merging shots taken in sequence at very short intervals with different exposure settings, each shot having its own dynamic range. But in the particular case of a camera on board a drone moving in flight, this technique is not usable, because it would introduce significant temporal artifacts due to the rapid movements of the drone (and therefore of the camera) and the vibrations to which it is subjected.
The aim of the invention is to propose a technique that makes it possible, dynamically and in real time, to determine which part of the image of the scene captured by the camera corresponds to the sky and which part corresponds to the ground, or else to determine that it is not possible to draw a boundary between sky and ground (as in the typical case of a drone flying indoors).
After this determination, the image is then segmented into a sky zone and a ground zone, so that the sky zone and the ground zone of the thumbnail can be treated differently, generally by choosing to expose only (or mainly) for the ground zone. To this end, the invention essentially proposes to calculate for each pixel or macropixel (hereinafter "image datum") a weighting coefficient representative of the probability that this pixel or macropixel belongs to the sky (or, conversely, to the ground), so as to be able to: - expose for the zone with the most detail (generally the ground, rarely the sky), and - expose evenly even if the sky/ground ratio varies greatly and quickly (typically during abrupt changes in the attitude of the drone), while compensating for the distortions introduced by the fisheye lens.
More precisely, the invention proposes a drone comprising, in a manner known per se: a camera fixed to the body of the drone, comprising a fisheye-type hemispherical-field lens pointing in a fixed direction relative to the body of the drone, a digital sensor collecting the image formed by the lens, and means for automatically controlling the exposure of the sensor; an inertial unit capable of measuring the Euler angles characterizing the instantaneous attitude of the drone with respect to an absolute terrestrial reference frame and delivering as output the drone's current attitude data; extractor means capable of defining, in said image formed by the lens over the extent of the sensor, the position of a reduced-size capture zone; servo-control means, receiving as input the current attitude data of the drone and able to dynamically modify the position and the orientation of the capture zone in said image, in the direction opposite to that of the changes in the values of the angles measured by the inertial unit; reprojection means, receiving as input image data, formed of pixels or macropixels, of a useful zone extracted from the capture zone, and delivering as output corresponding rectified image data, compensated for the geometric distortions introduced by the fisheye lens; and analysis means, able to define a plurality of regions of interest (ROIs) distributed in the useful zone, and to control the dynamic exposure control means as a function of the respective image data contained, in each ROI, in a thumbnail derived from the useful zone.
In a manner characteristic of the invention, the analysis means are furthermore able to: analyze said current image data of the useful zone so as to assign to each of these data a weighting coefficient representative of a probability of belonging to the sky; dynamically define, at each image, a segmentation boundary of the useful zone between a sky zone and a ground zone, as a function of the weighting coefficients assigned to each image datum; define two distinct groups of ROIs, one for the sky zone and one for the ground zone; and control the dynamic exposure control means according to the respective image data contained, in the ROIs of one selected group among these, in the thumbnail derived from the useful zone.
The weighting coefficient representative of a probability of belonging to the sky may be a coefficient that is a function of: the position of the current image datum analyzed by the analysis means with respect to a theoretical horizon direction delivered by the inertial unit; and/or the image data in the neighborhood of the current image datum analyzed by the analysis means; and/or the altitude of the drone relative to the ground, the drone comprising means capable of delivering an estimated value of this altitude; and/or the illuminant of the scene captured by the camera, the camera comprising means for analyzing this illuminant; and/or the inclination of the drone in pitch relative to the horizontal.
The ROIs can be ROIs whose size and/or positioning are dynamically defined at each image in the useful zone, or ROIs defined by a grid of ROIs distributed uniformly and in a predetermined manner in the useful zone.
An embodiment of the present invention will now be described with reference to the appended drawings in which the same references designate identical or functionally similar elements from one figure to another.
Figure 1 is an overall view showing the drone and the associated remote-control device for piloting it remotely.
Figures 2a and 2b illustrate the changes in the camera's sighting direction caused by a forward tilting of the drone, for example during an acceleration phase.
FIG. 3 illustrates in (a1) - (a3) the successive steps of windowing and correction of the distortions applied to the image formed on the sensor of the camera of the drone so as to produce a rectified image, and in (b1) - (b3) the corresponding steps when the attitude of the drone changes, with the camera lowered to the ground.
FIG. 4 illustrates in (a) and (b) the brightness histograms respectively obtained by analysis of the capture zones (a2) and (b2) of FIG. 3.
Figure 5 illustrates in block diagram form the various modules involved in the implementation of the invention.
Figure 6 is a flowchart explaining the main successive steps of implementation of the invention.
Figure 7 illustrates more precisely the different analysis steps to arrive at a weighting of each pixel of the image.
Figure 8 illustrates the analysis of the image according to a first variant, implementing regions of interest automatically defined within the capture zone.
Figure 9 illustrates the analysis of the image according to a second variant, implementing a gridding of the capture zone.
Exemplary embodiments of the present invention will now be described.
In Figure 1, reference numeral 10 generally denotes a drone, which is for example a quadricopter such as the Bebop Drone model from Parrot SA, Paris, France. This drone comprises four coplanar rotors 12 whose motors are controlled independently by an integrated navigation and attitude-control system. It is provided with a front-facing camera 14 for obtaining an image of the scene towards which the drone is oriented.
The drone also includes a vertical-view camera (not shown) pointing downwards, capable of capturing successive images of the terrain overflown and used in particular to evaluate the speed of the drone relative to the ground. Inertial sensors (accelerometers and gyrometers) make it possible to measure, with a certain accuracy, the angular velocities and the attitude angles of the drone, that is to say the Euler angles (pitch φ, roll θ and yaw ψ) describing the inclination of the drone with respect to a horizontal plane of a fixed terrestrial reference frame. An ultrasonic rangefinder disposed under the drone also provides a measurement of the altitude relative to the ground.
The drone 10 is piloted by a remote-control device 16 provided with a touch screen 18 displaying the image picked up by the front camera 14, with a number of superimposed symbols enabling the activation of piloting commands by simple contact of a user's finger 20 with the touch screen 18. The device 16 is provided with means for a radio link with the drone, for example of the Wi-Fi (IEEE 802.11) local-network type, for the bidirectional exchange of data: from the drone 10 to the apparatus 16, in particular for transmitting the image picked up by the camera 14, and from the apparatus 16 to the drone 10 for sending piloting commands. The remote-control device 16 is also provided with inclination sensors making it possible to control the attitude of the drone by imparting to the apparatus corresponding inclinations about the roll and pitch axes, it being understood that the two longitudinal and transverse components of the horizontal velocity of the drone 10 will be intimately related to the inclination about the two respective pitch and roll axes. The drone is piloted by: a) rotation about a pitch axis 22, to move it forwards or backwards; b) rotation about a roll axis 24, to shift it to the right or to the left; c) rotation about a yaw axis 26, to pivot the main axis of the drone to the right or to the left; and d) downward or upward translation by changing the throttle setting, so as to respectively reduce or increase the altitude of the drone.
When these piloting commands are applied by the user from the remote-control device 16, the commands a) and b) for pivoting about the pitch axis 22 and the roll axis 24 are obtained by inclinations of the apparatus 16 respectively about its longitudinal axis 28 and its transverse axis 30: for example, to make the drone advance, it suffices to tilt the remote-control device 16 forwards by tipping it about the axis 28; to shift it to the right, it suffices to tilt the remote-control device 16 by tipping it about the axis 30 towards the right, and so on. The commands c) and d), for their part, result from actions applied by contact of the user's finger 20 with corresponding specific areas of the touch screen 18.
The drone also has an automatic and autonomous hovering stabilization system, activated in particular as soon as the user removes his finger from the touch screen of the apparatus, automatically at the end of the takeoff phase, or in the event of an interruption of the radio link between the apparatus and the drone.
Figure 2a schematically illustrates, in profile, the attitude of the drone when it is stationary, in a hovering state.
The field covered by a front camera 14 of conventional type is shown schematically at 36, for example a camera covering a field of 54° and whose sighting axis δ is centered on the horizon.
If, as shown in Figure 2b, the drone progresses forwards with a non-zero horizontal speed, then by design the axis of the drone will be inclined forwards by an angle φ (pitch angle) relative to the vertical V. This forward inclination, shown schematically by the arrow 38, implies an inclination of the same value, shown schematically by the arrow 40, of the axis δ of the camera with respect to the plane of the horizon HZ. It will thus be understood that with the maneuvers of the drone, its accelerations, decelerations, etc., the axis δ oscillates permanently about the direction of the horizon HZ, which is reflected in the image by permanent up-and-down oscillation movements.
Similarly, if the drone shifts to the right or to the left, this movement is accompanied by a pivoting about the roll axis 24, which is reflected in the image by rotations of the captured scene in one direction or the other.
To overcome these drawbacks, it has been proposed, as explained in the above-mentioned application FR 14 56302, to equip the camera with a fisheye-type hemispherical-field lens covering a field of about 180°, shown schematically at 42 in Figure 2a. The image captured by the camera equipped with this fisheye lens certainly undergoes the same oscillation and rotation movements as that of a conventional camera, but only part of the field captured by this camera is used, by selecting a particular window corresponding to the angular sector 36 captured by a conventional camera, which is dynamically displaced in the hemispherical image in the direction opposite to the movements of the drone as determined by the inertial unit, in order to cancel the oscillations that would otherwise be observed in the image.
Thus, in the case illustrated in Figure 2b, where the drone pitches downwards by a pitch angle φ (arrow 38) with respect to the vertical V, the capture window is moved upwards (arrow 44) by an angle of the same value, thereby bringing the central axis of the sector 36 of a "virtual camera" covering the field corresponding to the image zone back towards the direction of the horizon HZ.
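The relationship between the attitude angles and the window displacement can be sketched as follows, assuming an equidistant fisheye projection r = f·θ (the patent does not name the lens model; `focal_px` and the simple sign convention are assumptions):

```python
import math

def capture_window_offset(pitch_rad, roll_rad, focal_px):
    """Attitude-to-window displacement under an assumed equidistant
    fisheye projection r = focal_px * theta: a nose-down pitch shifts
    the capture window up by the same angular amount, and a roll is
    countered by rotating the window the opposite way."""
    dy = -pitch_rad * focal_px   # vertical shift in pixels (y grows downwards)
    rot = -roll_rad              # window counter-rotation, in radians
    return dy, rot

# 20 degrees nose-down with a fisheye scale of 1000 px/rad:
dy, rot = capture_window_offset(math.radians(20.0), 0.0, 1000.0)
print(f"shift window {dy:+.0f} px, rotate {rot:+.3f} rad")
```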
Figure 3 shows in (a1) an example of a scene, as collected by the sensor of the video camera equipped with the fisheye lens.
As can be seen, the image I of this scene exhibits very strong geometric distortions, inherent in the hemispherical or quasi-hemispherical coverage of the fisheye lens, as projected onto the flat surface of the sensor.
Only a portion of this image I produced by the fisheye lens will be used. This portion is determined according to i) the direction in which the "virtual camera" is pointed, ii) its field of view (shown at 36 in Figures 2a and 2b) and iii) its width/height ratio. A "capture zone" ZC containing raw pixel data is thus defined, which includes the "raw useful zone" ZUb corresponding to the field of the "virtual camera" after compensation for the geometric distortions introduced by the fisheye lens.
Note that it is not necessary to capture all the pixels of the image I formed on the sensor, but only a fraction of them, corresponding to the capture zone ZC, for example a window ZC of approximately 2 Mpixel, corresponding to HD quality (1920 x 1080 pixels), extracted from an image I produced by a sensor whose resolution will typically be 14 Mpixel (4608 x 3288 pixels). Only the pixel data of the capture zone ZC actually needed is therefore transferred, and this data can then be refreshed at a rate of 30 frames/second without any particular difficulty. It is thus possible to choose a high-resolution sensor while maintaining a high image rate.
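A quick order-of-magnitude check of this bandwidth argument, using the rounded figures quoted above:

```python
# Pixel throughput at 30 frames/s, with the rounded figures from the text:
full_readout = 14_000_000 * 30   # whole 14 Mpixel sensor: 420 Mpixel/s
zc_readout = 2_000_000 * 30      # 2 Mpixel capture zone ZC: 60 Mpixel/s
print(full_readout / zc_readout) # -> 7.0, a seven-fold bandwidth saving
```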
Views (a2) and (a3) of Figure 3 illustrate the processing performed on the pixel data of the capture zone ZC to arrive at the final image compensated for geometric distortions: from the pixel data transferred from the capture zone ZC (view (a2)), an algorithm extracts the pixel data of the raw useful zone ZUb, applies to them a mesh of triangles (a technique known per se), and then straightens the image by stretching each triangle, to give a rectified useful image ZUr (view (a3)) with rectified pixel data. The strongly curved horizontal lines of the fisheye image are thus corrected to make them straight, producing an image corresponding to natural vision, free of geometric distortions.
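The patent performs this rectification by stretching a mesh of triangles; the sketch below obtains the same geometric effect with a simpler per-pixel inverse mapping, again under an assumed equidistant fisheye model (all names and parameter values are illustrative):

```python
import numpy as np

def rectify(fisheye, f_fish, f_out, out_w, out_h, cx, cy):
    """Inverse-map a rectilinear output view into the fisheye image,
    assuming an equidistant projection r = f_fish * theta.
    Nearest-neighbour sampling keeps the sketch short."""
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(float)
    x = (xs - out_w / 2) / f_out          # ideal pinhole ray directions
    y = (ys - out_h / 2) / f_out
    theta = np.arctan(np.hypot(x, y))     # angle from the optical axis
    phi = np.arctan2(y, x)                # azimuth around the axis
    r = f_fish * theta                    # equidistant fisheye radius
    u = np.clip((cx + r * np.cos(phi)).astype(int), 0, fisheye.shape[1] - 1)
    v = np.clip((cy + r * np.sin(phi)).astype(int), 0, fisheye.shape[0] - 1)
    return fisheye[v, u]

# Example: a 1920 x 1080 rectified view from a 14 Mpixel fisheye frame
fish = np.zeros((3288, 4608), dtype=np.uint8)
view = rectify(fish, f_fish=1000.0, f_out=1200.0,
               out_w=1920, out_h=1080, cx=2304.0, cy=1644.0)
```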
Views (b1)-(b3) of Figure 3 are homologous to views (a1)-(a3), in a configuration where the attitude of the drone has changed because of a forward tilt (rotation about its pitch axis), an inclination consecutive, for example, to a transition from a stationary, hovering configuration to a configuration of forward progression (the linear speed of the drone being all the greater as its inclination is strong).
As illustrated in (b1), to compensate for this downward inclination of the drone, the capture zone ZC is displaced towards the top of the image, i.e. in the direction opposite to that of the inclination of the drone. While the relative position of the raw useful zone ZUb remains substantially the same within the capture zone ZC (so as to keep following the target scene), the capture zone now includes a much larger share of ground S than of sky C: if views (a2) and (b2) are compared, it can be seen that in the initial configuration (view (a2)) the sky/ground ratio is about 50/50 %, whereas in the modified configuration (view (b2)) the sky/ground ratio is about 25/75 %. Moreover, if it is shifted strongly upwards, the capture zone may include zones X that are located outside the region of the circular image formed by the fisheye lens on the sensor.
On the other hand, the final image ZUr of the rectified useful zone (view (b3)) will be substantially identical to what it was (view (a3)) before the drone tilted forwards.
FIG. 4 illustrates the histograms obtained by analyzing the brightness of the pixels of the thumbnail resulting from the capture zone ZC, respectively in (a) in the case of view (a2) and in (b) in the case of view (b2). As can be seen in this figure, the forward pitching of the drone results in a significant change in the brightness histogram, with a shift to the left of the average value M due to the increase in the proportion of ground. The auto-exposure algorithm will interpret this change in the average value M as a darkening of the image, which will be automatically compensated by an increase in the exposure time and/or the camera sensitivity.
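A two-value toy model makes the effect concrete (the luminance values 220 and 80 are arbitrary illustrative figures, not measurements from the patent):

```python
def mean_luma(sky_frac, sky_luma=220.0, ground_luma=80.0):
    """Mean brightness of a scene modelled as two uniform regions."""
    return sky_frac * sky_luma + (1.0 - sky_frac) * ground_luma

m_hover = mean_luma(0.50)    # capture zone while hovering: ~50 % sky
m_dive = mean_luma(0.25)     # capture zone pitched forward: ~25 % sky
# A mean-based AE brightens the whole frame by this factor, although the
# useful zone shown to the user frames exactly the same scene:
print(m_hover, m_dive, m_hover / m_dive)   # 150.0 115.0 ~1.30
```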
Therefore, the final images (a3) and (b3) respectively obtained (images of the rectified useful zone ZUr), while they present the user with the same framing of the scene, differ in their exposure setting, the image of view (b3) being lighter than that of view (a3) because of the greater proportion of ground in the capture zone of view (b2).
Figure 5 illustrates in block diagram form the various modules involved in the implementation of the invention.
The front camera 14 of the drone delivers a raw image signal corresponding to the image I. This camera, mechanically fixed to the body of the drone, is subject to angular displacements which are measured by an inertial measurement unit (IMU) 46 fixed to the body of the drone and therefore to the camera. The rotations of the camera are given by the pitch angle φ, the roll angle θ and the yaw angle ψ describing the inclination of the drone in all three dimensions with respect to a fixed terrestrial reference frame (Euler angles). These data are applied to an angle-prediction module 48 driving a module 50 for calculating the position of the capture zone ZC in the image I. A video-processing module 52 receives as input the raw image signal I and performs, as a function of the position of the capture zone ZC calculated by the module 50, various operations of windowing, image stabilization, and extraction and rectification of the useful zone, so as to deliver as output a useful image signal ZUr for transmission to the user, possible display and recording.
The module 52 also provides control (shown schematically by the feedback loop 54) of the operating parameters of the camera, in particular the control of the auto-exposure (AE). Furthermore, for the implementation of the present invention, the inertial unit 46 is also able to deliver to the module 52 an estimate of the direction of the horizon (HZ in Figure 2b), or "theoretical horizon", corresponding to the orientation, relative to the body of the drone, of the plane of the terrestrial reference frame with respect to which the pitch angle φ and roll angle θ characterizing the instantaneous attitude of the drone are measured.
Finally, the drone is provided with altitude-determination means 54, for example means as described in EP 2 644 240 A1 (Parrot), operating on the basis of the data produced by an ultrasonic telemetry sensor and a barometric sensor, applied to a Kalman filter so as to deliver an estimated instantaneous absolute altitude value of the drone in a terrestrial reference frame.
Figure 6 is a flowchart explaining the main successive steps of implementation of the invention.
This flowchart 100 comprises an initial step (block 102) of collection of the raw image I by the camera, followed by a step (block 104) of extraction, from this raw image, of the capture zone ZC as a function of the attitude data of the drone delivered by the IMU inertial unit, and of estimation of the outline of the useful zone ZU corresponding to the image of the scene to be delivered to the user. The following step (block 106), characteristic of the invention, consists of a pixel-by-pixel (or macropixel-by-macropixel) analysis of the image data of the useful zone ZU, in the manner that will be explained in more detail below. The result of this analysis is the assignment to each pixel (or macropixel) of a weighting coefficient reflecting a greater or lesser probability that the pixel (or macropixel) belongs to a sky region of the image, according to several parameters such as the direction of the horizon, the altitude of the drone, the content of the neighboring pixels (or macropixels), the instantaneous inclination of the drone, etc., and the determination of a segmentation boundary between sky and ground. The next step (block 108) consists in defining (according to techniques known per se, which will not be described in more detail) a plurality of regions of interest, ROIs, which are geometric selections of reduced-size zones in the thumbnail derived from the image to be analyzed. A brightness histogram is established for each of these ROIs from the thumbnail statistics.
In a manner characteristic of the invention, the ROIs used by the auto-exposure algorithm are selected or weighted according to their position with respect to the boundary between sky and ground determined in the previous step 106. The final result is a limited list of ROIs that will be used by the auto-exposure algorithm to adjust the exposure level of the sensor by analyzing and comparing the histograms of these different ROIs, according to techniques known per se. Typically, the selected ROIs will be those corresponding to the ground zone below the segmentation boundary, insofar as this zone contains the most detail and is the one for which it is desired to optimize the exposure (alternatively, one could choose to expose for the sky rather than for the ground, or to combine in variable proportions the analysis parameters of the ROIs in the ground zone and in the sky zone).
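A minimal sketch of this ROI filtering step, assuming rectangular ROIs in thumbnail coordinates and a per-column boundary height (both layouts are assumptions made for the sketch):

```python
def select_ground_rois(rois, boundary_row):
    """Keep only the ROIs lying entirely in the ground zone, i.e. below
    the sky/ground segmentation boundary. `rois` are (x, y, w, h)
    rectangles in thumbnail coordinates (y grows downwards) and
    `boundary_row` gives the boundary height for each column."""
    kept = []
    for x, y, w, h in rois:
        # the ROI's top edge must be below the boundary over its whole width
        if all(y >= boundary_row[c] for c in range(x, x + w)):
            kept.append((x, y, w, h))
        # ROIs straddling or above the boundary are simply dropped here; a
        # weighted scheme could down-weight them instead of excluding them.
    return kept

# Flat boundary at row 20 of a 64 x 48 thumbnail: the upper ROI is excluded
rois = [(4, 4, 12, 8), (10, 26, 12, 8), (40, 30, 12, 8)]
print(select_ground_rois(rois, [20] * 64))
```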
FIG. 7 more precisely illustrates the manner in which the pixel-by-pixel analysis of block 106 is performed, to assign to each pixel located within the limits of the useful zone ZU a weighting coefficient specific to its probability of belonging to a sky zone of the captured image. The flowchart 200 includes a first step (test 202) of determining whether the drone is indoors or outdoors (since it is only in the latter case that it will be necessary to search for a probability of belonging to the sky). This determination can be made in particular by an analysis of the illuminant of the scene, which is a parameter available at the output of the automatic white balance (AWB) algorithm implemented by the camera: thus, a D50 or D65 illuminant typically corresponds to natural daylight, indicating that the drone is very probably outdoors. If it is determined in test 202 that the drone is moving indoors, then a zero weight G(i,j) = 0 is assigned to the pixel (i,j), because in this case its probability of belonging to a sky zone is zero (block 204). Otherwise, it is then determined whether, as a function of its position in the image, the pixel is situated above or below the theoretical horizon as determined by the IMU inertial unit (test 206).
If this pixel is located below the horizon, a zero weight G(i,j) = 0 is assigned to it (block 204). Otherwise, the weight G(i,j) is set to 1 (block 208) to indicate a probability - at this stage - of sky membership of 100 %.
This probability must however be weighted, because even if the pixel corresponds to an outdoor zone situated above the horizon, it may lie in a region comparable to a ground zone, for example if it belongs to the image of a building, a tree, etc., located above the horizon: these are zones with detail, which should be well exposed, and which in any case are not part of the sky although located above the horizon.
To do this, a first weighting is applied (block 210) as a function of the altitude. This weighting is based on the observation that the higher a drone flies, the more likely it is that the horizon is clear, that is to say that the part of the image situated above the horizon is not filled with images of buildings, trees, hills, etc. The weight G(i,j) = 1 is then weighted by a function w(z) varying monotonically (linearly, in steps, or according to any other appropriate progression) as a function of the altitude z, with values between w(0) = 0 and w(zmax) = 1, zmax being a maximum altitude value, for example zmax = 300 meters. The result is a weighting coefficient value G'(i,j) = G(i,j) x w(z).
A second weighting w(n) can be applied (block 212) to the value G'(i,j) obtained at block 210, in particular as a function of an analysis of the neighborhood of the pixel (i,j), that is to say an analysis (according to known techniques) of the variance, gradient, etc. of the pixels of the image surrounding the pixel (i,j) subject to the weighting. Still other weightings may be applied, for example depending on the attitude of the drone, in particular its pitch angle φ given by the IMU inertial unit: if this datum indicates that the drone is, at the moment of the analysis, strongly tilted forwards, the probability that a pixel belongs to a sky zone is lower than if the drone is hovering (horizontal attitude), or a fortiori in a nose-up attitude during a phase of strong deceleration or of rearward flight. The analysis of flowchart 200 is performed for each pixel (or macropixel) of the image of the useful zone, which makes it possible to obtain, at the end of the analysis, a map of the weights of each pixel of this image, reflecting the probability that the pixel considered belongs to the sky. The auto-exposure can then be implemented taking this weighting into account, for example (though not exclusively) by controlling the auto-exposure on the basis of image data drawn exclusively or mainly from outside the sky zone.
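Flowchart 200 can be condensed into the following sketch, which computes the whole weight map at once with NumPy. The daylight-illuminant whitelist, the linear w(z) ramp, the variance-based damping constant and the representation of the horizon as one row index per column are illustrative assumptions, not values given by the patent:

```python
import numpy as np

DAYLIGHT_ILLUMINANTS = {"D50", "D65"}   # AWB output taken to mean "outdoors"

def sky_weight_map(luma, horizon_row, altitude_m, illuminant, z_max=300.0):
    """Weight map G(i, j) following flowchart 200: zero indoors (block 204),
    one above the theoretical horizon (block 208), then scaled by the
    altitude ramp w(z) (block 210) and a neighborhood term w(n) (block 212)."""
    h, w = luma.shape
    if illuminant not in DAYLIGHT_ILLUMINANTS:
        return np.zeros((h, w))                        # indoors: no sky
    horizon_row = np.asarray(horizon_row)              # one row index per column
    rows = np.arange(h)[:, None]
    g = (rows < horizon_row[None, :]).astype(float)    # above horizon -> 1
    g *= min(altitude_m / z_max, 1.0)                  # w(z): linear ramp
    pad = np.pad(luma.astype(float), 1, mode="edge")   # 3 x 3 neighborhoods
    neigh = np.stack([pad[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    g *= 1.0 / (1.0 + neigh.var(axis=0) / 100.0)       # w(n): damp detail
    return g

# Example: 64 x 48 thumbnail with a flat horizon at row 20, drone at 150 m
luma = np.tile(np.linspace(255.0, 0.0, 48)[:, None], (1, 64))
print(sky_weight_map(luma, np.full(64, 20), 150.0, "D65").max())
```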
As indicated above, the self-exposure algorithm uses a plurality of regions of interest ROIs distributed in the image.
In a first implementation variant, illustrated in FIG. 8, the auto-exposure operates from ROIs whose size and/or positioning are dynamically defined i) inside the thumbnail derived from the useful zone ZUb and ii) in a manner characteristic of the invention, below the segmentation boundary F which separates the sky zone (above the boundary F) from the ground zone (below the boundary F).
The ROIs are distributed so as to lie totally or partially within the useful zone ZUb; that is to say, if the algorithm for defining the ROIs generates ROIs outside the useful zone ZUb, these are excluded from the subsequent analysis for the control of the auto-exposure. In addition, with respect to the boundary F, if the ROI-generation algorithm defines for example the nine regions ROI1 ... ROI9 shown in Figure 8, then the auto-exposure control will operate only on the basis of the ROIs located in the ground zone, below the boundary F, namely the five ROIs so situated in this example. The exposure will thus be set on the parts of the image with the most detail, which are generally those that interest the user and for which it is appropriate to optimize the exposure.
FIG. 9 illustrates a second implementation variant, in which the ROIs are no longer defined dynamically and with a variable size, but in the form of a regular gridding of the thumbnail derived from the capture zone ZC, with a grid GR in which each elementary square or rectangle ROI(i,j) will be used or not for the control of the auto-exposure, depending on whether it is above or below the boundary F between the sky zone and the ground zone.
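A sketch of this grid variant, assuming a flat boundary for brevity (the boundary F is in general not a straight horizontal line):

```python
import numpy as np

def grid_roi_mask(grid_rows, grid_cols, boundary_row, thumb_h):
    """Boolean mask over a regular ROI grid GR: True for the cells lying
    entirely below the boundary, which alone feed the auto-exposure."""
    cell_h = thumb_h / grid_rows
    top_edges = np.arange(grid_rows) * cell_h    # top edge of each grid row
    below = top_edges >= boundary_row            # whole cell row below F
    return np.tile(below[:, None], (1, grid_cols))

# 6 x 8 grid over a 48-row thumbnail, boundary at row 20:
print(grid_roi_mask(6, 8, boundary_row=20, thumb_h=48).astype(int))
```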
Claims (8):
[1" id="c-fr-0001]
A drone (10) comprising: a camera (14) connected to the body of the drone, comprising: a fisheye-type hemispherical field lens pointing in a fixed direction relative to the body of the drone; a digital sensor collecting the image (I) formed by the lens; and means for automatically controlling the exposure of the sensor; an inertial unit (16) capable of measuring the Euler angles (φ, θ, ψ) characterizing the instantaneous attitude of the drone with respect to an absolute terrestrial reference and delivering at the output data of current attitude of the drone; extracting means (52) capable of defining, in said image (I) formed by the objective on the extent of the sensor, the position of a reduced-size capture zone (ZC); servo-control means (48, 50, 52), receiving as input the current attitude data of the drone and able to dynamically modify the position and the orientation of the capture zone (ZC) in said image (I) in a direction contrary to that of the changes of the values of the angles measured by the inertial unit; - reprojection means (52), receiving as input image data, formed of pixels or macropixels, a useful area (ZUb) extracted from the capture area (ZC) and outputting data of corresponding corrected image (ZUr), compensated for geometric distortions introduced by the fisheye lens; and analysis means, able to define a plurality of regions of interest ROI (ROh ... ROI9; ROI (i, j)) distributed in the useful zone, and to control the dynamic exposure control means according to the respective image data contained in a picture taken from the useful area, in each ROI, characterized in that the analysis means are furthermore capable of: analyzing said current image data of the useful area of in order to attribute to each of these data a weighting coefficient representative of a probability of belonging to the sky; dynamically defining at each image a segmentation boundary (F) of the useful zone between a sky zone and a ground zone, as a function of the weighting coefficients assigned to each image datum; - define two distinct groups of ROI, respectively for the sky zone and for the soil zone; and - controlling the dynamic exposure control means according to the respective image data contained in the image from the useful area, in ROIs of one of said selected groups.
[2" id="c-fr-0002]
2. The drone of claim 1, wherein said weighting coefficient representative of a probability of belonging to the sky is a coefficient depending on the position of the current image datum analyzed by the analysis means with respect to a theoretical horizon direction (HZ) delivered by the inertial unit.
[3" id="c-fr-0003]
3. The drone of claim 1, wherein said weighting coefficient representative of a probability of belonging to the sky is a coefficient depending on the image data of the neighborhood (n) of the current image datum analyzed by the analysis means.
[4" id="c-fr-0004]
4. The drone of claim 1, wherein: the drone comprises altitude-determination means (54), able to deliver an estimated value of the altitude (z) of the drone relative to the ground; and said weighting coefficient representative of a probability of belonging to the sky is a coefficient depending on said altitude.
[5" id="c-fr-0005]
5. The drone of claim 1, wherein: the camera comprises means for analyzing the illuminant of the scene captured by the camera; and said weighting coefficient representative of a probability of belonging to the sky is a coefficient depending on said illuminant.
[6" id="c-fr-0006]
6. The drone of claim 1, wherein said weighting coefficient representative of a probability of belonging to the sky is a coefficient depending on the inclination of the drone in pitch relative to the horizontal.
[7" id="c-fr-0007]
7. The drone of claim 1, wherein the ROIs are ROIs (ROI1 ... ROI9) whose size and/or positioning are dynamically defined at each image in the useful zone.
[8" id="c-fr-0008]
8. The drone of claim 1, wherein the ROIs are ROIs (ROI(i,j)) defined by a grid (GR) of ROIs distributed uniformly and in a predetermined manner in the useful zone.
Similar technologies:
Publication number | Publication date | Patent title
EP3142354B1|2018-03-14|Drone with forward-looking camera with segmentation of the image of the sky in order to control the autoexposure
EP3142353B1|2019-12-18|Drone with forward-looking camera in which the control parameters, especially autoexposure, are made independent of the attitude
EP3048789B1|2016-12-28|Drone provided with a video camera and means to compensate for the artefacts produced at the greatest roll angles
EP2933775B1|2016-12-28|Rotary-wing drone provided with a video camera supplying stabilised image sequences
EP3078402B1|2017-10-04|System for piloting an fpv drone
EP3171589B1|2018-06-06|Drone equipped with a video camera sending sequences of images corrected for the wobble effect
EP3316068B1|2019-03-06|Autonomous system for collecting moving images by a drone with target tracking and upkeep of the target collection angle
FR3028767A1|2016-05-27|VIDEO SYSTEM FOR DRIVING A DRONE IN IMMERSIVE MODE
EP3113103A1|2017-01-04|High-resolution camera unit for drone, with correction of wobble distortion
EP3112803A1|2017-01-04|Camera unit capable of being installed in a drone for mapping a terrain and method for managing the collection of images by a camera unit
EP3142356A1|2017-03-15|Method for determining an exposure time of a camera mounted on a drone, and associated drone
EP3273318B1|2021-07-14|Autonomous system for collecting moving images by a drone with target tracking and improved target positioning
US10949978B2|2021-03-16|Automatic background replacement for single-image and multi-view captures
EP3306428A1|2018-04-11|Autonomous system for collecting moving images by a drone with target tracking and upkeep of the target collection angle
FR3027143A1|2016-04-15|MOBILE APPARATUS, IN PARTICULAR ROTATING SAIL DRONE, PROVIDED WITH A VIDEO CAMERA DELIVERING IMAGE SEQUENCES DYNAMICALLY CORRECTED FOR THE "WOBBLE" EFFECT|
EP3217658A1|2017-09-13|Method for encoding and decoding video of a drone, and devices thereof
EP3273317A1|2018-01-24|Autonomous system for taking moving images, comprising a drone and a ground station, and associated method
FR3055078A1|2018-02-16|IMAGE CAPTURE METHOD(S), COMPUTER PROGRAM, AND ASSOCIATED ELECTRONIC VIDEO CAPTURE SYSTEM|
EP3392728A1|2018-10-24|Method for piloting a rotary wing drone, related computer program, electronic apparatus and drone
FR3020168A1|2015-10-23|ROTATING WING DRONE WITH VIDEO CAMERA DELIVERING STABILIZED IMAGE SEQUENCES
FR3079943A1|2019-10-11|ELECTRONIC DEVICE AND METHOD FOR CONTROLLING A DRONE WITH TRAVELING COMPENSATION EFFECT, ASSOCIATED COMPUTER PROGRAM
FR2863750A1|2005-06-17|Images stabilizing method for e.g. smear camera, involves subjecting each line or column to inverse rotation or translation for stabilizing with respect to adjacent line or column, according to rotational and translatory movement of block
Patent family:
Publication number | Publication date
EP3142354A1|2017-03-15|
US20170078552A1|2017-03-16|
EP3142354B1|2018-03-14|
FR3041135B1|2017-09-29|
JP2017085550A|2017-05-18|
US10171746B2|2019-01-01|
CN106534709A|2017-03-22|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
JP2003267295A|2002-03-14|2003-09-25|Foundation For Nara Institute Of Science & Technology|Remote operation system|
US20120307042A1|2011-06-02|2012-12-06|Hon Hai Precision Industry Co., Ltd.|System and method for controlling unmanned aerial vehicle|
US20140176722A1|2012-12-25|2014-06-26|Casio Computer Co., Ltd.|Imaging device, imaging control method and storage medium|
DE102004017730B4|2004-04-10|2006-05-24|Christian-Albrechts-Universität Zu Kiel|Method for rotational compensation of spherical images|
JP2007304407A|2006-05-12|2007-11-22|Alpine Electronics Inc|Automatic exposure device and method for vehicle-mounted camera|
FR2938774A1|2008-11-27|2010-05-28|Parrot|DEVICE FOR CONTROLLING A DRONE|
FR2957266B1|2010-03-11|2012-04-20|Parrot|METHOD AND APPARATUS FOR REMOTE CONTROL OF A DRONE, IN PARTICULAR A ROTATING SAIL DRONE.|
FR2985329B1|2012-01-04|2015-01-30|Parrot|METHOD FOR INTUITIVE CONTROL OF A DRONE USING A REMOTE CONTROL APPARATUS|
FR2985581B1|2012-01-05|2014-11-28|Parrot|METHOD FOR CONTROLLING A ROTARY SAILING DRONE FOR OPERATING A SHOOTING VIEW BY AN ON-BOARD CAMERA WITH MINIMIZATION OF DISTURBING MOVEMENTS|
FR2988618B1|2012-03-30|2014-05-09|Parrot|ALTITUDE ESTIMER FOR MULTI-ROTOR ROTOR SAIL DRONE|
FR3020169A1|2014-04-16|2015-10-23|Parrot|ROTATING WING DRONE WITH VIDEO CAMERA DELIVERING STABILIZED IMAGE SEQUENCES|
US9635276B2|2015-06-10|2017-04-25|Microsoft Technology Licensing, Llc|Determination of exposure time for an image frame|
FR3038482B1|2015-06-30|2017-08-11|Parrot|CAMERA BLOCK CAPABLE OF INBOARDING A DRONE FOR MAPPING A FIELD AND METHOD OF MANAGING IMAGE CAPTURE BY A CAMERA BLOCK|
USD814970S1|2016-02-22|2018-04-10|SZ DJI Technology Co., Ltd.|Aerial vehicle|
US10538326B1|2016-08-31|2020-01-21|Amazon Technologies, Inc.|Flare detection and avoidance in stereo vision systems|
JP6384000B1|2017-05-24|2018-09-05|エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd|Control device, imaging device, imaging system, moving object, control method, and program|
CN110679140B|2017-06-08|2021-04-13|富士胶片株式会社|Image pickup apparatus, control method of image pickup apparatus, and storage medium|
US20200148348A1|2017-06-13|2020-05-14|PearTrack Security Systems, Inc.|Tethered Drone System|
CN107340711A|2017-06-23|2017-11-10|中国人民解放军陆军军官学院|A kind of minute vehicle attitude angle automatic testing method based on video image|
CN109496424B|2017-09-30|2021-06-18|深圳市大疆创新科技有限公司|Light supplement control method, light supplement control module and unmanned aerial vehicle|
US10921763B1|2017-10-25|2021-02-16|Alarm.Com Incorporated|Baby monitoring using a home monitoring system|
JP6496955B1|2017-12-19|2019-04-10|エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd|Control device, system, control method, and program|
USD856848S1|2018-01-05|2019-08-20|SZ DJI Technology Co., Ltd.|Aerial vehicle|
USD861573S1|2018-01-19|2019-10-01|SZ DJI Technology Co., Ltd.|Aerial vehicle|
USD908588S1|2018-06-26|2021-01-26|SZ DJI Technology Co., Ltd.|Aerial vehicle|
CN110708458B|2018-07-10|2021-03-23|杭州海康微影传感科技有限公司|Image frame compensation method, camera and thermal imaging camera|
WO2021097848A1|2019-11-22|2021-05-27|深圳市大疆创新科技有限公司|Image processing method, image collection apparatus, movable platform and storage medium|
CN111504419B|2020-06-04|2021-05-11|浙江大学|Device and method for measuring liquid medicine amount of plant protection unmanned aerial vehicle|
Legal status:
2016-09-20| PLFP| Fee payment|Year of fee payment: 2 |
2017-03-17| PLSC| Publication of the preliminary search report|Effective date: 20170317 |
2017-07-07| TP| Transmission of property|Owner name: PARROT DRONES, FR Effective date: 20170606 |
2017-09-08| PLFP| Fee payment|Year of fee payment: 3 |
2018-09-19| PLFP| Fee payment|Year of fee payment: 4 |
2020-10-16| ST| Notification of lapse|Effective date: 20200910 |
Priority:
Application number | Filing date | Patent title
FR1558421A|2015-09-10|DRONE WITH FRONTAL CAMERA WITH SEGMENTATION OF IMAGE OF THE SKY FOR THE CONTROL OF AUTOEXPOSITION|
FR1558421A| FR3041135B1|2015-09-10|2015-09-10|DRONE WITH FRONTAL CAMERA WITH SEGMENTATION OF IMAGE OF THE SKY FOR THE CONTROL OF AUTOEXPOSITION|
US15/256,404| US10171746B2|2015-09-10|2016-09-02|Drone with a front-view camera with segmentation of the sky image for auto-exposure control|
JP2016175136A| JP2017085550A|2015-09-10|2016-09-08|Drone with forward looking camera using segment of empty image for automatic exposure control|
EP16188018.2A| EP3142354B1|2015-09-10|2016-09-09|Drone with forward-looking camera with segmentation of the image of the sky in order to control the autoexposure|
CN201610816428.2A| CN106534709A|2015-09-10|2016-09-09|Drone with a front-view camera with segmentation of the sky image for auto-exposure control|