Patent abstract:
The drone comprises a camera (14), an inertial unit (46) measuring the angles of the drone, and an extractor module (52) delivering image data of a movable capture zone of reduced size, dynamically displaced in a direction opposite to that of the angle variations measured by the inertial unit. Compensating means (52) receive as input the current attitude data of the drone and act dynamically on the current value (54) of a shooting parameter such as auto-exposure, white balance or autofocus, calculated according to the image data contained in the capture zone.
Publication number: FR3041134A1
Application number: FR1558420
Filing date: 2015-09-10
Publication date: 2017-03-17
Inventors: Axel Balley; Benoit Pochon
Applicant: Parrot SA
IPC main class:
Patent description:

The invention relates to the processing of digital images captured by a camera on board a mobile device, in particular a motorized flying machine such as a drone. The invention applies advantageously to the images collected by the front camera of a rotary-wing drone such as a quadricopter. The AR.Drone 2.0 and the Bebop Drone from Parrot SA, Paris, France, are typical examples of such quadricopters. They are equipped with a series of sensors (accelerometers, three-axis gyrometers, altimeters), a front camera capturing an image of the scene towards which the drone is directed, and a vertical-view camera capturing an image of the terrain overflown. They are provided with multiple rotors driven by respective motors that can be controlled in a differentiated manner in order to pilot the drone in attitude and speed. Various aspects of these drones are described in particular in WO 2010/061099 A2, EP 2 364 757 A1, EP 2 613 213 A1 and EP 2 613 214 A1 (Parrot SA).
The front video camera can be used for "immersive-mode" piloting, that is to say where the user uses the camera image in the same way as if he were on board the drone himself. It may also serve to capture sequences of images of a scene towards which the drone is heading, allowing the user to use the drone in the same way as a camera or camcorder that, instead of being held by hand, would be carried by the drone. The images collected can be recorded and broadcast, posted on video-hosting web sites, sent to other Internet users, shared on social networks, etc.
Since these images are intended to be recorded and communicated, it is desirable that they have as few defects as possible, in particular defects caused by the behavior of the drone: indeed, any linear movement of the drone forwards, backwards or sideways implies a tilting of the drone, and therefore a corresponding, undesirable effect of shifting, rotation, oscillation, etc. of the camera, which in practice induces various unwanted artifacts in the final image presented to the user.
These defects may be tolerable in an "immersive piloting" configuration. On the other hand, if the drone is to be used in the manner of a mobile video camera to capture sequences that will be recorded and played back later, these defects are extremely troublesome, so it is desirable to reduce them to a minimum.
In the case of the aforementioned Bebop Drone, the camera is equipped with a fisheye-type hemispherical-field lens covering a field of about 180°, of which only a part of the captured field is used, this part corresponding approximately to the angular sector captured by a conventional camera.
To do this, a particular window (hereinafter the "capture zone") is selected in the overall hemispherical image formed on the surface of the sensor. This window is mobile in rotation and in translation, and is permanently displaced according to the movements of the drone as determined by the inertial unit, and in the direction opposite to these movements. The image collected by the fisheye lens certainly undergoes the same oscillation and rotation movements as with a conventional camera, but the displacement of the image zone is servo-controlled so as to compensate for these movements and thus produce an image that is stabilized with respect to the movements of the drone. The image of the capture zone, or more exactly a useful part of it (hereinafter the "raw useful zone"), is then subjected to reprojection processing to compensate for the geometric distortions introduced by the fisheye lens: straightening of the straight lines bent by the lens, restoration of a uniform magnification between the center and the periphery of the image, etc. The final image obtained (the "rectified useful zone") is then delivered to the user for on-screen viewing, recording, etc. A "virtual camera" is thus defined by extracting from the total captured scene a particular zone (the capture zone) which is dynamically displaced, in rotation and in translation, in the initial image in the direction opposite to the movements of the drone, in order to cancel the oscillations that would otherwise be observed in the final image presented to the user, and by then applying an image-rectification process to deliver a representation of the scene free of geometric distortions and the like. This technique is described in application FR 14 56302 of July 2, 2014 for a "rotary-wing drone equipped with a video camera delivering stabilized image sequences", in the name of the Applicant.
The present invention aims at eliminating a particular defect which appears during certain maneuvers of the drone.
This defect concerns the dynamic control of a certain number of operating parameters of the camera, namely parameters which are adjusted automatically by image-analysis algorithms such as auto-exposure (AE, based on an analysis of the brightness of the various points of the image), automatic white balance (AWB, based on a colorimetric analysis of the various points of the image) or autofocus (AF, based on an analysis of the contrast of the various points of the image).
In the remainder of the description, the automatic exposure control will be taken as a particular case, but the invention is not limited to the control of this parameter and, as will be understood, can be applied to the automatic control of other settings based on image analysis, such as white balance and focus.
The principle of an auto-exposure (AE) algorithm is to choose for the sensor a couple {exposure time, gain} so as to capture any scene with the same target brightness. This choice is made from an analysis of a reduced-definition version of the image (for example 64 x 48 pixels), hereinafter the "thumbnail", obtained by sub-sampling or decimation, from which a brightness histogram is extracted, as well as possibly other parameters; these various starting data are hereinafter referred to by the general term "statistics" of the image.
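By way of illustration, here is a minimal sketch in Python of how such statistics might be computed and used; the function and parameter names are hypothetical (the patent does not specify any implementation), and the target brightness of 118 on an 8-bit scale is merely an assumed mid-grey value:

    import numpy as np

    def make_thumbnail(image, w=64, h=48):
        # Reduced-definition "thumbnail" obtained by plain decimation:
        # keep one pixel per block of the full-resolution grayscale image.
        H, W = image.shape
        ys = (np.arange(h) * H) // h
        xs = (np.arange(w) * W) // w
        return image[np.ix_(ys, xs)]

    def statistics(thumb):
        # Brightness histogram and mean: the "statistics" fed to the AE loop.
        hist, _ = np.histogram(thumb, bins=256, range=(0, 256))
        return hist, float(thumb.mean())

    def choose_exposure(mean_luma, exposure_us, gain, target=118.0):
        # Scale the {exposure time, gain} couple so that the scene reaches
        # the target brightness; gain takes over once the exposure time
        # hits the ceiling imposed by the 30 frames/s video rate.
        k = target / max(mean_luma, 1e-6)
        new_exposure = min(exposure_us * k, 33_000.0)
        new_gain = gain * (exposure_us * k) / new_exposure
        return new_exposure, new_gain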
In the case described above of a capture zone extracted from the overall image collected by the sensor, it is the content of the capture zone that produces the statistics used to calculate the auto-exposure control parameters.
However, as explained above, this capture zone is larger than the final useful zone that will be presented to the user, so that the auto-exposure algorithm can make decisions based on elements of the scene that the user does not see, that is to say elements located inside the capture zone but outside the useful zone.
Yet the scene that should be correctly exposed is the one seen by the user (the useful zone), and not the capture zone, which differs from it.
In the example of an image comprising a part of sky and a part of ground, the proportion between sky and ground will vary according to the inclination of the camera, and therefore according to the attitude of the drone. Indeed, if the drone passes from a hovering attitude to an inclined, nose-down attitude (a tilt linked to a linear displacement towards the front), then the camera, lowered towards the ground (because it is linked to the body of the drone), will capture a much larger proportion of ground. As the ground is darker, the servo-control of the auto-exposure algorithm will tend to compensate for this variation in brightness by an increase in exposure time and/or gain. However, because of the displacement of the capture zone in the initial image and of the reprojection operated to extract the useful zone, the user will always see the same scene. But this scene will be temporarily over-exposed owing to the corrective action of the auto-exposure, an over-exposure that will disappear when the drone returns to its initial attitude, and this without the framing of the image seen by the user having changed.
Such is the problem that the invention aims to solve.
To do this, the invention proposes a drone comprising, in a manner known per se: a camera linked to the body of the drone, comprising a fisheye-type hemispherical-field lens pointing in a fixed direction relative to the body of the drone, a digital sensor collecting the image formed by the lens and delivering raw image data, and means for dynamically controlling at least one shooting parameter among: auto-exposure, white balance and autofocus; an inertial unit capable of measuring the Euler angles characterizing the instantaneous attitude of the drone with respect to an absolute terrestrial reference frame and delivering as output current attitude data of the drone; extractor means capable of defining, in said image formed over the extent of the sensor, the position of a capture zone of reduced size; servo-control means, receiving as input the current attitude data of the drone and able to dynamically modify the position and the orientation of the capture zone in said image, in a direction opposite to that of the changes in the values of the angles measured by the inertial unit; reprojection means, receiving as input image data of a useful zone extracted from the capture zone and delivering as output corresponding rectified image data, compensated for the geometric distortions introduced by the fisheye lens; and analysis means capable of delivering a current value of said at least one shooting parameter as a function of the image data contained in a thumbnail derived from the capture zone.
In a manner characteristic of the invention, this drone further comprises: compensating means cooperating with the inertial unit and with the analysis means so as to receive as input the current attitude data of the drone and to dynamically modify the operation of the analysis means according to said current attitude data of the drone, so as to maintain for said at least one shooting parameter a value substantially independent of the instantaneous attitude variations of the drone.
Advantageously, the analysis means are furthermore able to exclude from said image data contained in the thumbnail derived from the capture zone the raw image data which are located outside the region of the image formed by the lens on the sensor.
According to a first embodiment, the compensating means receive as input the image data included in the thumbnail derived from the capture zone, delivered by the extractor means, and the analysis means comprise means capable of dynamically defining, at each image, a plurality of ROI regions of interest distributed in the capture zone. The compensating means then comprise means capable of modifying the size of the ROIs and/or the positioning of the ROIs in the capture zone.
Advantageously, the compensating means comprise means able to exclude from the definition of the ROIs those ROIs which are located outside the current useful zone included in the capture zone.
The compensating means may in particular comprise means capable of assigning to each ROI a weighting value that is a function of the greater or lesser extent of the overlap of the ROI with the current useful zone defined inside the capture zone, this value being maximum for the ROIs fully included in the current useful zone and lower for the overlapping ROIs extending both inside and outside the current useful zone.
The compensating means may also comprise means capable of assigning to each ROI a weighting value that is a function of the greater or lesser area of the ROI.
According to a second embodiment, the compensating means receive as input the image data included in the thumbnail derived from the capture zone, delivered by the extractor means, and the analysis means comprise means able to define a grid of ROI regions of interest distributed uniformly and in a predetermined manner in the capture zone. The compensating means then comprise means able to assign to each ROI a weighting value that is a function of the extent of overlap of the ROI with the current useful zone defined inside the capture zone, this value being maximum for the ROIs fully included in the current useful zone, minimum for the ROIs outside the current useful zone, and intermediate for the overlapping ROIs extending both inside and outside the current useful zone.
According to a third embodiment, the compensating means receive as input the rectified image data, compensated for the geometric distortions introduced by the fisheye lens, delivered by the reprojection means.
In this case, the analysis means may in particular comprise means capable of dynamically defining, at each image, a plurality of ROI regions of interest distributed in the rectified image, the compensating means then comprising means capable of modifying the size of the ROIs and/or the positioning of the ROIs in the rectified image.
An embodiment of the present invention will now be described with reference to the appended drawings in which the same references designate identical or functionally similar elements from one figure to another.
Figure 1 is an overall view showing the drone and the associated remote control device for piloting it remotely.
Figures 2a and 2b illustrate the changes in the camera's sighting direction caused by a forward tilting of the drone, for example during an acceleration phase.
FIG. 3 illustrates in (a1)-(a3) the successive steps of windowing and distortion correction applied to the image formed on the sensor of the drone's camera so as to produce a rectified image, and in (b1)-(b3) the corresponding steps when the attitude of the drone changes, with the camera lowered towards the ground.
FIG. 4 illustrates in (a) and (b) the brightness histograms respectively obtained by analysis of the capture zones of views (a2) and (b2) of FIG. 3.
Figure 5 illustrates in block diagram form the various modules involved in the implementation of the invention.
Figure 6 is a flowchart explaining the main successive steps of implementation of the invention, in a first and a second embodiment thereof.
FIG. 7 illustrates the analysis zones of the capture zone according to the first embodiment of the invention, based on regions of interest automatically defined inside this capture zone.
FIG. 8 illustrates the analysis zones of the capture zone according to the second embodiment of the invention, based on a grid applied to the capture zone.
Figure 9 is a flowchart explaining the main successive steps of implementation of the invention, in a third embodiment thereof.
Exemplary embodiments of the present invention will now be described.
In Figure 1, reference numeral 10 generally denotes a drone, which is for example a quadricopter such as the Bebop Drone model from Parrot SA, Paris, France. This drone comprises four coplanar rotors 12 whose motors are controlled independently by an integrated navigation and attitude control system. It is provided with a front-facing camera 14 for obtaining an image of the scene towards which the drone is oriented.
The drone also includes a vertical-view camera (not shown) pointing downwards, capable of capturing successive images of the terrain overflown and used in particular to evaluate the speed of the drone relative to the ground. Inertial sensors (accelerometers and gyrometers) make it possible to measure, with a certain accuracy, the angular velocities and the attitude angles of the drone, that is to say the Euler angles (pitch φ, roll θ and yaw ψ) describing the inclination of the drone with respect to a horizontal plane of a fixed terrestrial reference frame. An ultrasonic rangefinder disposed under the drone also provides a measurement of the altitude relative to the ground.
The drone 10 is piloted by a remote control device 16 provided with a touch screen 18 displaying the image transmitted by the front camera 14, with a number of symbols superimposed enabling the activation of piloting commands by simple contact of a user's finger 20 on the touch screen. The device 16 is provided with means for radio link with the drone, for example of the Wi-Fi (IEEE 802.11) local network type, for the bidirectional exchange of data: from the drone 10 to the device 16, in particular for the transmission of the image captured by the camera 14, and from the device 16 to the drone 10 for the sending of piloting commands. The remote control device 16 is also provided with inclination sensors making it possible to control the attitude of the drone by imparting to the device corresponding inclinations about roll and pitch axes, it being understood that the two longitudinal and transverse components of the horizontal speed of the drone 10 will be intimately linked to the inclination about the two respective pitch and roll axes. The piloting of the drone consists in making it evolve by: a) rotation about a pitch axis 22, to move it forwards or backwards; b) rotation about a roll axis 24, to shift it to the right or to the left; c) rotation about a yaw axis 26, to pivot the main axis of the drone to the right or to the left; and d) translation downwards or upwards by changing the throttle setting, so as respectively to reduce or increase the altitude of the drone.
When these piloting commands are applied by the user from the remote control device 16, the commands a) and b) for pivoting about the pitch axis 22 and the roll axis 24 are obtained by inclinations of the device 16 respectively about its longitudinal axis 28 and its transverse axis 30: for example, to make the drone move forwards it suffices to tilt the remote control device 16 forwards by inclining it about the axis 28; to shift it to the right, it suffices to tilt the remote control device 16 by inclining it about the axis 30 to the right, and so on. The commands c) and d), for their part, result from actions applied by contact of the user's finger 20 with corresponding specific zones of the touch screen 18.
The drone also has an automatic and autonomous hovering stabilization system, activated in particular as soon as the user removes his finger from the touch screen of the device, or automatically at the end of the take-off phase, or in case of interruption of the radio link between the device and the drone.
Figure 2a schematically illustrates, in profile, the attitude of the drone when it is stationary, in a hovering state.
The field covered by a front camera 14 of conventional type has been schematized at 36, for example a camera covering a field of 54° and whose sighting axis δ is centered on the horizon.
If, as shown in Figure 2b, the drone moves forwards with a non-zero horizontal speed, then by design the axis 26 of the drone will be inclined forwards by an angle φ (pitch angle) relative to the vertical V.
This forward inclination, schematized by the arrow 38, implies an inclination of the same value, schematized by the arrow 40, of the axis δ of the camera with respect to the plane of the horizon HZ. It will thus be understood that, with the maneuvers of the drone, its accelerations, decelerations, etc., the axis δ oscillates permanently about the direction of the horizon HZ, which will be reflected in the image by permanent up-and-down oscillation movements.
Similarly, if the drone shifts to the right or to the left, this movement will be accompanied by a pivoting about the roll axis 24, which will be reflected in the image by rotations of the scene captured by the camera, in one direction or the other.
In order to overcome these drawbacks, it has been proposed, as explained in the above-mentioned application FR 14 56302, to equip the camera with a fisheye-type hemispherical-field lens covering a field of about 180°, schematized at 42 in Figure 2a. The image captured by the camera equipped with this fisheye lens will certainly undergo the same oscillation and rotation movements as with a conventional camera, but only a part of the field captured by this camera will be used, by selecting a particular window corresponding to the angular sector 36 captured by a conventional camera, a window which will be dynamically displaced in the hemispherical image in the direction opposite to the movements of the drone as determined by the inertial unit, in order to cancel the oscillations that would otherwise be observed in the image.
Thus, in the case illustrated in Figure 2b, where the drone pitches downwards by a pitch angle φ (arrow 38) with respect to the vertical V, the capture window will be moved upwards (arrow 44) by an angle of the same value, thereby bringing back towards the direction of the horizon HZ the central axis of the sector 36 of a "virtual camera" covering the field corresponding to the image zone.
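As an illustrative sketch (Python, hypothetical names, and assuming an equidistant r = f·θ fisheye mapping and sign conventions that the patent does not specify), the displacement of the capture window that cancels a given attitude can be written as:

    import numpy as np

    def capture_window_pose(pitch_rad, yaw_rad, roll_rad, focal_px=1000.0):
        # Pose of the capture zone cancelling the drone attitude, for a
        # forward-pointing camera: pitch shifts the window vertically,
        # yaw shifts it horizontally, roll rotates it in the image plane,
        # each with the sign opposite to the drone's own movement.
        dy = -focal_px * pitch_rad   # nose-down pitch -> window moves up
        dx = -focal_px * yaw_rad     # yaw to the right -> window moves left
        rot = -roll_rad              # roll right -> window counter-rotated
        return dx, dy, rot

    # e.g. a 15 degree nose-down pitch moves the window by ~262 px:
    dx, dy, rot = capture_window_pose(np.deg2rad(15.0), 0.0, 0.0)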
The problem of the invention will now be explained with reference to FIG. 3.
Figure 3 shows in (a1) an example of a scene, as collected by the sensor of the video camera equipped with the fisheye lens.
As can be seen, the image I of this scene exhibits very strong geometric distortions, inherent to the hemispherical or quasi-hemispherical coverage of the fisheye lens, projected onto the flat surface of the sensor.
Only a portion of this image I produced by the fisheye lens will be used. This portion is determined according to i) the direction in which the "virtual camera" is pointed, ii) its field of view (schematized at 36 in Figures 2a and 2b) and iii) its width/height ratio. A "capture zone" ZC containing raw pixel data is thus defined, including the "raw useful zone" ZUb which corresponds to the field of the "virtual camera" after compensation for the geometric distortions introduced by the fisheye lens.
Note that it is not necessary to capture all the pixels of the image I formed on the sensor, but only a fraction of them, corresponding to the capture zone ZC, for example a window ZC of approximately 2 Mpixel extracted from an image I in HD quality (1920 x 1080 pixels) produced by a sensor whose resolution will typically be 14 Mpixel (4608 x 3288 pixels). Only the pixel data of the capture zone ZC that are actually needed are therefore transferred, and these can then be refreshed at a rate of 30 frames/second without any particular difficulty. It is thus possible to choose a high-resolution sensor while maintaining a high image rate.
The views (a2) and (a3) of Figure 3 illustrate the processing performed on the pixel data of the capture zone ZC to arrive at the final image compensated for the geometric distortions: from the pixel data transferred from the capture zone ZC (view (a2)), an algorithm extracts the pixel data of the raw useful zone ZUb, applies to it a mesh of triangles (a technique known per se), then straightens the image by stretching each triangle, to give a rectified useful image ZUr (view (a3)) with rectified pixel data. The strongly curved horizontal lines of the fisheye image are thus corrected to make them straight and produce an image corresponding to natural vision, free of geometric distortions.
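The patent performs this straightening by stretching a mesh of triangles; as a simpler but functionally equivalent illustration, the sketch below (Python, hypothetical names, equidistant r = f·θ fisheye model assumed) inverse-maps each pixel of the rectified image back into the capture zone:

    import numpy as np

    def rectify_fisheye(zc, f_fish, f_out, out_w, out_h, cx, cy):
        # For each pixel (u, v) of the rectified useful image ZUr, find the
        # corresponding fisheye position in the capture zone ZC and resample
        # (nearest neighbour).  zc is a 2-D grayscale array; (cx, cy) is the
        # optical center of the fisheye image within ZC.
        u, v = np.meshgrid(np.arange(out_w) - out_w / 2,
                           np.arange(out_h) - out_h / 2)
        theta = np.arctan2(np.hypot(u, v), f_out)  # ray angle off the axis
        phi = np.arctan2(v, u)                     # azimuth in image plane
        r = f_fish * theta                         # equidistant projection
        xs = np.clip((cx + r * np.cos(phi)).astype(int), 0, zc.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(phi)).astype(int), 0, zc.shape[0] - 1)
        return zc[ys, xs]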
The views (b1)-(b3) of Figure 3 are homologous to the views (a1)-(a3), in a configuration where the attitude of the drone has changed owing to a tilting of the latter forwards (a rotation about its pitch axis), an inclination consecutive, for example, to a transition from a stationary, hovering configuration to a configuration of forward progression (the linear speed of the drone being all the greater as its tilt is strong).
As illustrated in (b1), to compensate for this downward inclination of the drone, the capture zone ZC is displaced towards the top of the image, thus in a direction opposite to that of the inclination of the drone. While the relative position of the raw useful zone ZUb remains substantially the same within the capture zone ZC (so as to allow the tracking of the target scene), the capture zone now includes a much larger share of ground S than of sky C: if the views (a2) and (b2) are compared, it can be seen that in the initial configuration (view (a2)) the sky/ground ratio is about 50/50%, while in the modified configuration (view (b2)) the sky/ground ratio is about 25/75%. Moreover, if it is shifted strongly upwards, the capture zone may include regions X that are located outside the region of the circular image formed by the fisheye lens on the sensor.
On the other hand, the final image ZUr of the rectified useful zone (view (b3)) will be substantially identical to what it was (view (a3)) before the drone tilted forwards.
FIG. 4 illustrates the histograms obtained by analyzing the brightness of the pixels of the thumbnail derived from the capture zone ZC, respectively in (a) in the case of view (a2) and in (b) in the case of view (b2). As can be seen in this figure, the tilting of the drone forwards results in a significant modification of the brightness histogram, with a shift to the left of the mean value M due to the increase in the ground/sky ratio in the image of the zone ZC. The auto-exposure algorithm will interpret this change in the mean value M as a darkening of the image, which will be automatically compensated by an increase in the exposure time and/or the sensitivity of the camera.
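In terms of exposure value, the correction a conventional AE loop would apply is the base-2 logarithm of the ratio between target and measured mean brightness; a small worked example follows (Python, with purely illustrative histogram means):

    import math

    def ev_correction(mean_luma, target=118.0):
        # Correction, in EV stops, driving the exposure time and/or gain.
        return math.log2(target / max(mean_luma, 1e-6))

    # hovering thumbnail mean ~118 -> 0 EV; pitched-forward mean ~80 ->
    # about +0.56 EV of extra exposure, i.e. a temporarily brighter final
    # image even though the user still sees the same framing.
    print(ev_correction(118.0), ev_correction(80.0))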
Therefore, the final images (a3) and (b3) respectively obtained (images of the rectified useful zone ZUr), although they present the user with the same framing of the scene, differ by their exposure setting, the image of view (b3) being brighter than that of view (a3).
The object of the present invention is to correct this defect.
Figure 5 illustrates in block diagram form the various modules involved in the implementation of the invention.
The front camera 14 of the drone delivers a raw image signal corresponding to the image I. This camera, being mechanically linked to the body of the drone, is subject to angular displacements which are measured by an inertial measurement unit (IMU) 46 linked to the body of the drone and hence to the camera. The rotations of the camera are given by the pitch angle φ, the roll angle θ and the yaw angle ψ describing the inclination of the drone in three dimensions with respect to a fixed terrestrial reference frame (Euler angles). These data are applied to an angle-prediction module 48 driving a module 50 for calculating the position of the capture zone ZC in the image I. A video-processing module 52 receives as input the raw image signal I and carries out various operations of windowing as a function of the position of the capture zone ZC calculated by the module 50, of image stabilization, and of extraction and straightening of the useful zone, so as to deliver as output a useful image signal ZUr for transmission to the user, possible display and recording.
The module 52 also provides the control (shown schematically by the feedback loop 54) of the operating parameters of the camera, namely AE auto-exposure control, AWB white balance and AF autofocus. The module 52 also corrects, according to the invention, the defect described above relating to the automatic calculation of these operating parameters of the camera, as will now be described.
Figure 6 is a flowchart explaining the main successive steps of implementation of the invention, in a first and a second embodiment thereof.
This flowchart 100 comprises an initial step (block 102) of collection of the raw image I by the camera, followed by a step (block 104) of extraction from this raw image of the capture zone ZC as a function of the attitude data of the drone delivered by the IMU inertial unit. The following step (block 106), characteristic of the invention, consists of an analysis of the image data of the capture zone, as will be explained in detail below with reference to Figures 7 and 8, for the delivery of control parameters of the camera, including the auto-exposure parameter.
The content of the capture zone ZC is then subjected to a processing (block 108) of extraction of the raw useful zone ZUb and of reprojection of this raw useful zone ZUb to give the rectified useful zone ZUr, corresponding to the final corrected image delivered to the user.
Referring to Figures 7 and 8, two possible embodiments of the data analysis performed by block 106 will now be described in greater detail.
It should be noted that this analysis is performed on the basis of the thumbnail derived from the image originally contained in the capture zone ZC (downstream of block 104), before the reprojection step (block 108), and thus on a distorted version of the image.
FIG. 7 illustrates the analysis zones of the capture zone according to the first embodiment of the invention, the analysis being carried out on the basis of ROI regions of interest defined automatically inside the thumbnail derived from the capture zone.
In this embodiment, the image-analysis device defines (according to techniques known per se, which will not be described in more detail) a plurality of ROI regions of interest, which are geometric selections of zones of reduced size in the image to be analyzed, a brightness histogram being established for each of these zones. The auto-exposure algorithm analyzes and compares the histograms corresponding to the various ROIs and adjusts the exposure level accordingly, according to analysis techniques also known per se.
Characteristically of the invention, the ROIs are distributed in the thumbnail derived from the capture zone so as to lie wholly or partly within the raw useful zone ZUb; that is to say, if the algorithm for defining the ROIs generates ROIs outside the raw useful zone ZUb, these will be excluded from the subsequent analysis for the control of the auto-exposure. In any case, the pixel data located outside the region of the image formed on the sensor by the lens (regions X of views (b1) and (b2) of Figure 3) are excluded from the analysis.
In other words, the pixels outside the image circle are ignored for the calculation of the auto-exposure.
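A minimal sketch of this exclusion (Python, hypothetical names; the image circle is assumed centred at (cx, cy) with radius r on the sensor):

    import numpy as np

    def inside_image_circle(xs, ys, cx, cy, radius):
        # Boolean mask: True for pixels lying inside the circular image
        # formed by the fisheye lens; the X regions of Figure 3 are False
        # and therefore never contribute to the exposure statistics.
        xs, ys = np.asarray(xs), np.asarray(ys)
        return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2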
Moreover, each of the regions of interest ROI1 ... ROIn is assigned a weighting value taking into account the greater or lesser extent of the overlap of the ROI concerned with the raw useful zone ZUb defined inside the capture zone: the weighting will be maximum for the ROIs fully included in the zone ZUb, zero for the ROIs located entirely outside the zone ZUb (which excludes them from the analysis), and intermediate for the ROIs partially included in the zone ZUb (the weighting being higher the greater the fraction of the ROI's area located inside the zone ZUb).
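A sketch of this weighting (Python, hypothetical names; for simplicity the raw useful zone ZUb is approximated here by an axis-aligned rectangle, whereas in the distorted thumbnail it is actually a curved quadrilateral):

    def overlap_weight(roi, zub):
        # Weight of an ROI = fraction of its area overlapping ZUb, with
        # rectangles given as (x0, y0, x1, y1): 1.0 if fully inside,
        # 0.0 if fully outside, intermediate for straddling ROIs.
        ix0, iy0 = max(roi[0], zub[0]), max(roi[1], zub[1])
        ix1, iy1 = min(roi[2], zub[2]), min(roi[3], zub[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = (roi[2] - roi[0]) * (roi[3] - roi[1])
        return inter / area if area else 0.0

    def weighted_mean_luma(rois, luma_means, zub):
        # Exposure statistic pooled over the ROIs, each weighted by its
        # overlap with ZUb, so that scenery the user never sees no longer
        # drives the auto-exposure loop.
        pairs = [(overlap_weight(r, zub), m) for r, m in zip(rois, luma_means)]
        total = sum(w for w, _ in pairs)
        return sum(w * m for w, m in pairs) / total if total else 0.0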
FIG. 8 illustrates a second embodiment, in which the ROIs are no longer defined dynamically and with a variable size, but in the form of a regular grid applied to the thumbnail derived from the capture zone ZC: a grid GR of which each elementary square or rectangle ROI(i, j) is assigned a variable weighting according to the degree of its overlap with the zone ZUb, in the same manner as in the first embodiment described in relation to Figure 7.
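Under the same assumptions, the grid variant only changes how the ROIs are produced; each cell is then weighted with overlap_weight() exactly as in the sketch above:

    def grid_rois(zc_w, zc_h, nx=8, ny=6):
        # Fixed grid GR of nx x ny elementary rectangles ROI(i, j) tiling
        # the capture zone ZC; an 8 x 6 grid is an arbitrary example, the
        # patent does not specify the grid dimensions.
        cw, ch = zc_w // nx, zc_h // ny
        return [(i * cw, j * ch, (i + 1) * cw, (j + 1) * ch)
                for j in range(ny) for i in range(nx)]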
Figure 9 is a flowchart explaining the main successive steps of implementation of the invention, in a third embodiment thereof.
In this third embodiment, the analysis of the image data is not performed on the thumbnail derived from the initial, distorted version of the image (capture zone ZC and raw useful zone ZUb) with a weighting applied to each region of interest, but on the thumbnail derived from the rectified image, after the reprojection step.
In the flowchart 200, the blocks 202 (image collection), 204 (extraction of the capture zone ZC) and 206 (extraction and reprojection of the useful zone) are homologous to the respective blocks 102, 104 and 108 of Figure 6.
On the other hand, the data-analysis step for controlling the auto-exposure parameters of the camera (block 208) is performed downstream of block 206, that is to say on the rectified version of the image. The auto-exposure then operates conventionally (automatic definition of the ROIs, etc.), without any need to apply to each ROI a weighting value reflecting the position of this ROI with respect to the raw useful zone ZUb.
Claims (10)
[1" id="c-fr-0001]
1. A drone (10) comprising: a camera (14) linked to the body of the drone, comprising: a fisheye-type hemispherical-field lens pointing in a fixed direction relative to the body of the drone; a digital sensor collecting the image (I) formed by the lens and delivering raw image data; and means for dynamically controlling at least one shooting parameter among: auto-exposure, white balance and autofocus; an inertial unit (46) capable of measuring the Euler angles (φ, θ, ψ) characterizing the instantaneous attitude of the drone with respect to an absolute terrestrial reference frame and delivering as output current attitude data of the drone; extractor means (52) able to define, in said image (I) formed over the extent of the sensor, the position of a capture zone (ZC) of reduced size; servo-control means (48, 50, 52), receiving as input the current attitude data of the drone and able to dynamically modify the position and the orientation of the capture zone (ZC) in said image (I), in a direction opposite to that of the changes in the values of the angles measured by the inertial unit; reprojection means (52), receiving as input image data of a useful zone (ZUb) extracted from the capture zone (ZC) and delivering as output corresponding rectified image data (ZUr), compensated for the geometric distortions introduced by the fisheye lens; and analysis means capable of delivering (54) a current value of said at least one shooting parameter as a function of the image data contained in a thumbnail derived from the capture zone (ZC), characterized in that it further comprises: compensating means (52) cooperating with the inertial unit and with the analysis means so as to receive as input the current attitude data of the drone and to dynamically modify the operation of the analysis means according to said current attitude data of the drone, so as to maintain for said at least one shooting parameter a value substantially independent of the instantaneous attitude variations of the drone.
[2" id="c-fr-0002]
2. The drone of claim 1, wherein the analysis means are further able to exclude from said image data contained in the thumbnail derived from the capture zone (ZC) the raw image data which are located outside (X) the region of the image formed by the lens on the sensor.
[3" id="c-fr-0003]
3. The drone of claim 1, wherein the compensating means receive as input (104) the image data included in the thumbnail derived from the capture zone (ZC), delivered by the extractor means.
[4" id="c-fr-0004]
4. The drone of claim 3, wherein: the analysis means comprise means capable of dynamically defining, at each image, a plurality of ROI regions of interest (ROI1 ... ROI7) distributed in the capture zone (ZC); and the compensating means comprise means capable of modifying the size of the ROIs and/or the positioning of the ROIs in the capture zone.
[5" id="c-fr-0005]
5. The drone of claim 4, wherein the compensating means comprise means capable of excluding from the definition of the ROIs those ROIs which are located outside the current useful zone (ZUb) included in the capture zone (ZC).
[6" id="c-fr-0006]
6. The drone of claim 5, wherein the compensating means comprise means adapted to assign (106) to each ROI a weighting value that is a function of the greater or lesser extent of the overlap of the ROI with the current useful zone (ZUb) defined inside the capture zone, this value being maximum for the ROIs fully included in the current useful zone and lower for the overlapping ROIs extending both inside and outside the current useful zone.
[7" id="c-fr-0007]
7. The drone of claim 5, wherein the compensating means comprise means adapted to assign to each ROI a weighting value that is a function of the greater or lesser area of the ROI.
[8" id="c-fr-0008]
8. The drone of claim 3, wherein: the analysis means comprise means capable of defining a grid (GR) of ROI regions of interest (ROI(i, j)) distributed uniformly and in a predetermined manner in the capture zone (ZC); and the compensating means comprise means capable of assigning (106) to each ROI a weighting value that is a function of the extent of overlap of the ROI with the current useful zone (ZUb) defined inside the capture zone (ZC), this value being maximum for the ROIs fully included in the current useful zone, minimum for the ROIs outside the current useful zone, and intermediate for the overlapping ROIs extending both inside and outside the current useful zone.
[9" id="c-fr-0009]
9. The drone of claim 1, wherein the compensating means receive as input (206) the rectified image data (ZUr), compensated for the geometric distortions introduced by the fisheye lens, delivered by the reprojection means.
[10" id="c-fr-0010]
10. The drone of claim 9, wherein: the analysis means comprise means adapted to dynamically define, at each image, a plurality of ROI regions of interest distributed in the rectified image (ZUr); and the compensating means comprise means capable of modifying the size of the ROIs and/or the positioning of the ROIs in the rectified image (ZUr).
Similar patents:
Publication No. | Publication date | Title
EP3142353B1|2019-12-18|Drone with forward-looking camera in which the control parameters, especially autoexposure, are made independent of the attitude
EP3142354B1|2018-03-14|Drone with forward-looking camera with segmentation of the image of the sky in order to control the autoexposure
EP3048789B1|2016-12-28|Drone provided with a video camera and means to compensate for the artefacts produced at the greatest roll angles
EP2933775B1|2016-12-28|Rotary-wing drone provided with a video camera supplying stabilised image sequences
EP3025770B1|2017-01-25|Video system for piloting a drone in immersive mode
EP3078402B1|2017-10-04|System for piloting an fpv drone
EP3086195B1|2019-02-20|System for piloting a drone in first-person view mode
EP3171589B1|2018-06-06|Drone equipped with a video camera sending sequences of images corrected for the wobble effect
EP3007429B1|2016-12-28|Mobile apparatus, in particular a rotary-wing drone, provided with a video camera supplying image sequences with dynamic correction of the wobble effect
EP3076258B1|2018-08-01|Drone provided with a video camera with compensated vertical focussing of instantaneous rotations for estimating horizontal speeds
JP2017085551A|2017-05-18|Method of determining duration of exposure of camera on board drone, and associated drone
EP3316068B1|2019-03-06|Autonomous system for collecting moving images by a drone with target tracking and upkeep of the target collection angle
EP3273318B1|2021-07-14|Autonomous system for collecting moving images by a drone with target tracking and improved target positioning
FR3054713A1|2018-02-02|DRONE HAVING AN OBSTACLE REMOVAL SYSTEM
FR3038482A1|2017-01-06|CAMERA BLOCK CAPABLE OF INBOARDING A DRONE FOR MAPPING A FIELD AND METHOD OF MANAGING IMAGE CAPTURE BY A CAMERA BLOCK
EP3273317A1|2018-01-24|Autonomous system for taking moving images, comprising a drone and a ground station, and associated method
EP3392728A1|2018-10-24|Method for piloting a rotary wing drone, related computer program, electronic apparatus and drone
FR2707129A1|1995-01-06|Large-field, high-resolution shooting device for aerial vehicle.
FR3020168A1|2015-10-23|ROTATING WING DRONE WITH VIDEO CAMERA DELIVERING STABILIZED IMAGE SEQUENCES
FR3079943A1|2019-10-11|ELECTRONIC DEVICE AND METHOD FOR CONTROLLING A DRONE WITH TRAVELING COMPENSATION EFFECT, ASSOCIATED COMPUTER PROGRAM
FR3082012A1|2019-12-06|ELECTRONIC DEVICE, AND METHOD, FOR CONTROLLING A DRONE, COMPUTER PROGRAM
Patent family:
Publication No. | Publication date
FR3041134B1|2017-09-29|
JP2017103751A|2017-06-08|
US20170236291A1|2017-08-17|
EP3142353A1|2017-03-15|
EP3142353B1|2019-12-18|
CN106534627A|2017-03-22|
Cited references:
Publication No. | Filing date | Publication date | Applicant | Title
JP2003267295A|2002-03-14|2003-09-25|Foundation For Nara Institute Of Science & Technology|Remote operation system|
US20120307042A1|2011-06-02|2012-12-06|Hon Hai Precision Industry Co., Ltd.|System and method for controlling unmanned aerial vehicle|
US20130021520A1|2011-07-20|2013-01-24|Samsung Electronics Co. Ltd.|Apparatus and method for automatically adjusting focus in image capturing device|
US20150222816A1|2012-09-11|2015-08-06|Makoto Shohara|Imaging controller and imaging control method and program|
US20140176722A1|2012-12-25|2014-06-26|Casio Computer Co., Ltd.|Imaging device, imaging control method and storage medium|
JP4931055B2|2006-11-22|2012-05-16|ソニー株式会社|Image processing apparatus and image processing method|
FR2938774A1|2008-11-27|2010-05-28|Parrot|DEVICE FOR CONTROLLING A DRONE|
JP2011015244A|2009-07-03|2011-01-20|Sanyo Electric Co Ltd|Video camera|
FR2957266B1|2010-03-11|2012-04-20|Parrot|METHOD AND APPARATUS FOR REMOTE CONTROL OF A DRONE, IN PARTICULAR A ROTATING SAIL DRONE.|
FR2985329B1|2012-01-04|2015-01-30|Parrot|METHOD FOR INTUITIVE CONTROL OF A DRONE USING A REMOTE CONTROL APPARATUS|
FR2985581B1|2012-01-05|2014-11-28|Parrot|METHOD FOR CONTROLLING A ROTARY SAILING DRONE FOR OPERATING A SHOOTING VIEW BY AN ON-BOARD CAMERA WITH MINIMIZATION OF DISTURBING MOVEMENTS|
FR3000813B1|2013-01-04|2016-04-15|Parrot|ROTARY SAILING DRONE COMPRISING MEANS FOR AUTONOMOUS POSITION DETERMINATION IN AN ABSOLUTE FLOOR - RELATED MARK.|
FR3020169A1|2014-04-16|2015-10-23|Parrot|ROTATING WING DRONE WITH VIDEO CAMERA DELIVERING STABILIZED IMAGE SEQUENCES|
FR3038482B1|2015-06-30|2017-08-11|Parrot|CAMERA BLOCK CAPABLE OF INBOARDING A DRONE FOR MAPPING A FIELD AND METHOD OF MANAGING IMAGE CAPTURE BY A CAMERA BLOCK|
JP6942940B2|2016-03-14|2021-09-29|カシオ計算機株式会社|Image processing equipment, image processing methods and programs|
US10538326B1|2016-08-31|2020-01-21|Amazon Technologies, Inc.|Flare detection and avoidance in stereo vision systems|
US20180210442A1|2017-01-23|2018-07-26|Qualcomm Incorporated|Systems and methods for controlling a vehicle using a mobile device|
JP6606530B2|2017-06-20|2019-11-13|キヤノン株式会社|Image processing apparatus, control method therefor, imaging apparatus, and monitoring system|
JP2019041185A|2017-08-23|2019-03-14|キヤノン株式会社|Control device, control system, control method, program, and storage medium|
JP6656214B2|2017-09-04|2020-03-04|キヤノン株式会社|Flying object, moving device, control method, program, and storage medium|
JP2019057891A|2017-09-22|2019-04-11|キヤノン株式会社|Information processing apparatus, imaging apparatus, information processing method, and program|
CN109496424B|2017-09-30|2021-06-18|深圳市大疆创新科技有限公司|Light supplement control method, light supplement control module and unmanned aerial vehicle|
US20200379487A1|2017-10-25|2020-12-03|Rakuten, Inc.|Unmanned aerial vehicle control system, unmanned aerial vehicle control method, and program|
JP6459012B1|2017-10-30|2019-01-30|エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd|Control device, imaging device, flying object, control method, and program|
CN107977993A|2017-11-17|2018-05-01|杨俊刚|A kind of refocusing method and system based on re-projection for light field array camera|
JP6688277B2|2017-12-27|2020-04-28|本田技研工業株式会社|Program, learning processing method, learning model, data structure, learning device, and object recognition device|
RU183717U1|2018-06-26|2018-10-01|Российская Федерация, от имени которой выступает ФОНД ПЕРСПЕКТИВНЫХ ИССЛЕДОВАНИЙ|Multirotor unmanned aerial vehicle|
WO2020191665A1|2019-03-27|2020-10-01|深圳市大疆创新科技有限公司|Rotary photographing method, control device, movable platform and storage medium|
CN111345031B|2019-04-25|2021-10-15|深圳市大疆创新科技有限公司|Unmanned aerial vehicle and shooting control method thereof|
KR20210101999A|2020-02-11|2021-08-19|삼성전자주식회사|An electronic device for supporting image quality adjustment and a method thereof|
Legal status:
2016-09-20| PLFP| Fee payment|Year of fee payment: 2 |
2017-03-17| PLSC| Publication of the preliminary search report|Effective date: 20170317 |
2017-07-07| TP| Transmission of property|Owner name: PARROT DRONES, FR Effective date: 20170606 |
2017-09-08| PLFP| Fee payment|Year of fee payment: 3 |
2018-09-19| PLFP| Fee payment|Year of fee payment: 4 |
2019-09-06| PLFP| Fee payment|Year of fee payment: 5 |
2021-06-11| ST| Notification of lapse|Effective date: 20210506 |
Priority applications:
Application No. | Filing date | Title
FR1558420A|FR3041134B1|2015-09-10|2015-09-10|DRONE WITH FRONTAL VIEW CAMERA WHOSE PARAMETERS OF CONTROL, IN PARTICULAR SELF-EXPOSURE, ARE MADE INDEPENDENT OF THE ATTITUDE.|FR1558420A| FR3041134B1|2015-09-10|2015-09-10|DRONE WITH FRONTAL VIEW CAMERA WHOSE PARAMETERS OF CONTROL, IN PARTICULAR SELF-EXPOSURE, ARE MADE INDEPENDENT OF THE ATTITUDE.|
US15/256,423| US20170236291A1|2015-09-10|2016-09-02|Drone including a front-view camera with attitude-independent control parameters, in particular auto-exposure control|
JP2016175147A| JP2017103751A|2015-09-10|2016-09-08|Drone with forward-looking camera with control parameters independent of attitude, especially, autoexposure control|
CN201610816480.8A| CN106534627A|2015-09-10|2016-09-09|Drone with forward-looking camera in which the control parameters, especially autoexposure, are made independent of the attitude|
EP16188012.5A| EP3142353B1|2015-09-10|2016-09-09|Drone with forward-looking camera in which the control parameters, especially autoexposure, are made independent of the attitude|