METHOD FOR DETERMINING A STATE OF OBSTRUCTION OF AT LEAST ONE CAMERA ON BOARD A STEREOSCOPIC SYSTEM
Patent abstract:
The invention aims to improve the performance of determining an obstruction state of a stereoscopic system with two or more cameras, with a low error rate and with computing and memory resources small enough to be easily embedded. To do this, the invention proposes a hybrid approach between local and semi-global methods. According to one embodiment, each stereoscopic image being made from left and right images (11, 12) produced simultaneously, each left (11) and right (12) image is divided into corresponding sectors (10). An obstruction rate is determined from a disparity map (20) divided by sector (10), built from the left (11) and right (12) images, in which each pixel is assigned the disparity corresponding to the best matching score. The density of each sector (10) of the disparity map (20) is determined by reference to the rate of pixels whose estimated disparity is valid. A state of obstruction of at least one camera is then determined from a weighted average of the obstruction probabilities of the sectors of the disparity map (20), obtained by comparison between the density of the sectors and a predefined density rate.
Publication number: FR3019359A1
Application number: FR1452775
Filing date: 2014-03-31
Publication date: 2015-10-02
Inventors: Laure Bajard; Alain Giralt; Sophie Rony; Gregory Baratoff
Applicants: Continental Automotive GmbH; Continental Automotive France SAS
IPC main class:
Patent description:
[0001] The present invention relates to a method for determining a state of obstruction, also called blocking, of at least one camera of a stereoscopic system on board a vehicle, in particular a motor vehicle, intended to assist the driver of the vehicle in his driving. [0002] In the field of vehicle safety and driver assistance, on-board video systems are used to detect obstacles - objects or people - or events outside the vehicle. From two on-board cameras, the video system, managed by a digital processing system, makes it possible to determine the distance between the vehicle and these obstacles. Various functionalities can then be provided, for example: obstacle detection, hazard detection, detection and recognition of road signs, detection of white-line crossings, or the detection of oncoming vehicles. This last detection can be associated with the management of the vehicle lights. The recognition of these obstacles or events is, moreover, brought to the attention of the driver through the intervention of driver assistance systems. The reliability of the cameras is then critical and can become decisive, for example when it comes to knowing in real time whether the road really is clear of obstacles or whether one of the cameras is at least partially obstructed. Detecting camera obstruction is therefore as important as determining good visibility. It should be noted that a common cause of obstruction is the condensation of water on the optics of the camera. In general, an obstruction detection leads to signaling the presence of such condensation to the driver and can trigger de-icing means. The determination of an obstruction rate of an on-board camera is discussed in US patent application 2013/0070966. In this document, the image is divided into sectors and the probability of obstruction per sector is analyzed from a measurement of the number of objects detected by their contour in each sector. 
This is an image sector analysis method. The detection of camera obstruction according to this method offers limited performance: partial obstruction of the camera is detected in only 75% of cases, the average distance travelled to achieve this detection being 200 meters. In addition, at startup, an average distance of 30 meters is required to decide on the obstruction of the camera. Using the same sector-based approach, US Pat. No. 8,116,523 proposes to generate image data by edge detection ("edge map extraction") and to detect characteristic points from these data. The characteristic points are classified according to three detection scanning zones arranged respectively close to, at medium distance from and remote from the vehicle: one zone dedicated to the lanes, another to the branch lines and one to dead ends or obstacles. In this way, the number of image processing steps is reduced compared to the detection of fixed patterns, which requires a scan of the entire image to verify the correspondence of the image to the models. Other methods have been developed for dual-camera stereoscopic systems to provide additional depth information on the objects and obstacles of the scene as seen by the driver. The depth of a pixel of an element of this scene is inversely proportional to the offset, also called "disparity", of the paired pixels of the left and right images corresponding to the initial pixel of the scene and detected respectively by the left and right cameras. A disparity map is made up of all the disparities between the pixels thus matched. [0003] The generation of successive disparity maps over time makes it possible to increase the performance of driving assistance applications using the depth information of the scene. The use of disparity maps is for example illustrated by the patent documents US 2010/0013908, EP 2 381 416 and FR 2 958 774. [0004] The problem is to correctly match the pixels of the left and right images. 
Conventionally, the generation of a disparity map takes place in two steps: the determination of different degrees of matching, also called "matching scores", for each pair of pixels, and the extraction of a disparity estimate for each pixel. [0005] The first step takes into account, for each pixel of an analyzed pair, the pixels of its environment. The scores reflect the degree of similarity of the pixels of the analyzed pair. The second step makes it possible to assign to each pixel of one of the two left or right images - called the reference image - the most probable disparity estimated from the matching scores of this pixel. The set of pixels of the reference image to which the retained disparities have been assigned constitutes the disparity map of the stereoscopic image. In general, three types of method have been developed to produce a disparity map, according to the method of determining the scores and the mode of expression of the disparities: local, global and semi-global methods. [0006] Local methods rely on matching scores obtained, for each pair of pixels to be matched, from the pixels that immediately surround the two candidate pixels. Various correlation functions can be used (sum of squared deviations, sum of absolute deviations, centered normalized cross-correlation, etc.) to then determine the disparities of the paired pixels. For each pair of pixels analyzed, the disparity corresponding to the best score is selected. These local methods are the simplest and therefore mobilize fewer resources. They generate high-density disparity maps, that is to say with a high rate of pixels whose estimated disparity is valid, validity being based on a coherence criterion between the disparities of neighboring pixels. However, these local methods have a high error rate, especially in occlusion zones and in areas with little texture - for example a new road surface. 
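To make the local approach above concrete, here is a minimal block-matching sketch in Python/NumPy using the sum of absolute deviations (SAD) as the correlation function. It is an illustrative reconstruction, not code from the patent; the window size, disparity range and function name are arbitrary assumptions.

```python
import numpy as np

def disparity_map_sad(left, right, max_disp=8, window=3):
    """Local block matching: for each pixel of the reference (left)
    image, every candidate disparity is scored by the sum of absolute
    deviations (SAD) over a small window, and the disparity with the
    best (lowest) score is retained."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L = left.astype(np.float32)
    R = right.astype(np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            best_score, best_d = np.inf, 0
            # candidate disparities, limited by the image border
            for d in range(min(max_disp, x - half) + 1):
                cand = R[y - half:y + half + 1,
                         x - d - half:x - d + half + 1]
                score = np.abs(patch - cand).sum()  # SAD score
                if score < best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the left image is the right image shifted by two pixels, interior pixels recover a disparity of 2; real pairs would additionally need the coherence check between neighboring disparities mentioned above to flag invalid pixels.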
[0007] Global methods consist of optimizing an energy function defined over the entire reference image. The energy function defines the constraints that the disparity map must respect, for example the continuity of the disparity across objects. Subsequently, the set of disparities that minimizes this energy function is sought. The graph-cut method and belief propagation are the most studied global methods. These methods give dense disparity maps with few errors. However, they are complex to implement and require very significant computation and storage resources, which are difficult to reconcile with the hardware constraints of on-board systems. Semi-global methods are based on the same principle as global methods but applied to sub-parts of the image, i.e. lines or blocks. Dividing the problem of optimizing the energy function into sub-problems makes it possible to reduce the computation and memory requirements with respect to the global methods, but in return causes the appearance of artifacts on the disparity map, with a non-negligible error rate and a medium to poor disparity map density (as evidenced by the presence of artifacts). The main objective of the invention is to improve the performance of the determination of a state of obstruction of a stereoscopic system with two cameras, with a low error rate while extracting high-density disparity maps, with computing and memory resources of reasonable size so as to be easily embedded. To do this, the invention proposes a hybrid approach between local and semi-global methods, using disparity maps built from a division of the images into sectors in a direct semi-global analysis, without using an energy function. 
[0008] For this purpose, the present invention relates to a method for determining a state of obstruction of at least one camera of a multi-camera system on board a vehicle, comprising the following steps: - acquisition of successive stereoscopic images of a field of view, each stereoscopic image of the multi-camera system being made from left and right images produced simultaneously and stored digitally in the form of pixels, - calculation of a disparity map from the successive multi-camera images, and - calculation of an obstruction rate. In this method, the obstruction rate is a weighted average determined by the following successive steps: - division of the disparity map into sectors, - determination of the density of each sector by the rate of pixels whose estimated disparity is valid, - determination of a stereo obstruction probability per sector of the disparity map by comparison between the density of this sector and a predefined obstruction rate, and - determination of a weighted average of the obstruction probabilities of the sectors as a function of a weighting of the position of these sectors in the disparity map. Compared with other methods using one or more cameras, this method then offers a greater speed of decision concerning a possible obstruction - even in the absence of objects partially masking the field of view - and a higher obstruction detection rate. Thus, to define an obstruction rate in the context of the invention, a single textured surface in the field of view - for example a road - may be sufficient, and the presence of objects is not necessary. The performance obtained by the invention is related to the speed of the calculations resulting from the methodology followed. For example, the absence of obstruction of the cameras can be established after the time required for the vehicle to travel only 12 meters, instead of 30 meters with the previous methods. 
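The four claimed steps can be sketched as follows, assuming a boolean map marking the pixels whose estimated disparity is valid. The 3 × 5 sector grid matches FIG. 1, while the probability model and the default threshold are illustrative assumptions of this sketch (the description cites 35% as an example rate).

```python
import numpy as np

def obstruction_rate(disparity_valid, n_rows=3, n_cols=5,
                     density_threshold=0.35, weights=None):
    """Sketch of the claimed pipeline: split the disparity map into
    sectors, measure each sector's density (rate of valid-disparity
    pixels), turn low density into an obstruction probability, then
    take a position-weighted average of the sector probabilities."""
    h, w = disparity_valid.shape
    sectors = disparity_valid.reshape(n_rows, h // n_rows,
                                      n_cols, w // n_cols)
    density = sectors.mean(axis=(1, 3))  # valid-pixel rate per sector
    # dense enough -> unobstructed; otherwise probability grows as
    # density falls below the threshold (illustrative model)
    prob = np.where(density > density_threshold, 0.0,
                    1.0 - density / density_threshold)
    if weights is None:
        weights = np.ones((n_rows, n_cols))
    return float((prob * weights).sum() / weights.sum())
```

A uniform weight grid is used by default; the position-dependent coefficients of FIG. 3 would be passed through `weights`.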
In a preferred embodiment: - a disparity noise digital filtering can be performed by sector on the disparity map before the determination of the density of the sectors, - the digital filtering of the disparity noise can be carried out by the application of mathematical morphology tools to the disparity map. [0009] According to other particularly advantageous embodiments: - each sector being divided into sub-parts called macroblocks, which may be of the same size and regularly distributed in each sector, the digital filtering of the disparity noise is performed by sector by measuring the density of each macroblock of this sector as its rate of pixels whose disparity is estimated to be valid by comparison with a threshold, the proportion of valid macroblocks determining the density of the sector, - the stereo obstruction probability per sector is determined by comparison between the rate of macroblocks estimated valid in this sector and a predefined obstruction rate, - the number of sectors can be chosen between substantially 10 and 50, - the number of macroblocks per sector can be chosen between substantially 10 and 100. [0010] Advantageously, a combined mono-stereo analysis test is performed in addition to the calculation of the obstruction rate in order to determine the presence of an object in front of a camera. 
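The macroblock variant of the density determination can be sketched as follows, using the example figures of the description (6 × 11 macroblocks per sector, 80% valid-pixel rate per macroblock); the function name and array layout are assumptions of this sketch.

```python
import numpy as np

def sector_density_macroblocks(sector_valid, mb_rows=6, mb_cols=11,
                               pixel_rate=0.8):
    """Macroblock variant: a macroblock is 'valid' when its rate of
    valid-disparity pixels exceeds `pixel_rate` (80% in the patent's
    example); the sector density is the proportion of valid
    macroblocks, which filters isolated disparity noise."""
    h, w = sector_valid.shape
    blocks = sector_valid.reshape(mb_rows, h // mb_rows,
                                  mb_cols, w // mb_cols)
    block_density = blocks.mean(axis=(1, 3))   # valid-pixel rate per block
    return float((block_density > pixel_rate).mean())
```

Requiring a high valid-pixel rate inside each small block before it counts toward the sector density is what suppresses scattered disparity noise without a separate morphological filtering pass.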
This test comprises the following steps, for each analyzed image: - selection of one of the two stereo images to be analyzed, called the mono image, - division of this mono image into sectors of the same size as those of the disparity map to form a mono map, - calculation of a mono obstruction probability per sector of the mono image, reported on the mono map, - comparison of the stereo and mono obstruction probabilities, then fusion between the filtered disparity map, provided with stereo obstruction probabilities per sector, and the mono map, provided with mono obstruction probabilities per sector, by assigning to each sector of the filtered disparity map the lowest obstruction probability, corresponding to the highest density, in order to produce a so-called merged filtered disparity map, and - if the merged obstruction probability is overall less than the stereo obstruction probability, the camera corresponding to the analyzed image is considered to be obstructed, - in the opposite case, a moving object is considered to have masked the field of view of the analyzed camera and a process is triggered. [0011] Preferably, the calculation of the mono obstruction probability per sector is performed by an edge detection analysis and a detection of characteristic points from the detection data. [0012] Other data, characteristics and advantages of the present invention will appear on reading the detailed non-limiting description below, with reference to the appended figures, which represent, respectively: FIG. 1, an example of images acquired by the two cameras of a stereoscopic system, together with the disparity map formed from these images according to an embodiment of the method of the invention; FIG. 2, an example of a disparity map whose sectors are divided into macroblocks according to a particular embodiment of the invention; FIG. 3, an example of weighting for calculating the obstruction rate of a camera of the stereoscopic system; FIGS. 4a and 4b, a 
flow chart of a method of determining the obstruction rate of a camera according to the invention using a disparity map; and FIG. 5, a combined mono-stereo approach for detecting a temporary masking of the field of view of a camera from a mono-type edge detection and a comparison of mono and stereo obstruction rates. FIG. 1 illustrates an example of left and right images 11, 12 received by the cameras of a stereoscopic visualization system on board a motor vehicle. The disparity map 20 developed by a digital processing system (not shown) from these images 11 and 12 is also shown. The images 11, 12 are divided into 5 × 3 regular sectors 10 and the disparity map 20 is accordingly divided into 5 × 3 sectors. Alternatively, it is possible to sectorize the disparity map directly. The right image 12 is complete, while it appears that the left image 11 does not reproduce at least the upper area of interest A1 seen in the right image 12. The field of view of the left camera corresponding to the image 11 is thus partially obstructed over this upper zone A1, which covers the five upper sectors. The disparity map 20 is determined by reporting the disparities, sector 10 by sector 10, from the pixels of one of the reference images, here the left image 11. Each disparity represents, in the example, the offset calculated by the quadratic difference between the pixels of images 11 and 12 that have the best matching score. As a result of the partial blockage of the zone A1, the disparity map 20 has artifacts 21. These defects result in low density levels per sector, as determined from the disparities in these areas - and thus in an obstruction rate - as detailed below. With reference to the example of implementation of a particular variant in FIG. 2, each sector 10 of the disparity map 20 is advantageously divided into 6 × 11 macroblocks 30 of the same size (a single macroblock is shown so as not to impair the clarity of the figure). 
This additional degree of division is used for a fine density analysis, presented with reference to Figures 4a and 4b. In this disparity map 20, thus divided into sectors 10 and macroblocks 30, the disparity densities are measured at each macroblock 30. The density is then calculated per sector 10 from the disparity densities of the macroblocks 30 of this sector 10, a dense sector being defined by a high proportion of macroblocks of density greater than a given threshold. Overall, the obstruction rate of a camera is calculated by weighting the densities of the sectors according to their position in the image produced by the camera. Figure 3 presents an example of weighting for the calculation of the overall obstruction rate of a camera of the stereoscopic system. Each sector 10 is provided with weighting coefficients 40, and the overall obstruction rate of the camera is calculated by averaging the density levels of the sectors 10 weighted with these coefficients 40. The highest weighting coefficients 40 correspond to sectors 10 previously deemed to belong to areas of major interest. Such a weighting applies to the densities determined per sector 10 of the disparity map 42 shown in FIG. 5. An example of determining a state of obstruction of a camera from a weighted average of the density levels per sector of the disparity map is illustrated by the flow chart in Figure 4a, a logic diagram completed in Figure 4b. The set of steps 102 to 108 of Figure 4a is shown in Figure 4b under the name "B" to simplify the reading of this Figure 4b. In FIG. 4a, the first step 100, called "Start", is used for the initial settings, in particular the calibration of one camera with respect to the other, by association of pairs of small similar zones of each of the images. Step 101 relates to taking successive left and right stereoscopic images of the field of view, at a determined frequency, such as images 11 and 12 of FIG. 1. The images are stored in a digital processing system. 
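As an illustration of the weighting of FIG. 3, the sketch below uses a hypothetical 3 × 5 coefficient grid (the patent gives no numeric values) that favors the upper and central sectors deemed to be of major interest:

```python
import numpy as np

# Hypothetical weighting grid, one coefficient per sector of the
# 3 x 5 layout of FIG. 1; values are illustrative assumptions.
weights = np.array([[2, 3, 4, 3, 2],
                    [1, 2, 3, 2, 1],
                    [1, 1, 1, 1, 1]], dtype=float)

def weighted_obstruction(prob_per_sector, weights):
    """Weighted average of the per-sector obstruction probabilities,
    as used for the overall obstruction rate of a camera."""
    return float((prob_per_sector * weights).sum() / weights.sum())
```

With this grid, an obstruction confined to the five upper sectors counts for half of the total weight (14 of 28), reflecting their greater safety relevance.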
[0013] These images make it possible to build the disparity map (step 102), by attributing to each pixel a matching score representative of the degree of similarity between two pixels of the left and right stereoscopic images, which are then paired. This map is said to be unfiltered because it may contain disparity noise due to an obstruction, a masking of a camera by an object or a de-calibration of a camera. The following steps will remove any disparity noise. In step 103, the disparity map is divided into sectors (as described with reference to Figure 2). [0014] In this example, the 15 sectors of the disparity map are divided into macroblocks, 66 macroblocks per sector in this example (step 104). Advantageously, a digital filtering of the disparity noise makes it possible, by a density analysis in each of the macroblocks of each sector (step 105), to define the density of this sector more finely. The density analysis step 105 is performed as follows. For each macroblock, the density is measured by its rate of pixels whose disparity is estimated to be valid by comparison with a threshold. In this example, macroblocks whose density is greater than a rate of 80% are considered valid. Alternatively, digital disparity noise filtering can be performed directly by sector with mathematical morphology tools known to those skilled in the art. Optionally, or independently in a parallel sequence, it is then possible, following the density analysis step 105, to detect in a de-calibration test 106 whether an observed noise is due to a de-calibration or to an obstruction-type disturbance, because the disparity map is affected differently in these two cases. This test relies on the initial calibration of the embedded multi-camera system. If the de-calibration test concludes that there is a de-calibration, a re-calibration is advantageously performed (step 107). Then, in step 108, the density per sector is determined by the proportion of macroblocks estimated valid within each sector. 
After determining the density of all the sectors, the disparity map is said to be filtered. The obstruction probability calculation is then conducted per sector. It consists of a step 109 of comparing the rate of macroblocks estimated valid in this sector to a predefined obstruction rate. In the example, if the rate of macroblocks estimated valid within the sector is greater than 35%, this sector is considered unobstructed. Alternatively, it is also possible, in a simplified version of the exemplary embodiment, to dispense with the macroblocks in the analysis steps B (steps numbered 104 to 108). To do this, the filtered sector densities, instead of the valid macroblock densities, are used directly in the probability calculation step 109, by comparing the density of each sector (defined in the density calculation step 108 by the rate of pixels per sector whose disparity is estimated to be valid) to the predefined density rate, here 35%. In the remainder of the description, a disparity analysis based on the density of the macroblocks constituting each of the sectors is considered. Once the density of the macroblocks per sector is defined at step 108, the obstruction probability per sector provided by the calculation (step 109) is called the stereo obstruction probability (to be distinguished from the mono obstruction probability described below) because it is established on the sectors of the disparity map developed from the images of the stereoscopic system. FIG. 4b shows the continuation of the method according to the invention, which advantageously makes it possible to check whether the noise found on the disparity map is due to a real obstruction or to an object that has obscured the field of view of one of the two cameras. This check consists in establishing the obstruction probabilities per sector 10c of a so-called "merged" disparity map 42, illustrated with reference to FIG. 5 (described later), from the filtered "stereo" disparity map 
and a so-called "mono" map 32, formed from the image provided by the analyzed camera. This mono map corresponds in fact to the image selected (step 112) from the two stereoscopic images 11 and 12 (FIG. 1) taken at step 101: in order to determine the obstruction or masking state of each camera, the analysis is carried out, more precisely, by selecting (step 112) one or the other of the images 11 or 12 - the left image 11 in the illustrated example - so as to analyze the corresponding camera. Prior to the comparing and merging step (step 115), the selected image is sectorized (step 113) in the same manner as the disparity map to form the mono map 32, in 15 sectors in this example. Then an obstruction probability analysis by a mono method (step 114) - an edge detection analysis in the example - is performed on the mono map 32, and the mono probabilities are calculated per sector 10a (FIG. 5). Then the comparison of step 115 is performed, sector by sector, between the stereo obstruction probabilities per sector (step 109) with which the stereo disparity map is provided, and the mono obstruction probabilities per sector of the mono map. This comparison leads, still in step 115, to providing each sector of the stereo disparity map with the lowest obstruction probability (or, in other words, the highest density), mono or stereo, of the corresponding sector. The probabilities retained are said to be "merged", and the disparity map thus obtained in step 115, with the obstruction probabilities thus "merged", is also called "merged". This step 115 of comparison and fusion of the stereo and mono obstruction probabilities makes it possible to detect a possible masking of a camera by an object, as explained below. 
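One possible reading of the mono analysis of step 114 is sketched below: a crude horizontal-gradient edge detector, with each sector's edge rate converted into a mono obstruction probability. The gradient threshold and minimum edge rate are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mono_obstruction_prob(gray, n_rows=3, n_cols=5, edge_thresh=30.0,
                          min_edge_rate=0.02):
    """Edge-detection sketch of the per-sector mono probability
    (step 114): a sector containing few gradient edges is likely
    obstructed, one rich in edges is likely clear."""
    # horizontal gradient magnitude, padded back to the image width
    gx = np.abs(np.diff(gray.astype(np.float32), axis=1))
    gx = np.pad(gx, ((0, 0), (0, 1)))
    edges = gx > edge_thresh
    h, w = edges.shape
    sectors = edges.reshape(n_rows, h // n_rows, n_cols, w // n_cols)
    edge_rate = sectors.mean(axis=(1, 3))      # edge-pixel rate per sector
    # few edges -> probability rises toward 1 (illustrative model)
    return np.clip(1.0 - edge_rate / min_edge_rate, 0.0, 1.0)
```

A production system would rather use the contour and characteristic-point detection cited from the prior art; this sketch only shows how a per-sector mono probability comparable to the stereo one can be derived.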
[0015] If, in a masking test 116, the overall merged obstruction probability of the disparity map - obtained in step 115 - is greater than or equal to the overall obstruction probability of the stereo map, the camera corresponding to the image analyzed in the mono obstruction probability analysis step (step 114) is considered to be obstructed but not masked. The overall obstruction probabilities are calculated by averaging or summing the values per sector. In the opposite case, an obstacle or object is considered to have hidden the field of view of the analyzed camera, for example an object close to this camera. The analysis of the results of this masking test 116 then leads to appropriate measures being taken accordingly (step 117) (temporary deactivation of the camera or intervention on the camera) and/or the driver being informed. If the analyzed camera is not considered masked, then step 119 calculates, with the weighting of the sectors, the overall obstruction probability based on the weighted average of the obstruction probabilities of the sectors. The weighting is previously elaborated in step 118 as a function of the position of the sectors in the image, as detailed above (with reference to FIG. 3). The state of obstruction to be returned to the driver of the vehicle is one of the following: no obstruction, condensation, partial obstruction, total obstruction. The obstruction may be due to condensation, the presence of a sticker, snow, ice, salt or the like on the windshield. In the example, the system is considered totally obstructed with an overall probability of at least 90%, and between 70% and 90% the system is considered "degraded" by condensation or partial obstruction. FIG. 5 illustrates the combined mono-stereo approach used to detect the presence of a possible masking of the field of view of a camera, as presented in FIG. 4b at step 115. 
This detection is carried out by comparing the stereo and mono obstruction probabilities per sector, using the values reported on the filtered disparity map 22 for the stereo obstruction probability per sector, and on a mono map 32 corresponding to the analyzed image - the image 11 in the example - for the mono obstruction probability per sector (steps referenced 109 and 114 respectively in Figure 4b). For each pair of corresponding sectors 10a, 10b of these two maps 32 and 22, the mono and stereo densities are compared (arrows Fa, Fb), as described with reference to the logic diagram of Figure 4b. The comparison (arrows Fa, Fb) between the mono and stereo approaches defines, per sector 10c, a modified disparity map 42 with a merged obstruction probability per sector. For each sector 10c of this map, the obstruction probability of the camera corresponds to the minimum of the mono and stereo probabilities, that is to say to the maximum of the mono and stereo densities. If, overall, the density of the disparity map 42 is less than the overall stereo density, the camera is considered obstructed. In the opposite case, it appears that an obstacle or moving object has probably masked the field of view of the analyzed camera. The invention is not limited to the embodiments described and shown. Thus, the invention can be applied to systems of more than two cameras by using the method for each set of cameras of the system (pair, triplet, quadruplet, etc.). It is also possible to implement the method according to the invention without dividing the sectors into macroblocks, by replacing the steps that discriminate as a function of the densities of the macroblocks with steps directly dependent on the densities of the sectors, after filtering by mathematical morphology or another equivalent filtering. In addition, the sectors mapped from one camera image to the other have identical dimensions, but the sectors of a same image can be of different dimensions. 
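Following the description of FIGS. 4b and 5, the fusion and masking test can be sketched as below. The decision rule - masking suspected when the mono analysis lowers the overall obstruction probability, i.e. raises the overall density - is one reading of the test 116 described above, and the function name is an assumption.

```python
import numpy as np

def fuse_mono_stereo(p_stereo, p_mono):
    """Fusion of step 115 and masking test 116 (sketch): each sector
    of the merged map keeps the LOWEST of the two obstruction
    probabilities, i.e. the highest density. If the mono analysis does
    not lower the overall probability, the low-density sectors are
    attributed to a real obstruction; if it does, a moving object is
    suspected of having masked the analyzed camera."""
    p_merged = np.minimum(p_stereo, p_mono)
    masking_suspected = p_merged.mean() < p_stereo.mean()
    return p_merged, masking_suspected
```

Since the merged probability can never exceed the stereo one, the test reduces to asking whether the mono image found texture (edges) in sectors where the stereo matching failed.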
For example, depending on their safety-critical nature, the sectors of the upper rows, like the upper area of interest A1 (with reference to FIG. 1), may be of smaller size and given a greater or lesser weighting, in order to multiply their number and thus increase the accuracy of the analysis.
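Finally, the mapping from the overall probability to the state returned to the driver can be sketched with the example thresholds of the description (at least 90% for total obstruction, 70% to 90% for a "degraded" state); collapsing the four listed states into three labels is a simplification of this sketch.

```python
def obstruction_state(p):
    """Map the overall obstruction probability to a driver-facing
    state, using the example thresholds from the description."""
    if p >= 0.90:
        return "obstructed"   # total obstruction
    if p >= 0.70:
        return "degraded"     # condensation or partial obstruction
    return "clear"            # no obstruction
```

In a real system the "degraded" state could additionally trigger the de-icing means mentioned at the start of the description.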
Claims:
Claims (9) [0001] 1. A method for determining a state of obstruction of at least one camera of a multi-camera system on board a vehicle, comprising the following steps: - acquisition of successive stereoscopic images of a field of view, each stereoscopic image of the multi-camera system being produced from left and right images (11, 12) produced simultaneously and stored digitally in the form of pixels, - calculation of a disparity map from the successive multi-camera images, and - calculation of an obstruction rate, characterized in that the obstruction rate is a weighted average determined by the following successive steps: - division of the disparity map (20) into sectors (10), - determination of the density of each sector (10) by the rate of pixels whose estimated disparity is valid (108), - determination of a stereo obstruction probability per sector (109) of the disparity map (20) by comparison between the density of this sector (10) and a predefined obstruction rate, and - determination (119) of a weighted average of the obstruction probabilities of the sectors (10) according to a weighting (118) of the position of these sectors (10) in the disparity map (20). [0002] 2. A method of determining an obstruction state according to claim 1, wherein a disparity noise digital filtering (105) is performed per sector (10) on the disparity map (20) prior to the determination (108) of the density of the sectors. [0003] 3. A method of determining an obstruction state according to the preceding claim, wherein the disparity noise digital filtering (105) is performed by the application of mathematical morphology tools to the disparity map (20). [0004] 4. A method of determining an obstruction state according to claim 2, wherein each sector (10) is divided into sub-parts called macroblocks (30), of the same size and regularly distributed in each sector (10) 
, the disparity noise digital filtering (105) being performed by sector (10) by measuring the density of each macroblock (30) of this sector (10) as its rate of pixels whose disparity is estimated to be valid by comparison with a threshold, the proportion of valid macroblocks (30) determining (108) the density of the sector (10). [0005] 5. A method of determining an obstruction state according to the preceding claim, wherein the stereo obstruction probability per sector (10) is determined by comparison (step 109) between the proportion of macroblocks (30) estimated valid in this sector (10) and a predefined obstruction rate. [0006] 6. A method of determining an obstruction state according to any one of the preceding claims, wherein the number of sectors (10) is chosen between substantially 10 and 50. [0007] 7. A method of determining an obstruction state according to any one of claims 4 or 5, wherein the number of macroblocks (30) per sector (10) is chosen between substantially 10 and 100. [0008] 8. A method of determining an obstruction state according to any one of claims 4 to 7, wherein a complementary combined mono-stereo analysis test comprises the following steps to determine the presence of an object in front of a camera: - selecting (112) one of the two stereo images (11; 12) to be analyzed, called a mono image; - dividing (113) this mono image (11; 12) into sectors (10a) of the same dimension as those (10b) of the disparity map (22) to form a mono map (32); - calculating (114) a mono obstruction probability per sector (10a) of the mono image (11; 12), reported on the mono map (32); - comparing the stereo and mono obstruction probabilities, then merging (115) the disparity map (22), provided with stereo obstruction probabilities (109) per sector (10b), and the mono map (32), provided with mono obstruction probabilities (114) per sector (10a), by assigning to each sector (10b) of the disparity map (22) the lowest obstruction probability, corresponding to the highest density, for 
developing a so-called merged disparity map (42); and - if the merged obstruction probability is substantially less than the stereo obstruction probability (test 116), the camera corresponding to the analyzed image (11; 12) is considered to be obstructed and a process is triggered (117); - if not, an object is considered to have masked the field of view of the analyzed camera. [0009] 9. A method of determining an obstruction state according to the preceding claim, wherein the calculation (114) of the mono obstruction probability per sector (10a) is performed by an edge detection analysis and a detection of characteristic points from the detection data.
Family patents:
Publication number | Publication date
FR3019359B1 | 2017-10-06
US9269142B2 | 2016-02-23
CN104951747B | 2019-02-22
CN104951747A | 2015-09-30
US20150279018A1 | 2015-10-01
DE102015104125A1 | 2015-10-01
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
FR2884316A1 | 2005-04-11 | 2006-10-13 | Denso Corp | Rain detector
US20090180682A1 | 2008-01-11 | 2009-07-16 | Theodore Armand Camus | System and method for measuring image quality
EP2293588A1 | 2009-08-31 | 2011-03-09 | Robert Bosch GmbH | Method for using a stereovision camera arrangement
US20110311102A1 | 2010-06-17 | 2011-12-22 | Mcdaniel Michael S | Machine control system utilizing stereo disparity density
JP4937030B2 | 2007-07-24 | 2012-05-23 | Renesas Electronics Corporation | Image processing apparatus for vehicle
TWI332453B | 2008-07-21 | 2010-11-01 | Univ Nat Defense | The asynchronous photography automobile-detecting apparatus and method thereof
JP5216010B2 | 2009-01-20 | 2013-06-19 | Honda Motor Co., Ltd. | Method and apparatus for identifying raindrops on a windshield
DE102010002310A1 | 2010-02-24 | 2011-08-25 | Audi AG | Method and device for checking the clear view of a camera for an automotive environment
FR2958774A1 | 2010-04-08 | 2011-10-14 | Arcure SA | Method for detecting object e.g. obstacle, around lorry from stereoscopic camera, involves classifying object from one image, and positioning object in space around vehicle by projection of object in focal plane of stereoscopic camera

Citing documents:
US10981505B2 | 2017-02-28 | 2021-04-20 | Gentex Corporation | Auto switching of display mirror assembly
DE102017220282A1 | 2017-11-14 | 2019-05-16 | Robert Bosch GmbH | Test method for a camera system, a control unit of the camera system, the camera system and a vehicle with this camera system
CN109490926B | 2018-09-28 | 2021-01-26 | Zhejiang University | Path planning method based on binocular camera and GNSS
US20210090281A1 | 2019-03-09 | 2021-03-25 | Corephotonics Ltd. | System and method for dynamic stereoscopic calibration
CN111027398A | 2019-11-14 | 2020-04-17 | Shenzhen Youwei Information Technology Development Co., Ltd. | Automobile data recorder video occlusion detection method
US11120280B2 | 2019-11-15 | 2021-09-14 | Argo AI, LLC | Geometry-aware instance segmentation in stereo image capture processes
Legal status:
2016-03-21 | PLFP | Fee payment | Year of fee payment: 3
2017-03-22 | PLFP | Fee payment | Year of fee payment: 4
2018-03-23 | PLFP | Fee payment | Year of fee payment: 5
2020-03-19 | PLFP | Fee payment | Year of fee payment: 7
2021-03-23 | PLFP | Fee payment | Year of fee payment: 8
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
FR1452775A | FR3019359B1 | 2014-03-31 | 2014-03-31 | Method for determining a state of obstruction of at least one camera embarked in a stereoscopic system
DE102015104125.1A | DE102015104125A1 | 2014-03-31 | 2015-03-19 | A method for determining a state of blindness of at least one camera installed in a stereoscopic system
CN201510144352.9A | CN104951747B | 2014-03-31 | 2015-03-30 | Method for determining the obstruction state of at least one video camera mounted in a stereoscopic system
US14/673,068 | US9269142B2 | 2014-03-31 | 2015-03-30 | Method for determining a state of obstruction of at least one camera installed in a stereoscopic system