Patent Abstract:
SOLID OBJECT DETECTION DEVICE AND SOLID OBJECT DETECTION METHOD. A solid object detection device (1) for detecting solid objects on the periphery of a vehicle (V), the solid object detection device (1) comprising: a camera (10) for capturing an image including detection regions (A1, A2) configured in adjacent traffic lanes behind the vehicle (V); a solid object evaluation unit (33) for evaluating whether or not a solid object is present in the images of the detection regions (A1, A2) captured by the camera (10); a lateral position detection unit (34) for detecting a distance (Δy) between the position of the vehicle in the traffic lane traveled by the vehicle (V) and a dividing line dividing the traffic lanes; a region setting unit (33b) for causing the size of the detection region (A1 or A2) positioned on the side where the dividing line is present to be increased by a correspondingly greater amount with respect to an increase in the distance (Δy) to the dividing line detected by the lateral position detection unit (34); and a traffic lane change detection device (35) for detecting a traffic lane change made by the vehicle. (...)
Publication number: BR112014001824B1
Application number: R112014001824-3
Filing date: 2012-07-27
Publication date: 2021-04-20
Inventors: Daisuke Oiki;Yasuhisa Hayakawa;Chikao Tsuchiya;Osamu Fukata;Yukinori Nishida
Applicant: Nissan Motor Co., Ltd;
Primary IPC class:
Patent description:

Field of Invention
[001] The present invention relates to a solid object detection device and a solid object detection method.
Background of the Invention
[002] In the past, vehicle periphery observation devices have been proposed that use radar to assess whether or not there is a solid object in a detection region behind the vehicle, and notify the driver. In such a vehicle periphery observation device, the detection region includes a location that is a blind spot of at least one side mirror, and when the angle of the side mirror changes, the position of the detection region is changed accordingly (see Patent Document 1).
Patent Documents
[003] [Patent Document 1] Japanese Laid-Open Patent Publication No. 2000-149197.
Summary of the Invention
Problems to Be Solved by the Invention
[004] However, in the device described in Patent Document 1, there is a possibility, depending on the position of the vehicle in the traffic lane, that a solid object, such as another vehicle in an adjacent traffic lane, cannot be detected. To give a more detailed description: in the device described in Patent Document 1, the detection region is fixed as long as the angle of the side mirror does not change. In such a state, in cases where the vehicle is close to the left side of the traffic lane and another vehicle or the like in the right adjacent traffic lane is close to the right side of that adjacent lane, for example, the other vehicle does not enter the detection region and the solid object can no longer be detected.
[005] The present invention was developed in order to solve such problems in the prior art, an objective of the invention being to provide a solid object detection device and a solid object detection method in which the accuracy of detecting solid objects can be improved.
Means Used to Solve the Problems
[006] The solid object detection device of the present invention captures an image including a dividing line and a predetermined region of an adjacent traffic lane, and evaluates whether or not there is a solid object in the predetermined region. From the captured image, the solid object detection device detects a distance along the vehicle width between the vehicle's position in the traffic lane traveled by the vehicle and the dividing line, and the greater that distance along the vehicle width, the further outward in the vehicle width direction the device increases the size of the predetermined region positioned on the side where the dividing line is located.
Effect of the Invention
[007] According to the present invention, the predetermined region positioned on the side where the dividing line is located is increased further outward in the vehicle width direction correspondingly with respect to an increase in the distance across the vehicle width between the vehicle position and the dividing line. Thus, it is possible to avoid situations in which, because the vehicle is separated from the dividing line, for example, the predetermined region is not properly configured for the adjacent vehicle, and a solid object, such as another vehicle, is outside the predetermined region and fails to be detected. The accuracy of detecting solid objects can therefore be improved.
Brief Description of Drawings
[008] FIG. 1 is a schematic diagram of the solid object detection device in accordance with the present embodiment, showing an example of a case in which the solid object detection device is installed in a vehicle.
[009] FIG. 2 is a top view showing the travel state of the vehicle shown in FIG. 1.
[010] FIG. 3 is a block diagram showing details of the computer shown in FIG. 1.
[011] FIG. 4 is a drawing to describe a process of the positional alignment unit shown in FIG. 3, where (a) shows the state of motion of vehicle V and (b) shows a description of the positional alignment.
[012] FIG. 5 is a schematic diagram showing the manner in which differential waveforms are generated by the differential waveform generating unit shown in FIG. 3.
[013] FIG. 6 is a top view showing the travel state of the vehicle shown in FIG. 1, and showing an example of a case in which the vehicle is traveling off-center in the travel lane.
[014] FIG. 7 is a top view showing the travel state of the vehicle shown in FIG. 1, showing an example of a case in which the region setting unit has increased the detection region.
[015] FIG. 8 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region.
[016] FIG. 9 is a flowchart showing the solid object detection method according to the present embodiment.
[017] FIG. 10 is a block diagram showing the details of the computer according to the second embodiment.
[018] FIG. 11 is a top view showing the travel state of the vehicle when the width of the traffic lane is small, and showing an example of a case in which the region setting unit has increased the detection region.
[019] FIG. 12 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region in the second embodiment.
[020] FIG. 13 is a flowchart showing the solid object detection method according to the second embodiment, showing the first half of the process.
[021] FIG. 14 is a flowchart showing the solid object detection method according to the second embodiment, showing the second half of the process.
[022] FIG. 15 is a block diagram showing the details of the computer according to the third embodiment.
[023] FIG. 16 is a top view showing the vehicle's travel status during a traffic lane change.
[024] FIG. 17 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region in the third embodiment.
[025] FIG. 18 is a flowchart showing the solid object detection method according to the third embodiment, showing the first half of the process.
[026] FIG. 19 is a flowchart showing the solid object detection method according to the third embodiment, showing the second half of the process.
[027] FIG. 20 is a block diagram showing the details of computer 30 according to the fourth embodiment.
[028] FIG. 21 is a schematic diagram showing the specifics of the process performed by the ground line detection unit 37.
[029] FIG. 22 is a graph showing the rates of increase in the areas of the plurality of differential waveforms DWt1 to DWt4 shown in FIG. 21(b).
[030] FIG. 23 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region in the fourth embodiment.
[031] FIG. 24 is a block diagram showing the details of computer 30 according to the fifth embodiment.
[032] FIG. 25 is a top view showing the vehicle's travel state when it is turning.
[033] FIG. 26 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region in the fifth embodiment.
[034] FIG. 27 is a top view showing the travel state of the vehicle in the sixth embodiment.
[035] FIG. 28 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region in the sixth embodiment.
[036] FIG. 29 is a block diagram showing the details of computer 30 according to the seventh embodiment.
[037] FIG. 30 is a graph showing the relationship between the distance across the vehicle's width to the dividing line and the size (increased amount) of the detection region in the seventh embodiment.
[038] FIG. 31 is a block diagram showing the details of computer 30 according to the eighth embodiment.
[039] FIG. 32 is a diagram for describing the relationship between the type of dividing line and the size (increased amount) of detection regions A1, A2.
Detailed Description of the Invention
<<First Embodiment>>
[040] The preferred embodiments of the present invention are described below based on the drawings. FIG. 1 is a schematic diagram of the solid object detection device 1 according to the present embodiment, showing an example of a case in which the solid object detection device 1 is installed in a vehicle V. The solid object detection device 1 shown in FIG. 1 detects solid objects (e.g., other vehicles, two-wheeled vehicles, etc.) traveling in an adjacent traffic lane that is adjacent, across a dividing line, to the travel lane in which vehicle V is traveling; the solid object detection device 1 provides various information to the driver of vehicle V; and the solid object detection device 1 comprises a camera (image capture device) 10, a vehicle speed sensor 20, and a computer 30. The term "lane traveled", used below, refers to the band of travel in which vehicle V travels when there are no lane changes, which is also a region that excludes the dividing line. Likewise, the term "adjacent traffic lane" refers to the band of travel adjacent, across the dividing line, to the lane traveled, which is also a region that excludes the dividing line. The dividing line is a line, such as a white line, that serves as the border between the lane traveled and the adjacent traffic lane.
[041] The camera 10 shown in FIG. 1 is mounted at a height h at the rear of vehicle V, with its optical axis at an angle θ downward from the horizontal. The camera 10 is designed to capture images of the detection regions from that position. The vehicle speed sensor 20 detects the traveling speed of vehicle V, and a sensor or the like for capturing the rotation speed of the wheels, for example, is used for this purpose. Based on the images captured by the camera 10, the computer 30 detects solid objects (e.g., other vehicles, two-wheeled vehicles, etc.) located behind vehicle V.
[042] The solid object detection device 1 also has a warning device (not shown), and issues warnings to the driver of vehicle V in cases where there is a possibility that a solid object detected by the computer 30 will contact vehicle V.
[043] FIG. 2 is a bird's-eye view showing the travel state of vehicle V shown in FIG. 1. The camera 10 is capable of capturing the image of a region behind vehicle V, or specifically a region including the dividing line and the adjacent traffic lane, as shown in FIG. 2. Detection regions (predetermined regions) A1, A2 for detecting solid objects such as other vehicles are set up in the adjacent traffic lanes that are adjacent to the lane traveled in which vehicle V is traveling, and the computer 30 detects whether or not there are solid objects in the detection regions A1, A2. Such detection regions A1, A2 are configured based on their positions relative to vehicle V.
[044] FIG. 3 is a block diagram showing details of the computer 30 shown in FIG. 1. In FIG. 3, camera 10 and vehicle speed sensor 20 are also shown in order to provide a clear representation of the connection relationship.
[045] The computer 30 comprises a viewpoint conversion unit 31, a positional alignment unit (positional alignment device) 32, and a solid object detection unit (solid object detection device) 33, as shown in FIG. 3.
[046] The viewpoint conversion unit 31 inputs captured image data, including the detection regions A1, A2, obtained by the image capture performed by the camera 10, and converts the captured image data into bird's-eye view image data representing a view seen from a bird's point of view. The view from a bird's point of view is what would be seen from the point of view of an imaginary camera looking vertically downward, for example, from the air above. This viewpoint conversion is performed as described in Japanese Laid-Open Patent Publication No. 2008-219063, for example.
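For illustration only (the patent itself defers to Japanese Laid-Open Patent Publication No. 2008-219063 for the conversion), such a bird's-eye transformation is commonly implemented as an inverse perspective mapping with a planar homography. In the sketch below, the source points, output size, and ground scale are hypothetical calibration values, not taken from the patent:

```python
import cv2
import numpy as np

# Hypothetical calibration: four pixels on the road plane in the camera
# image and their top-down targets (e.g., 1 pixel = 2 cm on the ground).
SRC_PTS = np.float32([[240, 300], [400, 300], [620, 470], [20, 470]])
DST_PTS = np.float32([[0, 0], [200, 0], [200, 400], [0, 400]])
H = cv2.getPerspectiveTransform(SRC_PTS, DST_PTS)

def to_birds_eye(frame):
    """Warp a rear-camera frame into a bird's-eye view image (PBt)."""
    return cv2.warpPerspective(frame, H, (200, 400))
```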
[047] The positional alignment unit 32 sequentially inputs the bird's-eye view image data obtained by the viewpoint conversion of the viewpoint conversion unit 31, and aligns the positions of bird's-eye view image data input at different times. FIG. 4 is a drawing for describing a process of the positional alignment unit 32 shown in FIG. 3, where (a) shows the state of motion of vehicle V and (b) shows a description of the positional alignment.
[048] As shown in FIG. 4(a), vehicle V at the current time is positioned at V1, and vehicle V at a previous time is positioned at V2. Another vehicle V is traveling parallel to vehicle V at a position behind vehicle V; the other vehicle V at the current time is positioned at V3, and the other vehicle V at the previous time is positioned at V4. Furthermore, vehicle V moves a distance d in one such time interval. The term "a previous time" may refer to a time in the past from the current time by a length of time set in advance (e.g., one control cycle), or it may refer to a time in the past by any desired length of time.
[049] In such a state, the bird's-eye view image PBt at the current time is shown in FIG. 4(b). In the bird's-eye view image PBt, the white lines painted on the street are rectangular and are in a state of being viewed from above comparatively precisely. Meanwhile, the other vehicle at V3 is beginning to enter the image. Similarly, in the bird's-eye view image PBt-1 at the previous time, the white lines painted on the street are rectangular and are in a state of being viewed from above comparatively precisely, and the other vehicle at V4 is beginning to enter the image.
[050] The positional alignment unit 32 implements the positional alignment of the bird's-eye view images PBt, PBt-1 described above in terms of data. At this point, the positional alignment unit 32 offsets the bird's-eye view image PBt-1 of the previous time, and makes its position coincide with the bird's-eye view image PBt at the current time. The amount of offset d' is an amount corresponding to the travel distance d shown in FIG. 4(a), and is determined based on a signal from the vehicle speed sensor 20 and the length of time from the previous time to the current time.
[051] After positional alignment, the positional alignment unit 32 finds the differential between the bird's-eye view images PBt, PBt-1, and generates differential image data PDt. The pixel value of the differential image PDt can be the absolute value of the difference in pixel values between the bird's-eye view images PBt, PBt-1, or, in order to cope with changes in the light environment, it can be "1" when that absolute value exceeds a predetermined value and "0" when it does not.
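A minimal sketch of paragraphs [050] and [051] follows. It assumes that the travel distance d has already been converted into a whole-pixel offset along the image's travel axis, and the binarization threshold is an assumed value:

```python
import numpy as np

def differential_image(pb_t, pb_t1, shift_px, threshold=20):
    """Align PBt-1 with PBt by the offset d' and binarize |PBt - PBt-1|."""
    shifted = np.roll(pb_t1, shift_px, axis=0)  # offset along the travel direction
    if shift_px > 0:
        shifted[:shift_px, :] = 0               # rolled-in rows carry no data
    diff = np.abs(pb_t.astype(np.int16) - shifted.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # "1" above the predetermined value
```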
[052] FIG. 3 is again referenced. Furthermore, the computer 30 comprises a lateral position detection unit (lateral position detection device) 34. Based on the image data captured by the camera 10, the lateral position detection unit 34 detects the position of the vehicle (specifically, the side surface of vehicle V) in the traffic lane traveled by vehicle V, and the distance across the vehicle width to the dividing line dividing the traffic lanes. The lateral position detection unit 34 makes it possible for the computer 30 to detect such things as whether the vehicle is traveling through the center of the lane traveled or traveling closer toward either the left side or the right side.
[053] Furthermore, the solid object detection unit 33 detects solid objects based on the differential image data PDt as shown in FIG. 4. The solid object detection unit 33 comprises a differential waveform generating unit (differential waveform generating device) 33a and a region setting unit (region setting device) 33b.
[054] FIG. 5 is a schematic diagram showing the manner in which differential waveforms are generated by the differential waveform generating unit 33a shown in FIG. 3. The differential waveform generating unit 33a generates a differential waveform DWt from the parts of the differential image PDt that are equivalent to the detection regions A1, A2, as shown in FIG. 5. At this time, the differential waveform generating unit 33a generates the differential waveform DWt along the direction in which the solid object falls as a result of the viewpoint conversion. In the example shown in FIG. 5, the description uses only detection region A1 for convenience.
[055] For a specific description, the differential waveform generating unit 33a first defines a line La along the direction in which the solid object falls in the differential image data PDt. The differential waveform generating unit 33a then counts the number of differential pixels DP representing predetermined differentials along line La. The differential pixels DP representing predetermined differentials here are pixels that exceed a predetermined value when the pixel value of the differential image PDt is the absolute value of the difference between the pixel values of the bird's-eye view images PBt, PBt-1, and are pixels representing "1" when the pixel values of the differential image PDt are expressed as "0" and "1".
[056] After counting the number of differential pixels DP, the differential waveform generating unit 33a finds an intersection point CP of line La and a line L1. The differential waveform generating unit 33a correlates the intersection point CP and the counted number, determines the position on the horizontal axis (a position on the top-down axis in the image plane of FIG. 5) from the position of the intersection point CP, and determines the position on the vertical axis (a position on the left-right axis in the image plane of FIG. 5) from the counted number.
[057] The differential waveform generating unit 33a continues to similarly define a line along the direction in which the solid object falls, count the number of differential pixels DP, determine the position on the horizontal axis based on the position of the intersection point CP, and determine the position on the vertical axis from the counted number (the number of differential pixels DP). The solid object detection unit 33 generates a differential waveform DWt by sequentially repeating the above process and creating a frequency distribution.
[058] As shown in FIG. 5, the lines La and Lb in the falling direction of the solid object overlap detection region A1 over different distances. Then, assuming that detection region A1 is filled with differential pixels DP, line La has more differential pixels DP than line Lb. Consequently, when determining the position on the vertical axis from the counted number of differential pixels DP, the differential waveform generating unit 33a normalizes the position on the vertical axis based on the distance over which the lines La, Lb in the falling direction of the solid object overlap detection region A1. To provide a specific example, there are six differential pixels DP on line La in FIG. 5, and there are five differential pixels DP on line Lb. So when determining the position on the vertical axis from the counted number in FIG. 5, the differential waveform generating unit 33a normalizes the position on the vertical axis by a method such as dividing the counted number by the overlap distance. The values of the differential waveform DWt that correspond to the lines La, Lb in the solid object's falling direction are thus substantially equal, as shown in the differential waveform DWt.
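The counting and normalization of paragraphs [055] to [058] can be sketched as follows, assuming the lines La, Lb, ... in the falling direction are supplied as precomputed arrays of the pixel coordinates where each line overlaps detection region A1:

```python
import numpy as np

def differential_waveform(pd_t, fall_lines):
    """Build DWt: one normalized count per line in the falling direction.

    pd_t: binarized differential image PDt.
    fall_lines: list of (N, 2) integer arrays of (row, col) coordinates.
    """
    values = []
    for line in fall_lines:
        dp_count = int(pd_t[line[:, 0], line[:, 1]].sum())  # differential pixels DP
        values.append(dp_count / len(line))  # normalize by the overlap distance
    return np.array(values)  # index corresponds to the intersection point CP
```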
[059] When a differential waveform DWt is generated as described above, the solid object detection unit 33 detects the solid object based on the data of the differential waveform DWt. At this time, the solid object detection unit 33 first calculates an estimated velocity of the solid object by correlating the differential waveform DWt-1 of a previous time and the current differential waveform DWt. When the solid object is another vehicle V, for example, the differential waveform DWt is likely to have two local maximum values, because differential pixels DP are easily obtained at the tire parts of the other vehicle V. The relative speed of the other vehicle V with respect to vehicle V can therefore be found by finding the discrepancy between the local maximum values of the differential waveform DWt-1 of the previous time and the current differential waveform DWt. The solid object detection unit 33 thus finds the estimated velocity of the solid object, and evaluates whether or not the solid object is a detection target by evaluating whether the estimated velocity is an appropriate velocity for a solid object.
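The correlation of DWt-1 and DWt can be sketched as a search for the shift that best aligns the two waveforms. The bin-to-metres scale and the wrap-around shift below are simplifying assumptions, not details given by the patent:

```python
import numpy as np

def estimated_speed(dw_prev, dw_curr, metres_per_bin, dt, ego_speed):
    """Estimate the solid object's speed from the waveform offset ([059])."""
    n = len(dw_curr)
    shifts = range(-n // 2, n // 2)
    scores = [float(np.sum(dw_curr * np.roll(dw_prev, s))) for s in shifts]
    best_shift = list(shifts)[int(np.argmax(scores))]
    relative_speed = best_shift * metres_per_bin / dt  # relative to vehicle V
    return ego_speed + relative_speed                  # absolute estimated speed
```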
[060] The region setting unit 33b sets the sizes of the detection regions A1, A2 shown in FIG. 2. The greater the distance along the vehicle width to the dividing line detected by the lateral position detection unit 34, the further outward in the vehicle width direction the region setting unit 33b increases the size of the detection region A1 or A2 positioned on the side where the dividing line is located.
[061] FIG. 6 is a top view showing the travel state of the vehicle shown in FIG. 1, and showing an example of a case in which vehicle V is traveling off-center in the travel lane. As shown in FIG. 6, vehicle V is traveling off-center in the travel lane, and is traveling near the dividing line on the left side of the vehicle (the left side from the driver's point of view).
[062] In that case, as shown in FIG. 6, when another vehicle V is traveling at a distance from the other dividing line (the dividing line on the right side from the driver's point of view), the other vehicle V will sometimes not be positioned in the detection region A1 located on the right side from the driver's point of view. Then, in the present embodiment, the region setting unit 33b augments the detection region A1 to prevent situations that would cause detection failures.
[063] FIG. 7 is a top view showing the travel state of the vehicle shown in FIG. 1, and showing an example of a case in which the region setting unit 33b has increased the detection region A1. Detection region A1 is augmented by the region setting unit 33b as shown in FIG. 7. The other vehicle is thus positioned within the detection region A1, and failure to detect the other vehicle V can be prevented.
[064] FIG. 8 is a graph showing the relationship between the distance across the vehicle width Δy to the dividing line and the size of the detection region A1 (the increased amount Δy0fs).
[065] When the distance across the vehicle width Δy to the dividing line is between zero and y1, as shown in FIG. 8, the increased amount of detection region A1 is zero. When the distance across the vehicle width Δy is between y1 and y2, the increased amount of detection region A1 increases in accordance with the size of the distance across the vehicle width Δy. Furthermore, when the distance across the vehicle width Δy exceeds y2, the increased amount of detection region A1 is fixed at y0fs'. The reason why the increased amount of detection region A1 is fixed at the specific value y0fs' is that if detection region A1 were increased without limit, there is a possibility that detection region A1 would cover not only the adjacent traffic lane but subsequent lanes as well.
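The FIG. 8 mapping can be written directly as a piecewise function. The breakpoints y1, y2 and the maximum y0fs' are parameters whose concrete values the patent does not fix:

```python
def increase_amount(dy, y1, y2, y0fs_max):
    """Increased amount of detection region A1 versus the distance dy (FIG. 8)."""
    if dy <= y1:
        return 0.0                                # no increase near the dividing line
    if dy >= y2:
        return y0fs_max                           # capped at the limit y0fs'
    return y0fs_max * (dy - y1) / (y2 - y1)       # proportional between y1 and y2
```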
[066] In FIG. 8, the increased amount of detection region A1 increases proportionally over the interval of the distance across the vehicle width Δy between y1 and y2, but this increase is not particularly limited to a proportional increase, and may be an exponential or similar increase. As is clear from FIG. 8, when the distance across the vehicle width Δy to the dividing line becomes short again, the detection region A1 that has been increased is then contracted.
[067] The above description uses only detection region A1, but the same applies to detection region A2 as well. In the example shown in FIG. 8, the detection region A1 is increased based on the distance across the vehicle width Δy from the right side surface of the vehicle (the right side surface from the driver's point of view) to the dividing line on the right side, but when the size of the detection region A2 is varied, needless to say, the detection region is decided based on the distance across the vehicle width Δy from the left side surface of the vehicle (the left side surface from the driver's point of view) to the dividing line on the left side.
[068] Furthermore, the region setting unit 33b is configured so that the detection regions A1, A2 do not vary too much. This is because if the detection regions A1, A2 vary greatly, solid object detection becomes unstable, and there is a possibility that this will lead to solid object detection failures.
[069] Specifically, the region setting unit 33b is designed so that the amount by which the detection regions A1, A2 are varied does not exceed a threshold value (an increase setpoint or a contraction setpoint). For a more specific description, the region setting unit 33b finds a target value for the sizes of the detection regions A1, A2 based on the graph shown in FIG. 8. The region setting unit 33b then sequentially brings the sizes of the detection regions A1, A2 closer to the target value within a range that does not exceed the threshold value.
[070] The contraction threshold value (contraction setpoint), which is the threshold value when the detection regions A1, A2 are contracted, is set to be smaller than the increase threshold value (increase setpoint), which is the threshold value when the detection regions A1, A2 are increased. The detection regions A1, A2 are thus not contracted abruptly, and it is possible to prevent situations in which the other vehicle V leaves the detection regions A1, A2 and fails to be detected because the detection regions A1, A2 were contracted too much.
[071] The region setting unit 33b makes the threshold value smaller when a solid object is being detected than when a solid object is not being detected. This makes it possible to prevent situations in which the other vehicle V being detected leaves the detection regions A1, A2 and fails to be detected because the detection regions A1, A2 were contracted too much.
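A minimal sketch of the rate limiting of paragraphs [069] to [071] follows. The numeric limits are illustrative assumptions, chosen only so that the contraction threshold is smaller than the increase threshold and both shrink while a solid object is being detected:

```python
def update_region_size(current, target, detecting):
    """Step the region size toward its target without exceeding the thresholds."""
    grow_limit = 0.05 if detecting else 0.10    # first threshold < second (S4/S5)
    shrink_limit = 0.5 * grow_limit             # contraction threshold smaller (S7)
    limit = shrink_limit if target < current else grow_limit
    step = max(-limit, min(limit, target - current))
    return current + step
```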
[072] Next, the solid object detection method according to the present embodiment is described. FIG. 9 is a flowchart showing the solid object detection method according to the present embodiment.
[073] First, as shown in FIG. 9, the lateral position detection unit 34 detects the distance across the vehicle width Δy between the side surface of vehicle V and the dividing line (S1). At that time, the lateral position detection unit 34 detects the distance across the vehicle width Δy based on the image data captured by the camera 10. In the present embodiment, because the detection regions A1, A2 are configured behind and to the left and right of vehicle V, the lateral position detection unit 34 detects the distances across the vehicle width Δy between the left and right side surfaces of vehicle V and the left and right dividing lines, respectively. For convenience, the description below uses only detection region A1 as an example, but the same applies to the other detection region A2 as well.
[074] Next, the region setting unit 33b sets the target value of detection region A1 (S2). At that time, the region setting unit 33b sets the target value based on the graph data described with respect to FIG. 8. Then the solid object detection unit 33 evaluates whether or not solid object detection is currently taking place (S3).
[075] When it judges that solid object detection is taking place (S3: YES), the region setting unit 33b sets the threshold value that is the upper limit of the variation amount of detection region A1 to a first threshold value (S4). The process then transitions to step S6. When it judges that solid object detection is not taking place (S3: NO), the region setting unit 33b sets the threshold value that is the upper limit of the variation amount of detection region A1 to a second threshold value (S5). The process then transitions to step S6. The first threshold value here is less than the second threshold value. Abrupt changes in the detection region A1 are thus suppressed during solid object detection.
[076] In step S6, the solid object detection unit 33 evaluates whether or not the detection region A1 will be contracted, based on the target value found in step S2 (S6). When it is judged that the detection region A1 will be contracted (S6: YES), the region setting unit 33b lowers the threshold value set in step S4 or S5 (S7). Abrupt changes in detection region A1 can thus be suppressed when detection region A1 is contracted. The process then proceeds to step S8. When it is evaluated that the detection region A1 will not be contracted (S6: NO), that is, when the detection region A1 will be increased, the region setting unit 33b does not lower the threshold value set in step S4 or S5, and the process advances to step S8.
[077] In step S8, the region setting unit 33b changes the size of the detection region A1 (S8). At that time, the region setting unit 33b increases or shrinks the size of the detection region A1 within a range that does not exceed the threshold value obtained via the process described above.
[078] Computer 30 then detects the vehicle speed based on a signal from the vehicle speed sensor 20 (S9). Then the positional alignment unit 32 detects the differential (S10). At this time, the positional alignment unit 32 generates differential image data PDt as described with respect to FIG. 4.
[079] Then, the differential waveform generating unit 33a generates a differential waveform DWt (S11) in the manner described with respect to FIG. 5, based on the differential image PDt generated in step S10. The solid object detection unit 33 then calculates an estimated velocity of the solid object (S12) by correlating the differential waveform DWt-1 of the previous time and the current differential waveform DWt.
[080] The solid object detection unit 33 then evaluates whether or not the estimated velocity calculated in step S12 is that of a detection target (S13). In the present embodiment, the solid object detection device 1 detects another vehicle, a two-wheeled vehicle, or the like that has the possibility of contact during a traffic lane change. Then, in step S13, the solid object detection unit 33 assesses whether the estimated speed is appropriate as the speed of another vehicle, a two-wheeled vehicle, or the like.
[081] When the estimated speed is judged to be appropriate as the speed of another vehicle, a two-wheeled vehicle, or the like (S13: YES), the solid object detection unit 33 judges that the solid object indicated by the differential waveform DWt is a solid object (another vehicle, two-wheeled vehicle, or the like) that could be a detection target (S14). The process shown in FIG. 9 then ends. When it judges that the estimated speed is not appropriate as the speed of another vehicle, a two-wheeled vehicle, or the like (S13: NO), the solid object detection unit 33 evaluates that the solid object indicated by the differential waveform DWt is not a solid object that could be a detection target, and the process shown in FIG. 9 ends.
[082] Thus, according to the solid object detection device 1 and the solid object detection method according to the present embodiment, the greater the distance across the vehicle width Δy between the vehicle position and the dividing line, the further outward in the vehicle width direction the detection region A1 or A2 positioned on the side where the dividing line is located is increased; it is therefore possible to prevent situations in which, because vehicle V is separated from the dividing line, for example, the detection region A1 or A2 is not configured appropriately for the adjacent vehicle, and a solid object such as another vehicle is outside the detection region A1 or A2 and fails to be detected. The accuracy of detecting solid objects can therefore be improved.
[083] The sizes of the detection regions A1, A2 are increased subject to an increase threshold value, and the increased detection regions A1, A2 are then contracted inward in the vehicle width direction subject to a contraction threshold value smaller than the increase threshold value; so when the detection regions A1, A2 are contracted, the detection regions A1, A2 can be prevented from changing too much, and situations such as those that would cause detection failures can likewise be prevented.
[084] When a solid object is being detected, the threshold value is made smaller than when a solid object is not being detected. Specifically, because the contraction threshold value is made smaller than the increase threshold value, it is possible to prevent situations in which the sizes of the detection regions A1, A2 are contracted too much during solid object detection and the detection regions A1, A2 become extremely small, causing detection failures.
<<Second Embodiment>>
[085] Next, the second embodiment of the present invention is described. The solid object detection device and the solid object detection method according to the second embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[086] FIG. 10 is a block diagram showing the details of computer 30 according to the second embodiment. In FIG. 10, camera 10 and vehicle speed sensor 20 are also shown in order to obtain a clear description of the connection relationship.
[087] The computer 30 according to the second embodiment comprises a traffic lane width detection unit (width detection device) 35 as shown in FIG. 10. The traffic lane width detection unit 35 detects the traffic lane width of the lane traveled, based on the image data captured by the camera 10. The traffic lane width detection unit 35 can also detect the traffic lane width of the adjacent traffic lane and use this width as the traffic lane width of the lane traveled. This is because traffic lane widths are essentially uniform across the street.
[088] The vehicle travel state when the traffic lane width is small is described here with respect to FIG. 11. FIG. 11 is a top view showing the vehicle travel state when the width of the traffic lane is small, and shows an example of a case in which the region setting unit 33b has increased the detection region A1. When the width of the traffic lane is small and the detection region A1 is increased in the same way as in the first embodiment, there are cases where another vehicle V in a subsequent traffic lane enters the detection region A1, as shown in FIG. 11. When solid object detection is performed based on such a detection region A1, the accuracy of solid object detection decreases. The same applies to detection region A2.
[089] In the second embodiment, the smaller the traffic lane width detected by the traffic lane width detection unit 35, the smaller the region setting unit 33b makes the increased amount when increasing the sizes of the detection regions A1, A2 outward in the vehicle width direction.
[090] FIG. 12 is a graph showing the relationship between the distance across the vehicle width Δy to the dividing line and the size (increased amount Δy0fs) of the detection region A1 in the second embodiment.
[091] When the distance across the vehicle width Δy is between y1 and y2, the increased amount of the detection region A1 increases according to the size of the distance across the vehicle width Δy, as shown in FIG. 12, but the increased amount is smaller than in the example shown in FIG. 8. Specifically, the region setting unit 33b according to the second embodiment is designed to prevent the detection region A1 from increasing too much, by reducing the increased amount when increasing the detection regions A1, A2. This prevents the detection region A1 from being set in subsequent traffic lanes and the solid object detection accuracy from decreasing.
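Reusing the increase_amount sketch shown after paragraph [065], the second embodiment's behavior can be approximated by scaling the mapping with the lane width. The nominal width and the linear scaling below are assumptions, not values from the patent:

```python
def width_scaled_increase(dy, lane_width_m, nominal_width_m=3.5):
    """Narrower lane -> smaller increased amount and smaller limit y0fs' (FIG. 12)."""
    scale = min(1.0, lane_width_m / nominal_width_m)
    # Scaling the output lowers both the slope and the y0fs' cap together.
    return scale * increase_amount(dy, y1=0.3, y2=1.2, y0fs_max=1.0)
```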
[092] In the second embodiment, the maximum limit y0fs' is preferably also made smaller than in the example shown in FIG. 8. This makes it possible to even more reliably prevent the detection region A1 from being configured in subsequent traffic lanes.
[093] Next, the solid object detection method according to the second embodiment is described. FIGs. 13 and 14 are flowcharts showing the solid object detection method according to the second embodiment.
[094] First, the traffic lane width detection unit 35 detects the traffic lane width of the lane traveled based on the image data captured by the camera 10 (S21). The region setting unit 33b then sets the increased amount (S22). Specifically, the smaller the traffic lane width, the more the region setting unit 33b reduces the increased amount relative to the distance across the vehicle width Δy, as shown in FIG. 12. In this process, the region setting unit 33b preferably lowers the maximum limit y0fs' as well.
[095] In steps S23 to S36, the same process as steps S1 to S14 shown in FIG. 9 is executed.
[096] Thus, according to the solid object detection device 2 and the solid object detection method according to the second embodiment, the accuracy of detecting solid objects can be improved, and situations such as those that would cause failures in detecting solid objects can be prevented, similar to the first embodiment. It is also possible to prevent situations such as those in which the detection regions A1, A2 are contracted too much, causing detection failures.
[097] According to the second embodiment, the smaller the traffic lane width of the lane traveled, the smaller the amount by which the sizes of the detection regions A1, A2 are increased. Then, in cases where the width of the traffic lane is small, it is possible to prevent situations in which the detection regions A1, A2 are configured not in an adjacent traffic lane, but in a subsequent traffic lane.
<<Third Embodiment>>
[098] Next, the third embodiment of the present invention will be described. The solid object detection device and the solid object detection method according to the third embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[099] FIG. 15 is a block diagram showing the details of computer 30 according to the third embodiment. In FIG. 15, camera 10 and vehicle speed sensor 20 are also shown in order to obtain a clear description of the connection relationship.
[0100] The computer 30 according to the third embodiment comprises a traffic lane change detection unit (traffic lane change detection device) 36 as shown in FIG. 15. The traffic lane change detection unit 36 detects traffic lane changes by vehicle V; it calculates the extent of proximity to the dividing line based on the image data obtained by the image capture performed by the camera 10, for example, and assesses whether or not vehicle V is changing traffic lanes. The traffic lane change detection unit 36 is not limited to the above process, and can also evaluate traffic lane changes from the steering amount or by some other method.
[0101] Specifically, the traffic lane change detection unit 36 detects that vehicle V is changing traffic lanes when the side surface of vehicle V is within a predetermined distance (e.g., 10 cm) of the dividing line. The traffic lane change detection unit 36 may also be designed to detect that vehicle V is not changing traffic lanes when the side surface of the vehicle comes within the predetermined distance of the dividing line but then separates from the dividing line again by at least the predetermined distance. Furthermore, the traffic lane change detection unit 36 can judge that the traffic lane change is complete when the vehicle is separated from the dividing line by at least the predetermined distance after having changed traffic lanes (i.e., when the vehicle is separated from the dividing line by at least the predetermined distance after having crossed the dividing line by changing lanes).
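This entry/exit logic can be sketched as follows; the 10 cm figure is the example given in the text, and the use of a single shared threshold follows the paragraph's wording:

```python
class LaneChangeDetector:
    """Track whether vehicle V is changing lanes from the distance dy ([0101])."""

    def __init__(self, near_m=0.10):
        self.near_m = near_m      # predetermined distance (10 cm in the example)
        self.changing = False

    def update(self, dy):
        if dy <= self.near_m:
            self.changing = True   # side surface within the predetermined distance
        elif self.changing:
            self.changing = False  # separated again by at least that distance
        return self.changing
```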
[0102] The manner in which the vehicle changes traffic lanes is described with respect to FIG. 16. FIG. 16 is a top view showing the travel state of the vehicle when it is changing traffic lanes. As shown in FIG. 16, vehicle V is positioned in the center of the traffic lane (see symbol Va), and the vehicle then changes traffic lanes and reaches position Vb. At this time, the distance across the vehicle width Δy between the side surface of vehicle V and the dividing line temporarily becomes longer. The detection region A1 is then increased, and there is a possibility of another vehicle V in a subsequent traffic lane entering the detection region A1. In such cases, solid object detection accuracy is reduced.
[0103] In view of this, in the third embodiment, when a traffic lane change made by vehicle V is detected by the traffic lane change detection unit 36, the region setting unit 33b reduces, for a certain duration of time, the increased amount by which the sizes of the detection regions A1, A2 are increased. Specifically, for a certain duration of time after the traffic lane change made by vehicle V is detected, the region setting unit 33b reduces the increased amount Δy0fs of the detection region A1 or A2 with respect to the distance across the vehicle width Δy between the side surface of vehicle V shown in FIG. 16 and the dividing line, so that it is smaller than before the traffic lane change, as shown in FIG. 17. It is thus possible to prevent situations in which the detection region A1 or A2 is temporarily greatly increased during a traffic lane change. FIG. 17 is a graph showing the relationship between the distance across the vehicle width and the detection region size (the increased amount Δy0fs) in the third embodiment.
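The temporary reduction can be sketched as a gain applied to the increased amount for a fixed period after a lane change is detected. The hold time and reduction factor are assumptions, since the patent says only "a certain duration of time":

```python
import time

def lane_change_gain(change_detected_at, hold_s=3.0, reduced_gain=0.5):
    """Return the factor applied to the increased amount Δy0fs ([0103])."""
    if change_detected_at is None:
        return 1.0                                    # no recent lane change
    if time.monotonic() - change_detected_at < hold_s:
        return reduced_gain                           # temporarily smaller increase
    return 1.0
```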
[0104] Next, the solid object detection method according to the third embodiment is described. FIGs. 18 and 19 are flowcharts showing the solid object detection method according to the third embodiment.
[0105] First, based on the image data captured by the camera 10, the traffic lane change detection unit 36 calculates the extent of proximity to the dividing line and evaluates whether or not vehicle V is changing traffic lanes (S41). When it is determined that vehicle V is not changing traffic lanes (S41: NO), the process advances to step S43. When vehicle V is judged to be changing traffic lanes (S41: YES), the region setting unit 33b sets the increased amount (S42). Specifically, the region setting unit 33b reduces the increased amount relative to the distance across the vehicle width Δy, as shown in FIG. 17. In this process, the region setting unit 33b preferably lowers the maximum limit y0fs' as well.
[0106] In steps S43 to S56, the same process as steps S1 to S14 shown in FIG. 9 is executed.
[0107] Thus, according to the solid object detection device 3 and the solid object detection method according to the third embodiment, the accuracy of detecting solid objects can be improved, and situations such as those that would cause failures in detecting solid objects can be prevented, similar to the first embodiment. It is also possible to prevent situations such as those in which the detection regions A1, A2 are contracted too much, causing detection failures.
[0108] According to the third embodiment, when a traffic lane change made by vehicle V is detected, the increased amount by which the sizes of the detection regions A1, A2 are increased is reduced. It is then possible to prevent situations in which, while the vehicle is temporarily close to the dividing line while changing traffic lanes, the detection regions A1, A2 are configured not in an adjacent traffic lane, but in a subsequent traffic lane.
<<Fourth Embodiment>>
[0109] Next, the fourth embodiment of the present invention is described. The solid object detection device and the solid object detection method according to the fourth embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[0110] FIG. 20 is a block diagram showing the details of computer 30 according to the fourth embodiment. In FIG. 20, camera 10 and vehicle speed sensor 20 are also shown in order to obtain a clear description of the connection relationship.
[0111] The computer 30 according to the fourth embodiment comprises a ground line detection unit 37 as shown in FIG. 20. The ground line detection unit 37 detects the positions where the tires of another vehicle V traveling in an adjacent traffic lane contact the ground (positions in the vehicle width direction) as ground lines. Details are described below using FIGs. 21 and 22. FIGs. 21 and 22 are diagrams for describing the method of detecting ground lines by the ground line detection unit 37.
[0112] First, the ground line detection unit 37 configures a plurality of lines L1 to Ln at different positions in the detection region A1 or A2, the lines being substantially parallel with the direction of travel of vehicle V. For example, in the example shown in FIG. 21, the ground line detection unit 37 sets up four substantially parallel lines. The following description uses four substantially parallel lines L1 to L4 as an example, but the lines are not limited to this example, and there can be two, three, five, or more parallel lines.
[0113] The ground line detection unit 37 then causes the differential waveform generating unit 33a to generate differential waveforms DWt for the configured lines L1 to L4. Specifically, after making the differential waveform generating unit 33a count the number of differential pixels DP, the ground line detection unit 37 finds the intersection points CP between the lines L1 to L4 and the line La along the direction in which the solid object falls in the differential image data PDt, and causes the differential waveform generating unit 33a to generate a differential waveform DWt for each of the lines L1 to L4 by correlating the intersection points CP and the counted numbers. The ground line detection unit 37 can thus obtain a plurality of differential waveforms, as shown in FIG. 21(b). In FIG. 21(b), the differential waveform DWt1 is based on the substantially parallel line L1, the differential waveform DWt2 is based on the substantially parallel line L2, the differential waveform DWt3 is based on the substantially parallel line L3, and the differential waveform DWt4 is based on the substantially parallel line L4.
[0114] Among the plurality of differential waveforms DWt1 to DWt4, the differential waveform DWt3 based on the substantially parallel line L3 near vehicle V has a greater tendency toward increased frequencies than the differential waveforms DWt1, DWt2 based on the substantially parallel lines L1, L2 farther away from vehicle V. This is because the other vehicle V is a solid object, and the other vehicle V therefore extends over a long distance within the differential image PDt. However, the differential waveform DWt3 and the differential waveform DWt4 have the same frequencies. This is because the substantially parallel lines L3, L4 both overlap the other vehicle in the differential image PDt. Specifically, this is because there are no differential pixels DP between the substantially parallel lines L3 and L4.
[0115] The ground line detection unit 37 evaluates the ground line Lt of the other vehicle V from the shape changes of the plurality of differential waveforms DWt1 to DWt4 described above. In the case of the example shown in FIG. 21(b), the ground line detection unit 37 evaluates the substantially parallel line L3 to be the ground line Lt. Specifically, the ground line Lt is evaluated from the rates of increase in area shown in FIG. 22. FIG. 22 is a graph showing the rates of increase in the areas of the plurality of differential waveforms DWt1 to DWt4 shown in FIG. 21(b). The ground line detection unit 37 refers to the rates of increase in the calculated areas from the farthest substantially parallel line toward the nearest substantially parallel line, as shown in FIG. 22. The area of the differential waveform DWt2 exhibits a constant rate of increase relative to the area of the differential waveform DWt1. The area of the differential waveform DWt4 and the area of the differential waveform DWt3 are the same, and the rate of increase is a predetermined value or less. This is because there are no differential pixels DP between the substantially parallel lines L3 and L4, as described above. Specifically, it can be estimated that there is no solid object (e.g., the tires of an adjacent vehicle) between the substantially parallel lines L3 and L4. Consequently, the ground line detection unit 37 detects the substantially parallel line L3 as the ground line Lt of the other vehicle V.
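The area-rate test of this paragraph can be sketched as follows, with the waveforms ordered from the farthest line L1 to the nearest line Ln, and with an assumed value standing in for the "predetermined value" on the rate of increase:

```python
import numpy as np

def detect_ground_line(waveforms, rate_floor=0.05):
    """Return the index of the ground line Lt among lines L1..Ln ([0115])."""
    areas = [float(np.sum(w)) for w in waveforms]   # area of each DWt1..DWtn
    for i in range(1, len(areas)):
        prev = areas[i - 1] if areas[i - 1] > 0 else 1e-9
        if (areas[i] - prev) / prev <= rate_floor:
            return i - 1     # area stopped increasing: previous line is Lt (e.g., L3)
    return len(areas) - 1    # fall back to the nearest line
```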
[0116] Returning to FIG. 20, the region setting unit 33b according to the fourth embodiment increases the sizes of the detection regions A1, A2 based on the distance across the vehicle width Δy between the side surface of vehicle V and the dividing line, similar to the first embodiment. Furthermore, in the fourth embodiment, the region setting unit 33b changes the increased amount Δy0fs when increasing the detection regions A1, A2 outward in the vehicle width direction, based on the ground line Lt of the other vehicle V detected by the ground line detection unit 37. Specifically, the shorter the distance along the vehicle width from the side surface of vehicle V to the ground line Lt of the other vehicle V, the more the region setting unit 33b reduces the increased amount Δy0fs when increasing the sizes of the detection regions A1, A2, as shown in FIG. 23.
[0117] Thus, according to the solid object detection device 4 and the solid object detection method according to the fourth embodiment, the accuracy of detecting solid objects can be improved, and situations such as those that would cause failures in detecting solid objects can be prevented, similar to the first embodiment. It is also possible to prevent situations such as those in which the detection regions A1, A2 are contracted too much, causing detection failures.
[0118] According to the fourth embodiment, the ground line Lt of the other vehicle V traveling in an adjacent traffic lane is detected, and the shorter the distance from the side surface of vehicle V to the ground line Lt, the more the increased amount Δy0fs by which the sizes of the detection regions A1, A2 are increased is reduced. When the distance in the vehicle width direction from the side surface of vehicle V to the adjacent vehicle is short, the other vehicle V in the adjacent traffic lane can be properly detected even when the increased amount by which the sizes of the detection regions A1, A2 are increased is kept small. It is thus possible in the fourth embodiment to minimize the changed amount Δy0fs when the sizes of the detection regions A1, A2 are increased, thereby effectively preventing the detection regions A1, A2 from being set in subsequent or off-street traffic lanes, and preventing other vehicles traveling in subsequent traffic lanes, off-street grass, and the like from being erroneously detected as adjacent vehicles.
<<Fifth Embodiment>>
[0119] Next, the fifth embodiment of the present invention is described. The solid object detection device and the solid object detection method according to the fifth embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[0120] FIG. 24 is a block diagram showing the details of computer 30 according to the fifth embodiment. In FIG. 24, camera 10 and vehicle speed sensor 20 are also shown in order to provide a clear description of the connection relationship.
[0121] The computer 30 according to the fifth embodiment comprises a turn state determining unit (turn state determining device) 38 as shown in FIG. 24. The turn state determining unit 38 determines whether or not vehicle V is in a turning state based on the vehicle speed detected by the vehicle speed sensor 20 or the steering amount detected by a steering angle sensor (not shown), and also detects the turning radius of vehicle V when it is in a turning state. The method by which the turning state is detected by the turn state determining unit 38 is not particularly limited; the turning state of vehicle V can be detected based on the detection results of a lateral acceleration sensor, for example, or the turning state of vehicle V can be detected by predicting the shape of the street based on the image captured by the camera 10. The turning state of vehicle V can also be detected by identifying the street on which vehicle V is traveling according to map information or current position information of vehicle V from a navigation system or the like.
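As one hedged example of deriving a turning radius from the steering amount named above, a bicycle-model approximation can be used. The patent does not prescribe this formula, and the wheelbase value is an assumption:

```python
import math

def turning_radius_m(steer_angle_rad, wheelbase_m=2.7):
    """Approximate turning radius of vehicle V from the steering angle."""
    if abs(steer_angle_rad) < 1e-4:
        return math.inf                              # effectively straight travel
    return wheelbase_m / math.tan(abs(steer_angle_rad))
```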
[0122] The region setting unit 33b according to the fifth embodiment increases the detection regions A1, A2 based on the distance across the vehicle width Δy between the side surface of vehicle V and the dividing line, similar to the first embodiment; in the fifth embodiment, the region setting unit 33b also changes the increased amount Δy0fs by which the sizes of the detection regions A1, A2 are increased in accordance with the turning radius of vehicle V when vehicle V is determined by the turn state determining unit 38 to be in a turning state. Specifically, the smaller the turning radius of vehicle V, the smaller the region setting unit 33b makes the increased amount Δy0fs of the detection regions A1, A2.
[0123] FIG. 25 is a top view showing the vehicle's travel state when it is turning, and also shows an example of a case in which the region setting unit 33b has increased the detection region A1. In this scenario, where vehicle V is turning through a curve or the like, as shown in FIG. 25, the smaller the turning radius of vehicle V, the more easily the detection region A1 or A2 on the outer side of the turning direction is configured within subsequent traffic lanes, and when the detection region A1 is increased outward in the vehicle width direction in the same way as in the first embodiment, there are cases where another vehicle V in a subsequent traffic lane enters the detection region A1 and this other vehicle V is erroneously detected as an adjacent vehicle.
[0124] In view of this, when vehicle V is determined by the turn state determining unit 38 to be turning, the smaller the turning radius of vehicle V, the smaller the region setting unit 33b makes the increased amount Δy0fs of the detection regions A1, A2 with respect to the distance across the vehicle width Δy, as shown in FIG. 26. It is thus possible to prevent the detection regions A1, A2 from being set up in subsequent traffic lanes even when vehicle V is turning through a curve, and as a result, other vehicles V in subsequent traffic lanes can be effectively prevented from being erroneously detected as adjacent vehicles.
[0125] In addition to the configuration described above, the region setting unit 33b in the fifth embodiment can also employ the configuration of the second embodiment, in which the increase amount Δy0fs of the detection regions A1, A2 is reduced based on the width of the traffic lane traveled by vehicle V. In that case, the region setting unit 33b can compare the increase amount Δy0fs of the detection regions A1, A2 determined on the basis of the turning radius with the increase amount Δy0fs determined on the basis of the traffic lane width, select the smaller increase amount Δy0fs, and increase the sizes of the detection regions A1, A2 outward in the vehicle width direction accordingly, as sketched below.
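As a rough illustration of paragraphs [0122]-[0125], the increase amount can be modeled as the minimum of a distance-based ramp (first embodiment), a turning-radius cap (FIG. 26), and a lane-width cap (second embodiment). All break points and maxima below are hypothetical values, not figures from the patent.

```python
def base_increase(dy, dy_min=0.3, dy_max=1.0, max_increase=0.7):
    """Increase amount as a function of the lateral distance dy (m) to
    the dividing line, as in the first embodiment: zero below dy_min,
    ramping linearly up to max_increase at dy_max."""
    if dy <= dy_min:
        return 0.0
    if dy >= dy_max:
        return max_increase
    return max_increase * (dy - dy_min) / (dy_max - dy_min)

def turn_cap(turning_radius_m, r_small=30.0, r_large=200.0, max_increase=0.7):
    """Cap on the increase while turning: the smaller the radius, the
    smaller the allowed increase (cf. FIG. 26)."""
    if turning_radius_m >= r_large:
        return max_increase
    if turning_radius_m <= r_small:
        return 0.0
    return max_increase * (turning_radius_m - r_small) / (r_large - r_small)

def set_increase(dy, turning_radius_m, lane_width_cap):
    """Fifth-embodiment combination: take the smallest of the
    distance-based increase, the turning-radius cap, and the
    lane-width-based cap of the second embodiment."""
    return min(base_increase(dy), turn_cap(turning_radius_m), lane_width_cap)

# Example: 0.5 m from the line, 50 m turning radius, 0.6 m lane cap
# -> min(0.20, ~0.08, 0.60) ≈ 0.08 m of outward increase
```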
[0126] Thus, according to the solid object detection device 5 and the solid object detection method according to the fifth embodiment, the accuracy of detecting solid objects can be improved, and situations that would cause failures to detect solid objects can be prevented in advance, similar to the first embodiment. It is also possible to prevent situations in which the detection regions A1, A2 are excessively contracted, causing detection failures.
[0127] In the fifth embodiment, when vehicle V is determined by the turn state determination unit 38 to be turning, the smaller the turning radius of vehicle V, the smaller the increase amount Δy0fs of the detection regions A1, A2 with respect to the distance along the vehicle width Δy, as shown in FIG. 26, whereby the detection regions A1, A2 can be prevented from being set within subsequent traffic lanes even when vehicle V is turning through a curve as shown in FIG. 25, and as a result, other vehicles in subsequent traffic lanes can be effectively prevented from being mistakenly detected as adjacent vehicles.
[0128] Furthermore, according to the fifth embodiment, when vehicle V is changing traffic lanes, the increase amount Δy0fs of the detection regions A1, A2 can be suppressed based on the turning state of vehicle V. For example, when vehicle V is traveling on a straight road and is detected as turning, it can be judged that the smaller the turning radius, the greater the possibility that vehicle V is making a traffic lane change. In view of this, by reducing the increase amount Δy0fs of the detection regions A1, A2 on the assumption that the smaller the turning radius of vehicle V, the greater the possibility that vehicle V is making a traffic lane change, it is possible to effectively prevent the detection regions A1, A2 from being set within subsequent traffic lanes during a traffic lane change and to prevent other vehicles traveling in subsequent traffic lanes from being erroneously detected as adjacent vehicles.
<<Sixth Embodiment>>
[0129] Next, the sixth embodiment of the present invention is described. The solid object detection device and the solid object detection method according to the sixth embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[0130] FIG. 27 is a top view showing the travel state of the vehicle in the sixth embodiment, and also shows an example of a case in which the region setting unit 33b has increased the detection region A1. In the sixth embodiment, the region setting unit 33b shifts the detection regions A1, A2 outward in the vehicle width direction based on the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line, and then increases the detection regions A1, A2 outward in the vehicle width direction based on that same distance Δy. Details are described below.
[0131] FIG. 28(A) is a graph showing the relationship between the distance along the vehicle width Δy to the dividing line and the shift amount y0fs1 by which the detection regions A1, A2 are shifted outward in the vehicle width direction, and FIG. 28(B) is a graph showing the relationship between the distance along the vehicle width Δy to the dividing line and the increase amount Δy0fs2 by which the detection regions A1, A2 are increased outward in the vehicle width direction.
[0132] Specifically, when the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line is less than y3, as shown in FIG. 28(A), the region setting unit 33b leaves the detection regions A1, A2 unchanged, and when the distance along the vehicle width Δy is y3 or greater and less than y4, the region setting unit 33b shifts the detection regions A1, A2 outward in the vehicle width direction according to the distance along the vehicle width Δy. While the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line is y3 or greater and less than y4, the detection regions A1, A2 are not increased outward in the vehicle width direction.
[0133] When the distance along the vehicle width Δy is y4 or greater, the region setting unit 33b shifts the detection regions A1, A2 outward in the vehicle width direction by the predetermined shift amount y0fs1' and increases the detection regions A1, A2 outward in the vehicle width direction, as shown in FIG. 28(A). Specifically, when the distance along the vehicle width Δy is y4 or greater and less than y5, the detection regions A1, A2 are increased outward in the vehicle width direction according to the distance along the vehicle width Δy, as shown in FIG. 28(B), and when the distance along the vehicle width Δy is y5 or greater, the detection regions A1, A2 are increased outward in the vehicle width direction by the predetermined increase amount Δy0fs2'.
[0134] In a scenario in which vehicle V moves away from the dividing line, for example, the region setting unit 33b thus shifts the detection regions A1, A2 outward in the vehicle width direction according to the distance along the vehicle width Δy once the distance Δy between the side surface of vehicle V and the dividing line is y3 or greater, and when the distance Δy then reaches y4 or greater, the region setting unit 33b stops shifting the detection regions A1, A2 outward in the vehicle width direction and instead increases the detection regions A1, A2 outward in the vehicle width direction. The region setting unit 33b then increases the detection regions A1, A2 outward in the vehicle width direction according to the distance along the vehicle width Δy until it reaches y5, and at the point in time when the distance along the vehicle width Δy reaches y5, the region setting unit 33b also stops increasing the detection regions A1, A2 outward in the vehicle width direction.
[0135] Conversely, in a scenario where vehicle V approaches the dividing line, when the distance along the vehicle width Δy falls below y5, the region setting unit 33b contracts the enlarged detection regions A1, A2 inward in the vehicle width direction. When the distance along the vehicle width Δy then falls below y4, the region setting unit 33b stops contracting the detection regions A1, A2 inward in the vehicle width direction and instead shifts the detection regions A1, A2 inward in the vehicle width direction according to the distance along the vehicle width Δy. When the distance along the vehicle width Δy falls below y3, the region setting unit 33b then stops shifting the detection regions A1, A2 inward in the vehicle width direction. A sketch of this shift-then-expand mapping follows.
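Because both the shift and the expansion are monotone functions of Δy, the same mapping covers both moving away from and approaching the dividing line. A minimal sketch, assuming hypothetical values for the thresholds y3-y5 and for the saturation amounts y0fs1' and Δy0fs2':

```python
def shift_and_expand(dy, y3=0.4, y4=0.8, y5=1.2,
                     max_shift=0.4, max_expand=0.5):
    """Map the lateral distance dy (m) to a (shift, expansion) pair for
    the detection region, per FIG. 28. Returns amounts in meters,
    measured outward in the vehicle width direction."""
    if dy < y3:
        return 0.0, 0.0           # region left unchanged
    if dy < y4:
        # shift grows with dy; no expansion yet (FIG. 28(A))
        return max_shift * (dy - y3) / (y4 - y3), 0.0
    if dy < y5:
        # shift saturated at y0fs1'; expansion grows (FIG. 28(B))
        return max_shift, max_expand * (dy - y4) / (y5 - y4)
    return max_shift, max_expand  # both saturated
```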
[0136] Furthermore, in the sixth embodiment, the region setting unit 33b has threshold values (increase setpoints) for the change amounts according to the various conditions, namely when vehicle V is changing traffic lanes, when a solid object is detected, and during normal times (when vehicle V is traveling straight and no solid object is detected), both for the shift amount y0fs1 and for the increase amount Δy0fs2 of the detection regions A1, A2. The region setting unit 33b then gradually shifts the detection regions A1, A2 outward in the vehicle width direction within a range that does not exceed the threshold value of y0fs1 for the applicable condition, and gradually increases the sizes of the detection regions A1, A2 outward in the vehicle width direction within a range that does not exceed the threshold value of Δy0fs2. The threshold values for each condition described above are applied irrespective of the vehicle's turning state or the width of the traffic lane traveled.
[0137] Thus, according to the solid object detection device 6 and the solid object detection method according to the sixth embodiment, the accuracy of detecting solid objects can be improved, and situations that would cause failures to detect solid objects can be prevented in advance, similar to the first embodiment. It is also possible to prevent situations in which the detection regions A1, A2 are excessively contracted, causing detection failures.
[0138] According to the sixth embodiment, the detection regions A1, A2 are shifted outward in the vehicle width direction based on the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line, and when the distance along the vehicle width Δy then reaches a predetermined value or greater and the shift amount of the detection regions A1, A2 reaches a predetermined amount, the following effects can be achieved by increasing the detection regions A1, A2 outward in the vehicle width direction instead of shifting them further.
[0139] Specifically, when a solid object (another vehicle or the like) traveling in an adjacent traffic lane is detected in detection region A1 or A2, the farther outward in the vehicle width direction the detection region A1 or A2 is positioned, the greater the tendency for the solid object's movement speed to be calculated as faster than its actual movement speed, and the farther inward in the vehicle width direction the detection region A1 or A2 is positioned, the greater the tendency for the solid object's movement speed to be calculated as slower than its actual movement speed. Thus, when the detection regions A1, A2 are greatly increased outward in the vehicle width direction based on the distance along the vehicle width Δy, there are cases where the calculated movement speed of the detected solid object is not uniform, and the accuracy of detecting the solid object decreases depending on where in the detection region A1 or A2 the solid object is detected. In the present embodiment, when vehicle V is separated from the dividing line, such non-uniformity in the detected movement speed of the solid object can be minimized and the solid object (adjacent vehicle) can be properly detected by shifting the detection regions A1, A2 outward in the vehicle width direction based on the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line.
[0140] On the other hand, when the detection regions A1, A2 are shifted too far outward in the vehicle width direction, there are cases where a two-wheeled vehicle or the like traveling in a position close to vehicle V in the vehicle width direction does not enter the detection region A1 or A2, and that two-wheeled vehicle cannot be detected. In the present embodiment, however, when the shift amount by which the detection regions A1, A2 are shifted outward in the vehicle width direction is a predetermined amount or greater, such problems can be effectively resolved by increasing the detection regions A1, A2 outward in the vehicle width direction instead of shifting them further.
<<Seventh Embodiment>>
[0141] Next, the seventh embodiment of the present invention is described. The solid object detection device and the solid object detection method according to the seventh embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[0142] FIG. 29 is a block diagram showing the details of computer 30 according to the seventh embodiment. In FIG. 29, the camera 10 and the vehicle speed sensor 20 are also shown in order to provide a clear description of the connection relationship.
[0143] The computer 30 according to the seventh embodiment comprises a foreign substance detection unit (foreign substance detection device) 39 as shown in FIG. 29. Based on the image captured by the camera 10, the foreign substance detection unit 39 detects foreign substances such as rainwater or water stains adhering to the lens. For example, the foreign substance detection unit 39 detects the amount of rainwater adhering to the lens by detecting the operating state of a wiper, or by means of a rainwater sensor that radiates infrared light at the lens and detects the amount by which the radiated infrared light is attenuated by the rainwater; the foreign substance detection unit 39 then outputs the detected amount of rainwater to the region setting unit 33b.
[0144] The foreign substance detection unit 39 is not limited to detecting rainwater, and can, for example, also detect water stains or grime adhering to the lens. For example, by extracting the edges of a photographed subject from the captured image and evaluating the sharpness of the image from the characteristics of the extracted edges, the foreign substance detection unit 39 can judge the extent to which water stains or the like are adhering to the lens and the lens is clouded (a thin white film formed on the lens surface), and can thus detect the amount of foreign substances adhering to the lens. Alternatively, when an edge of similar intensity is detected in the same region of the captured image over a certain duration of time, the foreign substance detection unit 39 can judge that foreign substances are adhering to that region and detect the amount of foreign substances adhering to the lens.
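The patent only states that edge sharpness is evaluated; one common stand-in for such a sharpness measure, offered purely as an illustration, is the variance of the Laplacian of the image. The reference constant below is a hypothetical calibration value.

```python
import cv2
import numpy as np

def lens_contamination_score(gray_frame):
    """Proxy for foreign substances on the lens: a dirty or clouded
    lens blurs the image, lowering edge sharpness. Returns a score in
    [0, 1], where 1 means heavily contaminated."""
    sharpness = cv2.Laplacian(gray_frame, cv2.CV_64F).var()
    # 300.0 is an illustrative calibration constant for a clean lens.
    return float(np.clip(1.0 - sharpness / 300.0, 0.0, 1.0))
```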
[0145] The region setting unit 33b according to the seventh embodiment increases the detection regions A1, A2 outward in the vehicle width direction based on the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line, similar to the first embodiment, and in the seventh embodiment, it also varies the increase amount Δy0fs used when increasing the detection regions A1, A2 based on the amount of foreign substances detected by the foreign substance detection unit 39. Specifically, the greater the amount of foreign substances adhering to the lens, the smaller the region setting unit 33b makes the increase amount Δy0fs used when increasing the detection regions A1, A2.
[0146] FIG. 30 is a graph showing the relationship between the amount of foreign substances adhering to the lens and the sizes of the detection regions A1, A2 (the increase amount Δy0fs). The greater the amount of foreign substances adhering to the lens, the smaller the region setting unit 33b makes the increase amount Δy0fs used when increasing the detection regions A1, A2, as shown in FIG. 30. Specifically, when the amount of foreign substances adhering to the lens is less than q1, the region setting unit 33b keeps the increase amount Δy0fs of the detection regions A1, A2 at the initial increase amount determined based on the distance along the vehicle width Δy. When the amount of foreign substances adhering to the lens is q1 or greater and less than q2, the greater the amount of foreign substances adhering to the lens, the smaller the region setting unit 33b makes the increase amount Δy0fs of the detection regions A1, A2. When the amount of foreign substances adhering to the lens exceeds q2, the region setting unit 33b sets the increase amount of the detection regions A1, A2 to a predetermined minimum value Δy0fs'.
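As an illustration of the FIG. 30 mapping, the piecewise-linear sketch below keeps the base increase up to q1, ramps down between q1 and q2, and clamps to a minimum beyond q2. The breakpoints q1, q2 and the minimum value are hypothetical.

```python
def increase_with_contamination(base_increase_m, q, q1=0.2, q2=0.6,
                                min_increase_m=0.1):
    """Reduce the increase amount with the detected amount q of foreign
    substances, per FIG. 30: keep the base increase below q1, ramp
    down linearly between q1 and q2, clamp to a minimum above q2."""
    if q < q1:
        return base_increase_m
    if q >= q2:
        return min_increase_m
    frac = (q - q1) / (q2 - q1)
    return base_increase_m + frac * (min_increase_m - base_increase_m)
```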
[0147] Thus, according to the solid object detection device 7 and the solid object detection method according to the seventh embodiment, the accuracy of detecting solid objects can be improved, and situations that would cause failures to detect solid objects can be prevented in advance, similar to the first embodiment. It is also possible to prevent situations in which the detection regions A1, A2 are excessively contracted, causing detection failures.
[0148] According to the seventh embodiment, the amount of foreign substances adhering to the lens of the camera 10 is detected, and the greater the amount of foreign substances, the smaller the increase amount Δy0fs of the detection regions A1, A2 is made. The greater the amount of foreign substances adhering to the lens, the more the luminous flux from the photographed subject is blocked or irregularly reflected by the foreign substances, and as a result, there are cases where the image of the dividing line captured by the camera 10 is distorted or clouded, and the accuracy of detecting the dividing line is reduced. Thus, in the present embodiment, when a large amount of foreign substances adheres to the lens, the increase amount Δy0fs of the detection regions A1, A2 is reduced, whereby it is possible to effectively prevent the increase amount of the detection regions A1, A2 from becoming too large due to an error in detecting the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line, and to effectively prevent other vehicles traveling in subsequent traffic lanes, off-road grass, and the like from being mistakenly detected as adjacent vehicles.
<<Eighth Embodiment>>
[0149] Next, the eighth embodiment of the present invention is described. The solid object detection device and the solid object detection method according to the eighth embodiment are similar to those of the first embodiment, but the specific configuration and process are partially different. The points of difference from the first embodiment are described below.
[0150] FIG. 31 is a block diagram showing the details of computer 30 according to the eighth embodiment. In FIG. 31, camera 10 and vehicle speed sensor 20 are also shown in order to provide a clear description of the connection relationship.
[0151] The computer 30 according to the eighth embodiment comprises a dividing line type specification unit (dividing line type specification device) 40 as shown in FIG. 31. The dividing line type specification unit 40 specifies the type of the dividing line based on the image captured by the camera 10. The method for specifying the dividing line type is not particularly limited; the dividing line type specification unit 40 can, for example, specify the dividing line type by performing pattern matching on the dividing line captured by the camera 10. Alternatively, the dividing line type specification unit 40 can specify the dividing line type by identifying the road (traffic lane) on which vehicle V is traveling, according to map information or the current position of vehicle V in a navigation system or the like.
[0152] The region setting unit 33b according to the eighth embodiment increases the detection regions A1, A2 outward in the vehicle width direction based on the distance along the vehicle width Δy between the side surface of vehicle V and the dividing line, similar to the first embodiment, and in the eighth embodiment, the region setting unit 33b also varies the increase amount Δy0fs used when increasing the detection regions A1, A2 based on the type of dividing line specified by the dividing line type specification unit 40.
[0153] FIG. 32 is a diagram showing the relationship between the type of dividing line and the sizes of the detection regions A1, A2. The smaller the possibility that the specified dividing line is one dividing the lane traveled by vehicle V from an adjacent traffic lane, the smaller the region setting unit 33b makes the increase amount Δy0fs of the detection regions A1, A2. In the example shown in FIG. 32, the region setting unit 33b is capable of distinguishing four types of dividing lines: a dashed white line, a solid white line, a yellow line, and a multiple line. When the dividing line is a dashed white line, the region setting unit 33b judges that, among these four types, the possibility is greatest that the region adjacent to the lane traveled by vehicle V is an adjacent traffic lane, and makes the increase amount Δy0fs of the detection regions A1, A2 the largest among the four types of dividing lines. When the dividing line is a solid white line, the region setting unit 33b judges that there is some possibility that the region adjacent to the lane traveled by vehicle V is an adjacent traffic lane, and sets the increase amount Δy0fs of the detection regions A1, A2 such that it is smaller than that for the dashed white line but larger than that for the other dividing lines. When the dividing line is a yellow line, the region setting unit 33b judges that the possibility is low that the region adjacent to the lane traveled by vehicle V is an adjacent traffic lane, and reduces the increase amount Δy0fs of the detection regions A1, A2. When the dividing line is a multiple line, the region setting unit 33b judges that the possibility is lowest that the region adjacent to the lane traveled by vehicle V is an adjacent traffic lane, and makes the increase amount Δy0fs of the detection regions A1, A2 the smallest among the four types of dividing lines. The lookup sketched below illustrates this ordering.
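A minimal sketch of the FIG. 32 ordering as a weight lookup; the weight values themselves are hypothetical, and only the ranking (dashed white > solid white > yellow > multiple line) comes from the text.

```python
# Ranking from FIG. 32: the likelier the region beyond the line is an
# adjacent traffic lane, the larger the allowed increase.
LINE_TYPE_WEIGHT = {
    "white_dashed": 1.0,  # most likely an adjacent lane
    "white_solid": 0.7,
    "yellow": 0.3,        # often a shoulder or oncoming lane
    "multiple": 0.1,      # least likely an adjacent lane
}

def increase_for_line_type(base_increase_m, line_type):
    """Scale the base increase amount by the dividing line type;
    unknown types are treated conservatively."""
    return base_increase_m * LINE_TYPE_WEIGHT.get(line_type, 0.1)
```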
[0154] Thus, according to the solid object detection device 8 and the solid object detection method according to the eighth embodiment, the accuracy of detecting solid objects can be improved, and situations that would cause failures to detect solid objects can be prevented in advance, similar to the first embodiment. It is also possible to prevent situations in which the detection regions A1, A2 are excessively contracted, causing detection failures.
[0155] According to the eighth embodiment, the type of dividing line is specified, and a different increase amount Δy0fs of the detection regions A1, A2 is used based on the specified type of dividing line. For example, when the dividing line is a yellow line or a multiple line, it is possible that the region adjacent to the lane traveled by vehicle V is the shoulder, an off-road area, or an oncoming traffic lane, in which case there is a greater possibility that off-road grass, noise, or the like is detected when the detection regions A1, A2 are increased, and the accuracy of detecting solid objects (other vehicles and the like) decreases. In the present embodiment, when the dividing line is a yellow line or a multiple line, for example, the detection of off-road grass, noise, or the like can be effectively suppressed by suppressing the increase of the detection regions A1, A2. When the dividing line is a dashed white line or a solid white line, the region adjacent to the lane traveled by vehicle V is likely an adjacent traffic lane, in which case a solid object (another vehicle) in the adjacent traffic lane can be properly detected by appropriately increasing the detection regions A1, A2 according to the distance along the vehicle width Δy.
[0156] The present invention has been described above based on the embodiments, but the present invention is not limited to the above embodiments; changes can be added and the embodiments can be combined within a range that does not deviate from the scope of the present invention.
[0157] For example, in the third embodiment described above, a configuration was exemplified in which the increase amount Δy0fs used when increasing the sizes of the detection regions A1, A2 is reduced for a certain length of time when a traffic lane change made by vehicle V is detected by the traffic lane change detection unit 36. A possible addition to this configuration is one in which the movement speed of vehicle V in the vehicle width direction is calculated when a traffic lane change made by vehicle V is detected, and the greater the movement speed of vehicle V in the vehicle width direction, the smaller the increase amount Δy0fs used when increasing the sizes of the detection regions A1, A2. In this case, the greater the movement speed of vehicle V in the vehicle width direction, the greater the possibility that vehicle V is changing traffic lanes can be judged to be. Thus, by making the increase amount Δy0fs of the detection regions A1, A2 smaller the higher the movement speed of vehicle V in the vehicle width direction, it is possible to effectively prevent the detection regions A1, A2 from being greatly increased and from being set within subsequent traffic lanes while vehicle V changes traffic lanes. The method for calculating the movement speed of vehicle V in the vehicle width direction is not particularly limited, and the region setting unit 33b can calculate it either based on the change over time in the distance along the vehicle width Δy from the side surface of vehicle V to the dividing line as detected by the lateral position detection unit 34, or using a lateral acceleration sensor (not shown), as sketched below.
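A minimal sketch of the Δy-based option: a finite difference over successive lateral-distance samples, followed by a speed-dependent scaling of the increase amount. The sampling handling and the normalization constant v_ref are hypothetical.

```python
def lateral_speed(dy_samples, dt_s):
    """Movement speed in the vehicle width direction from successive
    lateral-distance samples dy (m) taken every dt_s seconds; a
    two-point finite difference, unfiltered for brevity."""
    if len(dy_samples) < 2:
        return 0.0
    return (dy_samples[-1] - dy_samples[-2]) / dt_s

def increase_during_lane_change(base_increase_m, lateral_speed_mps,
                                v_ref=0.5):
    """The faster the lateral movement, the smaller the increase;
    v_ref is an illustrative normalization constant (m/s)."""
    scale = max(0.0, 1.0 - abs(lateral_speed_mps) / v_ref)
    return base_increase_m * scale
```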
[0158] In the sixth embodiment described above, a configuration was exemplified in which the sizes of the detection regions A1, A2 are increased outward in the vehicle width direction after the detection regions A1, A2 have been shifted outward in the vehicle width direction based on the distance along the vehicle width Δy from the side surface of vehicle V to the dividing line, but the configuration is not limited as such; another possible option is for the sizes of the detection regions A1, A2 to be increased outward in the vehicle width direction while the detection regions A1, A2 are simultaneously shifted outward in the vehicle width direction, for example.
[0159] Furthermore, in the embodiments described above, the speed of vehicle V is determined based on a signal from the vehicle speed sensor 20, but determining the vehicle speed is not limited as such, and the speed can be estimated from a plurality of images captured at different times. In this case, the vehicle speed sensor is unnecessary, and the configuration can be simplified. The behavior of the vehicle can likewise be evaluated solely from the images.
[0160] Additionally, in the above embodiments, an image captured at the current time and an image from a previous time are converted into bird's-eye views, a differential image PDt is generated by aligning and combining the converted bird's-eye views, and the generated differential image PDt is evaluated along the direction of fall (the direction in which the solid object falls over when the captured images are converted into bird's-eye views) to generate a differential waveform DWt, but the present invention is not limited as such. In another possible option, for example, only the image from a previous time is converted into a bird's-eye view, the converted bird's-eye view is positionally aligned and then converted back into an equivalent of the captured image, a differential image is generated from that image and the current-time image, and a differential waveform DWt is generated by evaluating the generated differential image along a direction equivalent to the direction of fall (i.e., a direction obtained by converting the direction of fall into a direction in the captured image). In other words, if the current-time image and the previous-time image are positionally aligned, a differential image PDt is generated from the difference of the two positionally aligned images, and the differential image PDt can be evaluated along the direction in which the solid object would fall if the image were converted into a bird's-eye view; it is not absolutely necessary to generate an explicit bird's-eye view. A simplified sketch of the baseline differential-waveform computation follows.
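A deliberately simplified sketch of the baseline: binarize the absolute difference of two aligned bird's-eye views to get the differential image, then count differential pixels column by column as a stand-in for counting along the direction of fall. The threshold value is hypothetical.

```python
import numpy as np

def differential_waveform(bev_now, bev_prev_aligned, diff_threshold=30):
    """Binarize the difference of two aligned bird's-eye views (the
    differential image PDt) and count differential pixels per column
    as a crude differential waveform DWt. In the patent the count runs
    along the solid object's direction of fall, which is only
    approximately vertical, so the per-column sum is a simplification."""
    diff = np.abs(bev_now.astype(np.int16) - bev_prev_aligned.astype(np.int16))
    pdt = (diff > diff_threshold).astype(np.uint8)  # differential image
    return pdt.sum(axis=0)                          # differential waveform
```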
[0161] The solid object detection devices 1 to 8 according to the present embodiments detect a solid object based on the differential waveform DWt, but are not limited as such, and can, for example, detect a solid object using optical flow or an image template. Furthermore, detecting the solid object is not limited to using the differential waveform DWt, and the differential image PDt can be used directly. Moreover, when the sizes of the detection regions A1, A2 are increased, the sizes of the detection regions A1, A2 themselves are changed in the present embodiments, but the present invention is not limited as such, and enlargement areas separate from the detection regions A1, A2 can be set instead.
[0162] Furthermore, the positional alignment unit 32 in these embodiments positionally aligns the positions of bird's-eye view images from different times in the bird's-eye view, but this "positional alignment" process can be performed with an accuracy corresponding to the type of detection target or the required detection accuracy. For example, strict positional alignment based on the same time and the same position can be performed, or looser positional alignment sufficient merely to ascertain the coordinates of each bird's-eye view image can be performed.
[0163] List of Symbols
1-8 - solid object detection devices
10 - camera (image capture device)
20 - vehicle speed sensor
30 - computer
31 - viewpoint conversion unit
32 - positional alignment unit
33 - solid object detection unit (solid object detection device)
33a - differential waveform generation unit
33b - region setting unit (region setting device)
34 - lateral position detection unit (lateral position detection device)
35 - traffic lane width detection unit (width detection device)
36 - traffic lane change detection unit (traffic lane change detection device)
37 - ground line detection unit (ground line detection device)
38 - turn state determination unit (turn state determination device)
39 - foreign substance detection unit (foreign substance detection device)
40 - dividing line type specification unit (dividing line type specification device)
a - angle of view
A1, A2 - detection regions
CP - intersection point
DP - differential pixels
DWt, DWt' - differential waveforms
La, Lb - lines along the direction of fall of the solid object
PB - bird's-eye view image
PD - differential image
V - vehicle, other vehicle
Δy - distance along the vehicle width
Claims (12)
1. Solid object detection device (1) for detecting a solid object traveling in an adjacent traffic lane that is adjacent, across a dividing line serving as a boundary, to the traffic lane traveled by a vehicle (V), the solid object detection device comprising: an image capture device (10) adapted to capture an image including the dividing line and a predetermined region (A1, A2) of the adjacent traffic lane, the image capture device being installed in the vehicle; a solid object evaluation device (33) adapted to evaluate whether or not a solid object is present in the image of the predetermined region captured by the image capture device (10); a lateral position detection device (34) adapted to detect, from the image captured by the image capture device (10), the distance along the vehicle width (Δy) between the dividing line and the position of the vehicle in the lane traveled by the vehicle (V); and a traffic lane change detection device (36) for detecting a traffic lane change made by the vehicle; CHARACTERIZED by the fact that: a region setting device (33b) is adapted to increase the size of the predetermined region positioned on the side where the dividing line is located farther outward in the vehicle width direction correspondingly with respect to an increase in the distance along the vehicle width to the dividing line detected by the lateral position detection device (34), such that the greater the detected distance along the vehicle width to the dividing line, the farther outward in the vehicle width direction the region setting device increases the size of the predetermined region; and the amount by which the size of the predetermined region is increased outward in the vehicle width direction is reduced by the region setting device, for a certain period of time, when a traffic lane change made by the vehicle is detected by the traffic lane change detection device (36).
2. Solid object detection device (1) according to claim 1, CHARACTERIZED by the fact that the region setting device calculates the movement speed of the vehicle in the vehicle width direction when a traffic lane change made by the vehicle is detected by the traffic lane change detection device; and causes the amount by which the size of the predetermined region is increased outward in the vehicle width direction to be correspondingly reduced with respect to an increase in the movement speed in the vehicle width direction.
3. Solid object detection device (1) according to claim 1 or 2, CHARACTERIZED in that it further comprises a width detection device (35) for detecting the traffic lane width of the lane traveled by the vehicle (V) or of the adjacent traffic lane; the amount by which the size of the predetermined region is increased outward in the vehicle width direction being reduced by the region setting device correspondingly with respect to a decrease in the traffic lane width detected by the width detection device.
4. Solid object detection device (1) according to any one of claims 1 to 3, CHARACTERIZED by the fact that when the region setting device increases the size of the predetermined region, the region setting device enlarges the predetermined region in increments of a predetermined increase setpoint through multiple iterations of a process, and when the region setting device returns the enlarged predetermined region to its original size, the region setting device contracts the predetermined region inward in the vehicle width direction in increments of a setpoint smaller than the increase setpoint through multiple iterations of a process.
5. Solid object detection device (1) according to any one of claims 1 to 4, CHARACTERIZED by the fact that when a solid object is being detected, the region setting device makes the setpoint smaller than when a solid object is not being detected.
6. Solid object detection device (1) according to any one of claims 1 to 5, CHARACTERIZED in that it further comprises a ground line detection device for detecting a ground line of a solid object traveling in the adjacent traffic lane; the amount by which the size of the predetermined region is increased outward in the vehicle width direction being reduced by the region setting device correspondingly with respect to a decrease in the distance in the vehicle width direction from the vehicle to the ground line.
7. Solid object detection device (1) according to any one of claims 1 to 6, CHARACTERIZED in that it further comprises a turn state determination device (38) for detecting the turning state of the vehicle; the amount by which the size of the predetermined region is increased outward in the vehicle width direction being reduced by the region setting device correspondingly with respect to a decrease in the turning radius of the vehicle detected by the turn state determination device.
8. Solid object detection device (1) according to any one of claims 1 to 7, CHARACTERIZED by the fact that when the distance along the vehicle width is a predetermined value or greater, the region setting device moves the predetermined region outward in the vehicle width direction and increases the size of the predetermined region outward in the vehicle width direction.
9. Solid object detection device (1) according to claim 8, CHARACTERIZED by the fact that the region setting device increases the size of the predetermined region outward in the vehicle width direction after moving the predetermined region outward in the vehicle width direction.
10. Solid object detection device (1) according to any one of claims 1 to 9, CHARACTERIZED in that it further comprises a foreign substance detection device (39) for detecting foreign substances adhering to a lens of the image capture device; the amount by which the size of the predetermined region is increased outward in the vehicle width direction being reduced by the region setting device correspondingly with respect to an increase in the amount of foreign substances detected by the foreign substance detection device.
11. Solid object detection device (1) according to any one of claims 1 to 10, CHARACTERIZED in that it further comprises a dividing line type specification device (40) for specifying the type of the dividing line; the region setting device causing the amount by which the size of the predetermined region is increased outward in the vehicle width direction to be varied based on the dividing line type specified by the dividing line type specification device.
12. Solid object detection method for detecting a solid object traveling in an adjacent traffic lane that is adjacent, across a dividing line serving as a boundary, to the traffic lane traveled by a vehicle, the solid object detection method comprising: an image capture step of capturing, from the vehicle, an image including the dividing line and a predetermined region of the adjacent traffic lane; a solid object evaluation step of evaluating whether or not a solid object is present in the image of the predetermined region captured in the image capture step; a lateral position detection step of detecting the distance along the vehicle width between the dividing line and the position of the vehicle in the lane traveled by the vehicle from the image obtained in the image capture step; and a traffic lane change detection step of detecting a traffic lane change made by the vehicle; CHARACTERIZED by the fact that: a region setting step causes the amount by which the size of the predetermined region is increased outward in the vehicle width direction to correspondingly increase with respect to an increase in the distance along the vehicle width to the dividing line detected in the lateral position detection step, such that the greater the detected distance along the vehicle width to the dividing line, the farther outward in the vehicle width direction the size of the predetermined region is increased, the predetermined region being positioned on the side where the dividing line is located; and reduces the amount by which the size of the predetermined region is increased outward in the vehicle width direction, for a certain period of time, when a traffic lane change made by the vehicle is detected.