Patent abstract:
The invention relates to a method for projecting at least one image by a projection system of a motor vehicle comprising a device for detecting an event, an estimating device able to estimate the time to reach said event, a processing unit adapted to generate a control signal, a projection device adapted to receive the control signal and to project digital images, and a storage unit storing at least one image representative of a pictogram. The method comprises the following steps: a) detection (30) of an event; b) estimation (32) of the time to reach the event; c) selection (34) of at least one image representing a pictogram characteristic of the detected event; d) establishment (36) of a sequence of images representing an animation of said pictogram, said sequence of images being clocked according to the time estimated during estimation step b); and e) projection (46) of said image sequence onto the roadway.
Publication number: FR3056775A1
Application number: FR1659298
Filing date: 2016-09-29
Publication date: 2018-03-30
Inventors: Xavier Morel; Stephan Sommerschuh; Weicheng Luo; Hafid El Idrissi
Applicant: Valeo Vision SA
Main IPC classes: G02B 27/18 (2017.01); B60Q 1/02
Patent description:

Method for projecting images by a projection system of a motor vehicle, and associated projection system
The present invention is in the field of automotive lighting.
In particular, the invention relates to a method of projecting images onto the roadway and to a projection system for a motor vehicle.
Driving at night is more difficult than driving during the day. Road signs can only be made out once they are within the range of the headlamps. It is therefore difficult and tiring to drive at night. This difficulty is further increased when the driver has to find his way on a route he does not know. The voice guidance of a satellite navigation (GPS) system does not always provide sufficient help, and it is inconvenient to take one's eyes off the road to look at the screen of the GPS.
It is therefore desirable to provide a night-driving aid which makes it easier to read direction signs or to drive on an unknown route.
To this end, the subject of the invention is a method of projecting at least one image by a projection system of a motor vehicle comprising a device for detecting an event, a processing unit capable of generating a control signal, a projection device suitable for receiving the control signal and for projecting digital images, and a storage unit storing at least one image representative of a pictogram. The projection method comprises the following steps (a sketch of this flow is given after the list):
a) detection of an event,
b) estimation of the time to reach the event,
c) selection of at least one image representing a pictogram characteristic of the event detected, and
d) establishment of a sequence of images representing an animation of said pictogram, said sequence of images being clocked as a function of the time estimated during step b); and
e) projection of said sequence of images onto the road.
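By way of nonlimiting illustration, the flow of steps a) to e) can be sketched as follows; the interfaces named here (detector, estimator, storage, processor, projector and their methods) are hypothetical placeholders and do not appear in the application.

```python
# Hypothetical sketch of the claimed projection loop; the application does
# not specify an API, so every name below is invented for illustration.

def projection_loop(detector, estimator, storage, processor, projector):
    event = detector.detect_event()                    # step a), detection 30
    while not event.reached():
        t = estimator.time_to_event(event)             # step b), estimation 32,
                                                       # refreshed continuously
        image = storage.select_pictogram(event.type)   # step c), selection 34
        sequence = processor.build_sequence(image, t)  # step d), clocked by t
        projector.project(sequence)                    # step e), projection 46
```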
According to particular embodiments, the projection method according to the invention includes one or more of the following characteristics:
- the projection method also includes the following steps:
• capture of an image of the driver of the motor vehicle and determination of the position of the driver in a predefined reference frame called the projection reference frame Rp,
• calculation of a transformation matrix M as a function of the determined position of the driver, said sequence of images being established from said at least one selected image, from said transformation matrix and from predefined parameters;
- said animation comprises at least the pivoting of the pictogram about an axis that is horizontal and perpendicular to the direction of movement of the vehicle;
- The projection method comprises a step of adding at least one shadow area on said at least one selected image so that the pictogram is perceived in relief by said driver;
- the animation represents an enlargement of the pictogram;
- the animation represents a displacement of the pictogram;
- The animation includes over-intensification or under-intensification of at least part of an image of the sequence of images;
- the event is one of the following: the arrival of a fork, a dangerous bend, a speed bump or a motorway exit;
- the detection device is a device for the geographic location of the vehicle, and the event is an indication of the direction to follow in order to complete an itinerary selected by the driver;
- the method further comprises a step of capturing an image of the driver of the motor vehicle, and the step of determining the position of the driver in a predefined reference frame called the projection reference frame (Rp) is implemented from the captured image.
The invention also relates to an image projection system for a motor vehicle, said projection system comprising:
- an event detection device,
- a device for estimating the time to reach the event,
- a storage unit capable of storing at least one image representing a pictogram characteristic of an event,
a processing unit capable of selecting from the storage unit at least one image representing a pictogram characteristic of the event detected, the processing unit being configured to establish a sequence of images representing an animation of said pictogram, said sequence of images being clocked as a function of the time estimated by the estimation device, and
- a projection device capable of projecting said sequence of images onto the road.
According to particular embodiments, the projection system according to the invention has one or more of the following characteristics:
the processing unit is able to determine the position of the driver in a predefined reference frame called the projection reference frame from the at least one captured image, the processing unit being able to calculate a transformation matrix as a function of the determined position of the driver, the processing unit being able to establish said sequence of images from said at least one selected image, from said transformation matrix and from predefined parameters;
the projection device comprises a light source capable of emitting a light beam, an imager device capable of imaging the sequence of images and a projector capable of projecting the sequence of images onto the roadway;
the system includes an imager capable of capturing an image of the driver, and the processing unit is able to search for the position of the driver in the captured image and to define the transformation matrix M from the determined position of the driver.
The invention will be better understood on reading the description which follows, given solely by way of example and made with reference to the figures in which:
- Figure 1 is a schematic view of the projection system according to a first embodiment of the invention,
FIG. 2 is a diagram representing the stages of the projection method according to a first embodiment of the invention,
FIG. 3 is a diagram representing the detail of a step of the process illustrated in FIG. 2,
FIG. 4 is a side view of a vehicle equipped with a projection system according to a first embodiment of the invention,
FIG. 5 is a perspective view of studs which can be imaged by the projection method according to the present invention, and
FIG. 6 is a perspective view of a first image of a sequence of images,
FIG. 7 is a perspective view of the last image of the sequence of images illustrated in FIG. 6,
FIG. 8 is a schematic view of the projection system according to a second embodiment of the invention,
- Figures 9 to 25 are figures from patent application number PCT / EP2016 / 071596.
The projection method according to the present invention is implemented by a projection system 2 shown diagrammatically in FIG. 1.
This projection system 2 comprises a device 4 for detecting an event, a device 5 for estimating the time for the vehicle to reach the event, a storage unit 6 suitable for storing images to be projected, and a processing unit 10 connected to the detection device 4, to the estimation device 5 and to the storage unit 6.
The detection device 4 is able to detect an event. According to the present patent application, an event is for example the arrival of a fork, a dangerous turn, a speed bump, a motorway exit, etc.
The detection device 4 is, for example, a satellite geolocation system of the “GPS” device type (acronym of the term “Global Positioning System” in English).
As a variant, the detection device 4 can, for example, consist of a camera capable of filming the road and its edges and an electronic unit provided with image-processing software. The camera is able to acquire images representing the road as well as the signs located at the side of the road. The electronic unit is connected to the camera. It is suitable for detecting events from the processing of the acquired images.
The device 5 for estimating the time for the vehicle to reach the event includes, for example, the "GPS" satellite geolocation system already used in the detection device 4.
As a variant, the estimation device 5 comprises a device for measuring the vehicle speed and an electronic processing unit connected to the speed-measuring device and to the detection device 4. In this case, the electronic unit is able to determine the time to reach the event from the speed of the vehicle and the distance between the vehicle and the event, this distance being given by the detection device 4.
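As a nonlimiting sketch, this variant reduces to a distance/speed ratio; the function name and units below are assumptions introduced here for illustration.

```python
def time_to_event(distance_m: float, speed_mps: float) -> float:
    """Time (in s) for the vehicle to reach the event, from the distance
    given by the detection device 4 and the measured vehicle speed."""
    if speed_mps <= 0.0:
        return float("inf")  # vehicle stopped: the event is not approached
    return distance_m / speed_mps

# e.g. 150 m to a motorway exit at 90 km/h (25 m/s) gives 6 s
assert time_to_event(150.0, 25.0) == 6.0
```

A new estimate is produced whenever the measured speed changes, matching the continuous re-estimation described in step 32 below.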
The storage unit 6 is a memory of the ROM, UVPROM, PROM, EPROM or EEPROM type. It is suitable for storing images, each representing a pictogram.
A pictogram is a graphic sign representative of a situation whose meaning is likely to be understood quickly. The pictogram includes a figurative drawing and / or alphanumeric symbols. The pictograms are characteristic of the events. They can for example represent a symbol of a road sign, lines, guide arrows or road studs.
The storage unit 6 is also capable of storing the coordinates of the position of the driver in a predefined reference frame, called the projection reference frame Rp. This reference frame Rp is shown in FIG. 4. In particular, the storage unit stores the coordinates of the position of the driver's eyes in this reference frame Rp. This position is an average position established from the eye positions of several drivers of different heights and builds.
The processing unit 10 is a processor type calculation unit.
The projection system 2 according to the first embodiment of the invention further comprises a projection device 12 suitable for projecting images onto the road.
The projection device 12 comprises a light source 16 capable of emitting a light beam, an imaging device 18 capable of forming a digital image from the light beam coming from the light source 16 and from a control signal coming from the processing unit 10, and a projector 20 shaped to receive the image formed by the imaging device 18 and to project it onto the roadway.
The light source 16 is, for example, constituted by a light-emitting diode and a collimator. As a variant, the light-emitting diode is replaced by a laser source.
The imaging device 18 is, for example, constituted by a matrix of micromirrors, generally known by the acronym DMD (Digital Micro-mirror Device). It is connected to the processing unit 10. It includes a large number of micromirrors distributed in rows and columns. Each micromirror is suitable for receiving a part of the light beam emitted by the light source 16 and for reflecting it either towards the projector 20 or towards a light absorber. Together, the micromirrors are suitable for projecting a digital image.
As a variant, other types of imaging device can be used in the present invention, such as, for example, imaging devices based on MEMS, LED matrix or an LCD screen.
The projector 20 generally comprises an input lens and an output lens. These lenses are made of plastic and / or glass.
The output lens is for example a converging lens.
A projection system 29 according to a second embodiment is shown in FIG. 8. This projection system 29 is identical to the projection system 2 according to the first embodiment of the invention except that the storage unit 6 does not contain the coordinates of the position of the driver in the projection reference frame Rp and that the projection system includes an imager connected to the processing unit. The imager is capable of imaging the driver of the motor vehicle. The imager is, for example, constituted by a still camera or a video camera. The camera of a driver-monitoring (anti-drowsiness) system could be used. The processing unit is able to find the position of the driver in the captured image by image processing. This image processing is carried out, for example, using edge detection. In particular, the processing unit searches for the position of the driver's eyes in the captured image. The position of the driver's eyes is then defined in a reference frame tied to the projection device, the projection reference frame Rp.
As a variant, the sequence of images represents an animation of a pictogram clocked as a function of the time estimated by the estimation device 5, but steps 38 to 44 are not implemented. Consequently, the pictograms of this animation do not give the driver the visual impression of lying in a plane different from that of the roadway.
Referring to FIG. 2, the projection method according to the present invention begins with a step 30 of detecting an event produced by the detection device 4. After detection, the detection device 4 transmits to the processing unit 10 the event detection information as well as the information relating to the type of event detected.
During a step 32, the estimation device 5 estimates the time it will take for the motor vehicle to reach the event. It transmits this information continuously and in real time to the processing unit 10. Thus, if the motor vehicle starts to accelerate after a first estimate made by the estimation device, the device again estimates the time necessary to reach the event and transmits this second estimate to the processing unit 10.
During a step 34, the processing unit selects from the storage unit 6, from among all the stored images and as a function of the event detected by the detection device 4, at least one image representative of a pictogram characteristic of the detected event. The selected image is transmitted from the storage unit 6 to the processing unit 10. Thus, when the event relates to roadworks along the edges of the roadway, the processing unit 10 selects an image representative of an alignment of road studs, as shown in FIG. 5. When the event relates to a fork to the right or a change of direction to the right, the processing unit 10 selects an image representative of an arrow, as shown in FIGS. 6 and 7.
During a step 36, the processing unit 10 establishes a sequence of images representing an animation of said pictogram, said sequence of images being clocked as a function of the time estimated during step 32.
According to a first embodiment of the invention, the animation comprises the pivoting of the pictogram represented in the selected image about a horizontal axis A-A perpendicular to the direction of movement of the motor vehicle, as illustrated in FIG. 4. For example, FIG. 6 represents the projection onto the roadway of the first image of a sequence of images representing an arrow directed to the right. FIG. 7 represents the projection onto the roadway of the last image of this same sequence of images.
The visual impression of the image pivoting about the horizontal axis A-A requires special processing of the selected image. The detail of this particular processing is shown in FIG. 3 and is described below.
With reference to FIG. 3, the step of establishing a sequence of images begins with a step 38 during which the position of the driver is determined in the projection reference frame Rp. This determination is carried out by looking up the coordinates in the storage unit 6. In particular, the coordinates of the position of the driver's eyes are looked up.
During a step 40, the processing unit 10 calculates a transformation matrix M as a function of the position of the driver's eyes in the projection reference frame Rp. This transformation matrix M is shaped so as to distort the selected image in such a way that the pictogram appears to the driver's eyes as if it extended in a viewing plane PV different from the plane defined by the roadway.
An example of a plane PV is shown in FIG. 4. By multiplying the selected image by the transformation matrix M, a distorted image is obtained such that the driver of the vehicle does not have the impression of viewing an image lying flat on the roadway in the zone ZP shown in bold in FIG. 4, but instead has the impression of viewing an image that extends in the plane PV. In reality, the image is indeed projected onto the zone ZP of the roadway.
This plane PV extends at an angle ε to the plane of the roadway. The transformation matrix M is a function of this angle parameter ε. The angle ε is the angle between the road surface and the plane in which the pictogram appears to the driver's eyes.
One way of calculating this transformation matrix was the subject of a previous patent application filed on September 13, 2016 under number PCT/EP2016/071596. That previous application has not yet been published. It is copied at the end of the present description to give an example of implementation of the present invention.
During a step 42, the processing unit generates a sequence of images which, when projected, gives a visual impression of the pictogram pivoting around the horizontal axis A-A.
This sequence of images is obtained by generating several transformation matrices M having different angles ε and then multiplying the selected image by these transformation matrices M.
The processing unit 10 is adapted to clock the sequence of images as a function of the time estimated during step 32, in such a way that the speed of the visual pivoting impression depends on the distance between the vehicle and the event. If the vehicle speed increases, the visual impression of pivoting about the horizontal axis A-A speeds up.
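For illustration only, and assuming (which the application does not state) a linear interpolation of the angle ε and frames spread uniformly over the estimated time, this clocking can be sketched as follows.

```python
import numpy as np

def clock_pivot_sequence(n_frames: int, t_event_s: float,
                         eps_start_deg: float = -90.0,
                         eps_end_deg: float = -35.0):
    """Return (timestamp_s, epsilon_deg) pairs: the pivot angles are spread
    over the estimated time to the event, so the apparent pivoting speed
    increases as t_event_s shrinks (i.e. as the vehicle speeds up)."""
    times = np.linspace(0.0, t_event_s, n_frames)
    angles = np.linspace(eps_start_deg, eps_end_deg, n_frames)
    return list(zip(times, angles))

# each (t, eps) pair yields one image of the sequence: the selected
# pictogram multiplied by the matrix M built for that angle eps
frames_10s = clock_pivot_sequence(25, 10.0)  # one frame every ~0.42 s
frames_5s = clock_pivot_sequence(25, 5.0)    # same pivot, twice as fast
```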
During a step 44, the processing unit 10 adds shaded areas to the images of the sequence of images to give the driver the visual impression that the pictogram represented on the transformed image is in relief. This addition of shadow is achieved by known image processing techniques. The processing unit 10 generates a control signal representative of the sequence of images and transmits it to the imaging device 18.
During a step 46, the projector 20 projects the sequence of images onto the roadway, clocked as a function of the time estimated by the estimation device 5.
Alternatively, step 44 may not be carried out.
According to a second embodiment of the invention, not shown, the animation includes a displacement of the pictogram. For example, if the pictogram is an arrow pointing to the right, the arrow can be moved from the center of the roadway towards the right. In this case, the selection step 34 comprises a step of selecting several images forming a preset sequence representing a displacement of the arrow to the right. The step of establishing the sequence of images then consists only in clocking these images as a function of the time estimated by the estimation device 5. Steps 38 to 40 are not implemented in this embodiment.
According to a third embodiment of the invention, not shown, the animation includes an enlargement movement of the pictogram. Similarly in this case, the selection step 34 includes a step of selecting several images forming a pre-established sequence of images representing an enlargement of the pictogram.
An example of an enlargement movement is shown in FIG. 5, in the case where the pictogram represents a road stud processed so as to appear visually to the driver in a vertical plane.
In the example illustrated in FIG. 5, the processing unit selected, during step 34, a sequence of images. Then a single transformation matrix M having a fixed angle ε was applied to the images of the selected sequence so that the driver has the visual impression that the road stud extends in a vertical plane. Finally, during a step 36, the images distorted by the application of the transformation matrix M were clocked as a function of the time estimated by the estimation device 5.
According to a fourth embodiment of the invention, not shown, the animation comprises the over-intensification or under-intensification of at least part of an image of the sequence of images. This over-intensification or under-intensification is achieved by selecting a sequence of images comprising images with over-intensified or under-intensified parts, as in the second and third embodiments.
According to a fifth embodiment of the invention, the detection device 4 is a GPS-type device for the geographic location of the vehicle, and the event is an indication of the direction to follow in order to complete an itinerary selected by the driver in his GPS system. The detection step 30 and the estimation step 32 are then carried out on the basis of information received from the GPS-type device for the geographic location of the vehicle.
According to a sixth embodiment of the invention implemented by the projection system 29 according to the second embodiment of the invention illustrated in FIG. 8, the processing unit 10 controls the imager 8 so that it captures an image of the driver seated in the motor vehicle. The captured image or images are transmitted to the processing unit 10.
Then, the processing unit 10 searches for the position of the driver in the captured image by image processing. In particular, the position of the driver's eyes is sought in the captured image. This image processing is carried out, for example, using edge detection. The position of the driver's eyes is then defined in a reference frame tied to the projector 20. This reference frame is called the projection reference frame Rp. It is illustrated in FIG. 4.
According to a particularly advantageous variant, the contrast profile of the projected pictogram is reinforced with respect to the average light environment of the background beam, on which or in which the pictogram is included.
To this end, the borders of the pictogram, from the outside thereof towards the inside and in at least one dimension (width or height) of the projection plane of the pictogram, alternate between at least two zones of different intensity with respect to the average intensity of the background beam, a first zone being of stronger or weaker intensity compared to this average intensity and the second zone being, respectively, of weaker or stronger intensity compared to this average intensity. In an alternative embodiment, the second zone constitutes the heart or central zone of the pictogram and is then bordered, at least in one dimension, by the first zone.
Thus, the perception by the driver or third parties of the message constituted by the projected pictogram is reinforced, the reaction time compared to the projected message is reduced and driving safety is in fact improved.
The intensity gradient and the intensity level applied may be constant or may vary along the pattern in a direction of the projection dimension considered (width or height, for example from left to right or from bottom to top respectively, corresponding to a projection from the near field of the vehicle towards the horizon). In addition, this variation can be static or dynamic, that is to say controlled according to the environment of the vehicle: for example, depending on the imminence of an event, the contrast can be dynamically reduced or increased so as to generate a ripple effect of the pattern, which will appear more or less distinctly in the background beam, and to draw the attention of the driver or of third parties to the imminence of the event corresponding to the projected pictogram (exit or turn arrow, collision alert, pedestrian crossing, etc.). This further improves driving safety.
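By way of a nonlimiting sketch, the border alternation described above can be rendered as follows; the mask-based approach, the names and the intensity factors are assumptions introduced here.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def add_contrast_border(picto_mask: np.ndarray, background_mean: float,
                        rim: int = 2, boost: float = 1.5, cut: float = 0.5):
    """From the outside of the pictogram towards the inside: a first zone
    brighter than the average background-beam intensity, then a second,
    darker zone (the roles may be swapped). Factors are illustrative."""
    mask = picto_mask.astype(bool)
    img = np.full(mask.shape, background_mean, dtype=float)
    outer = binary_dilation(mask, iterations=rim) & ~mask
    core = binary_erosion(mask, iterations=rim)
    inner = mask & ~core
    img[outer] = background_mean * boost  # first zone: stronger intensity
    img[inner] = background_mean * cut    # second zone: weaker intensity
    img[core] = background_mean * boost   # pictogram body stays bright
    return img
```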
These different embodiments can be combined.
Patent application number PCT/EP2016/071596 is copied below.
This patent application number PCT / EP2016 / 071596 and its various applications will be better understood on reading the description which follows and on examining the figures which accompany it.
- Figure 9 shows a flow diagram of the steps of the method of projecting at least one image onto a projection surface according to a non-limiting embodiment of patent application number PCT / EP2016 / 071596;
- Figure 10 shows a motor vehicle comprising a lighting device suitable for implementing the projection method of Figure 9 according to a non-limiting embodiment of patent application number PCT / EP2016 / 071596;
- Figure 11 shows a light intensity map established according to a step of the projection method of Figure 9 according to a non-limiting embodiment of patent application number PCT / EP2016 / 071596;
- Figure 12 shows a projector which integrates a light module and the direction of a light ray of a light module of said projector, said light module being adapted to perform at least one step of the projection method of Figure 9;
- Figure 13 shows a flowchart illustrating sub-steps of a step of establishing a luminance map of the projection method of Figure 9 according to a non-limiting embodiment;
- Figure 14 shows the projector of Figure 12 and a point of impact of the light beam on the ground;
- Figure 15 shows the projector of Figure 14 and the illumination of the point of impact;
- Figure 16 indicates the site angle and the azimuth angle taken into account in a step of calculating the observation position of an observer of the projection method of Figure 9;
- Figure 17 schematically shows an impact point, an observation position of an observer outside the motor vehicle in an image reference frame and an image to be projected by the projection method of Figure 9;
- Figure 18 illustrates an image projected according to the projection method of Figure 9, image which is seen from the point of view of the driver of said motor vehicle but which is only understandable by an observer outside the motor vehicle;
FIG. 19 illustrates an image projected according to the projection method of FIG. 9, an image which is seen from the point of view of a rear passenger of said motor vehicle but which is only understandable by an observer outside the motor vehicle;
FIG. 20 illustrates an image projected according to the projection method of FIG. 9, an image which is seen from the point of view of said observer outside the motor vehicle and which is understandable by said observer outside the motor vehicle;
- Figure 21 shows a flowchart illustrating sub-steps of a step of defining the coordinates of a projection of a luminance point of the projection method of Figure 9 according to a non-limiting embodiment of the patent application PCT / EP2016 / 071596 number;
- Figure 22 shows schematically the point of impact, the observation position of the observer outside the motor vehicle and the image to be projected of Figure 17 by the projection method of Figure 9 and the coordinates of the intersection between the point of impact and the image to be projected;
- Figure 23 shows schematically the point of impact, the observation position of the observer outside the motor vehicle and the image to be projected from Figure 22 normalized; and
- Figure 24 schematically represents pixels of the image to be projected in Figure 22; and
- Figure 25 illustrates a lighting device suitable for implementing the projection method of Figure 9.
DESCRIPTION OF EMBODIMENTS OF PATENT APPLICATION PCT/EP2016/071596
Identical elements, by structure or by function, appearing in different figures keep, unless otherwise specified, the same references.
The projection method MTH, for a motor vehicle, of at least one image onto a projection surface by means of a light module ML according to patent application number PCT/EP2016/071596 is described with reference to FIGS. 9 to 25.
By motor vehicle is meant any type of motor vehicle.
As illustrated in FIG. 9, the MTH process comprises the steps of:
- detect an observation position PosO1 of an observer O in a light module reference frame RP (illustrated step DET_POS(O, PosO1, RP));
- calculate the observation position PosO2 of the observer O in an image reference frame RI (illustrated step DET_POS (O, PosO2, RI));
- projecting said image Ip onto said projection surface S as a function of said observation position PosO2 of the observer O in said image reference frame RI, said image Ip being integrated into said light beam Fx of the light module ML (illustrated step PROJ ( Fx, Ip, S)).
As illustrated in FIG. 9, the projection of said image Ip comprises the substeps of:
- 3a) from a light intensity map CLUX of the light module ML comprising a plurality of intensity indicators pf, calculate a luminance map CLUM on the projection surface S resulting in luminance points pl (illustrated step CALC_CLUM(CLUX, S, pl));
- 3b) calculate the position PosL2 of each luminance point pl in the image reference frame RI (illustrated step CALC_POS (pl, PosL2, O, RI));
- 3c) from its position PosL2 and from the observation position PosO2 of the observer O in said image reference frame RI, define the coordinates ply, plz of the projection plr of each luminance point pl in the image plane P1 of said image to be projected Ip (illustrated step DEF_PLR (plr, P1, PosL2, PosO2));
- 3d) if said projection plr belongs to said image to be projected Ip, define the coordinates lig, col of the corresponding pixel Pix (illustrated step DEF_PIX(pl(lig, col), ply, plz));
- 3e) for each projection plr of a luminance point pl belonging to said image to be projected Ip, correct the intensity value Vi of the corresponding intensity indicator pf as a function of the color Co of the corresponding pixel Pix (illustrated step MOD_PF(pf, Vi, Pix, Co)).
Note that the first step 3a in particular, as well as step 3b in particular, can be carried out prior to the iterations of the following steps. More generally, the steps described are not necessarily carried out sequentially, i.e. in the same iteration loop, but can be the subject of different iterations, with different iteration frequencies.
The step of projecting the image Ip further comprises a sub-step 3f) of projecting onto the projection surface S the light beam Fx with the intensity values Vi corrected for the intensity indicators pf (step PROJ(ML, Fx, Vi, pf) illustrated in FIG. 9).
The MTH projection method is suitable for projecting one or more Ip images at the same time. In the following description, the projection of a single image is taken as a non-limiting example.
Note that the projection can be done at the front of the motor vehicle V, at the rear or on its sides.
The light module ML makes it possible to produce a light beam Fx, said light beam Fx comprising a plurality of light rays Rx which follow different directions.
The light module ML makes it possible to modify the intensity value Vi of each intensity indicator pf, it is therefore a digital light module. As described below, the image to be projected Ip is thus integrated into the light beam Fx of the light module ML.
Note that the light intensity map CLUX is discretized so that it can be used digitally.
The light module ML is considered to be a point light source from which the space around said light source is discretized. Thus, an intensity indicator pf is a point in the space illuminated by the light module ML which has a given direction dir1 and a given intensity value Vi provided by the light module ML in said direction dir1. The direction dir1 is given by the two angles θ and δ (described below).
In a nonlimiting embodiment, the projection surface S is the ground (referenced S1) or a wall (referenced S2). The image that will be projected Ip on the floor or the wall is thus a 2D image.
In a nonlimiting embodiment illustrated in FIG. 10, a lighting device DISP of the motor vehicle V comprises at least one light module ML and is suitable for implementing the projection method MTH. In the nonlimiting example illustrated, the lighting device is a projector.
As will be seen below, the observation position of the observer O is taken into account for the projection of the image to be projected Ip. For this purpose, the image to be projected Ip is distorted so that it can be understood by the observer in question, whether that is the driver, a front or rear passenger of the motor vehicle, or an observer outside the motor vehicle.
We thus place ourselves from the point of view of the observer O for whom we want to project the image Ip. From this observer's point of view, the image Ip will not be distorted.
From a point of view different from that of said observer, the image Ip will be distorted.
In nonlimiting exemplary embodiments, an observer O outside the vehicle is a pedestrian, a driver of another motor vehicle, a cyclist, a biker, etc. It can be located at the front, rear or on one side of the V motor vehicle.
In a nonlimiting embodiment, the projected image Ip comprises at least one graphic symbol. This graphic symbol will make it possible to improve the comfort and/or the safety of the observer O. In a nonlimiting example, if the observer O is the driver of the motor vehicle, the graphic symbol may represent the speed limit not to be exceeded on the road, a STOP graphic symbol when the motor vehicle is reversing and an obstacle (pedestrian, wall, etc.) is too close to the motor vehicle, an arrow which helps him when the motor vehicle is about to turn onto a road, etc.
In a nonlimiting example, if the observer O is outside the motor vehicle such as a pedestrian or a cyclist, the graphic symbol may be a STOP signal to indicate to him that he must not cross in front of the motor vehicle because the latter will restart.
In a nonlimiting example, if the observer O is outside the motor vehicle, such as a following motor vehicle, the graphic symbol may be a STOP signal when the motor vehicle in question brakes, so that the driver of the following vehicle brakes in turn. In another nonlimiting example, if the observer O is outside the motor vehicle and is in a motor vehicle which is pulling out to overtake, the graphic symbol may be a warning symbol to indicate to said motor vehicle that it should pull back in because another motor vehicle is arriving in the opposite direction.
As shown in Figure 10, the projected image Ip is a STOP symbol. It is oriented on the projection surface S, here the ground in the nonlimiting example illustrated, so that the observer O can see and understand this STOP symbol. In the nonlimiting example illustrated, the projection is made in front of the motor vehicle V and the observer O is outside the motor vehicle V.
The different stages of the MTH projection process are described in detail below.
• 1) Detection of the observation position of the observer in the light module reference frame RP
To detect the observation position PosO1 of the observer O in the light module repository RP, it is necessary to detect the position of the observer O himself in the light module repository RP. For this purpose, in a nonlimiting example, a camera (not shown) is used. It is suitable for detecting and calculating the position of an observer O which is outside the motor vehicle V.
In nonlimiting embodiments, the camera is replaced by a radar, or a lidar.
For an observer O who is inside the motor vehicle (driver or passengers), reference observation positions are considered, for example. Thus, in a nonlimiting example, it is considered that the driver's eye is at the position PosO1 (1.5; -0.5; 1) (expressed in meters) relative to the light module ML in the case where the motor vehicle is a car. Of course, if the motor vehicle is a truck, the position of the eye relative to the light module ML is different.
For an outside observer, the observation position PosO1, which corresponds to the position of his eye, can be deduced from the position of said observer O. For example, his eye is located approximately 1.5 meters above the ground.
Since such detection of the position of the observer is known to a person skilled in the art, it is not described in detail here.
• 2) Calculation of the observation position of the observer in the image reference frame RI
The observation position PosO1 of the observer O was previously determined in the light module reference frame RP. It will be used for the change of reference frame described below.
This step performs a change of coordinate system: we go from the light module reference frame RP (defined by the axes pjx, pjy, pjz) to the image reference frame RI (defined by the axes Ix, Iy, Iz) of the image to be projected Ip.
The calculation of the observation position PosO2 of the observer O in the image reference frame RI is based on at least one transformation matrix M from the light module frame of reference RP to said image frame of reference RI.
In a nonlimiting embodiment, the position PosO2 is of the form of a column vector in homogeneous coordinates:

PosO2 = (posO2.x ; posO2.y ; posO2.z ; 1)
In a nonlimiting embodiment, said at least one transformation matrix M takes into account at least one of the following parameters:
- the Poslp position of the image to be projected Ip in the RP light module repository;
- Rotlp rotation of the image to be projected Ip in the RP light module repository;
- the scale of the image to be projected Ip
The position Poslp of the image to be projected Ip is deduced from the light module reference frame RP according to a translation along the three axes pjx, pjy, pjz of said light module reference frame RP.
In a nonlimiting embodiment, the transformation matrix M is of the form:

M = | a  b  c  t |
    | d  e  f  u |
    | g  h  i  v |
    | 0  0  0  1 |

where a, e and i are the affinity terms; b, c, d, f, g and h the rotation terms; and t, u and v the translation terms.
The affinity terms a, e and i make it possible to enlarge or shrink the image Ip; for example, the total size (homothety) is increased by 50% or reduced by 20% by increasing by 50%, respectively reducing by 20%, the values of a, e and i. A value of a, e and i equal to 1 corresponds to a predetermined reference dimension of the projected image, in the directions pjx, pjy and pjz respectively. The enlargement or shrinking factors can also be applied in only one of the dimensions, or in two of the dimensions (non-homothetic), and different factors can be applied to some dimensions compared to others, in particular a different factor to each dimension. In this way, depending on the position PosO2 of the eye of the observer O, it can be decided to project an image that appears to the observer O larger or smaller, overall or along some of the dimensions, depending on whether the values of a, e and i are increased or decreased.
Note that the rotation Rotlp depends on three angles, which are as follows:
- β: azimuth (which indicates whether the image to be projected is located to the right or to the left of the observer, for example when the latter looks to the right or to the left);
- Ω: tilt (which indicates the inclination of the image to be projected Ip, for example when the observer tilts his head to the side; this amounts to tilting the image Ip);
- ε: site (which indicates the effect that one wants to give to the graphic symbol of the image Ip).
FIG. 16 illustrates the site and azimuth angles and the plane P1 of the image to be projected Ip.
We thus have PosO2 = M * PosO1.
PosO1 is the observation position of the observer O, used for the projection of the image Ip, in the light module reference frame RP.
PosO2 is the observation position of the observer O, used for the projection of the image Ip, in the image reference frame RI.
Thus, the position and rotation of the image to be projected Ip are adapted as a function of the observer O. In this way, the image to be projected Ip will be understandable by the observer O. This gives an affine deformation of the image from the desired point of view, called anamorphosis.
Thus, for the eye of the driver of a car, the projected image Ip is not distorted. In the same way, for the eye of a truck driver, although it is positioned well above the light module reference frame RP, the projected image Ip is also not distorted. Finally, for an outside observer, the projected image Ip is also not distorted.
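As a nonlimiting numerical sketch, and assuming homogeneous coordinates with the matrix layout shown above, the change of reference frame PosO2 = M * PosO1 can be written as follows; the numeric values are illustrative only.

```python
import numpy as np

def transformation_matrix(scale=(1.0, 1.0, 1.0),
                          rotation=np.eye(3),
                          translation=(0.0, 0.0, 0.0)) -> np.ndarray:
    """4x4 homogeneous matrix M of the layout shown above: a, e, i carry
    the affinity (scale) terms, the 3x3 block the rotation Rotlp, and
    t, u, v the translation Poslp."""
    M = np.eye(4)
    M[:3, :3] = rotation @ np.diag(scale)
    M[:3, 3] = translation
    return M

# driver's eye in the light module reference frame RP, homogeneous form
PosO1 = np.array([1.5, -0.5, 1.0, 1.0])
M = transformation_matrix(scale=(1.5, 1.5, 1.5))  # 50% homothety example
PosO2 = M @ PosO1                                  # PosO2 = M * PosO1
```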
Note that the projected image Ip can thus be clearly visible to the observer since its projection depends on the observation position of the observer O and that its scale can be adjusted as desired. Thus, even if it is far from the motor vehicle, the observer O will still be able to understand and see the graphic symbol (s) of the projected image Ip.
• 3) Projection of the image Ip onto the projection surface
This step includes the following sub-steps:
• 3a) Calculation of a luminance mapping CLUM
In a nonlimiting embodiment, the light intensity map CLUX is stored in a memory. It will have been established beforehand, during product design, using a goniophotometer (not shown). The goniophotometer is, for example, of type A, that is to say the rotational movement about the horizontal axis supports the rotational movement about the vertical axis. The light intensity map CLUX gives the intensity indicators pf of the light module ML considered as a point light source. The direction dir1 of a light ray Rx starting from the light module ML is expressed as a function of two angles θ and δ and is given by the following formula:

direction = (cosθ * cosδ ; sinθ ; -cosθ * sinδ)

with δ the vertical rotation V of the goniophotometer, and θ the horizontal rotation H of the goniophotometer.
The light intensity map CLUX thus comprises a plurality of intensity indicators pf whose direction dir1 is given by the above formula, with θ the horizontal angle of the intensity indicator pf and δ its vertical angle. The light intensity map CLUX is shown in FIG. 11. An intensity indicator pf with polar coordinates δ = 0V, θ = 0H can be seen there. The light intensity map CLUX thus makes it possible to determine an intensity I(θ, δ) for a given direction.
We thus have:
CLUX = {(δi, θj, Ii,j), (i, j) ∈ [1, M] × [1, N]}, where M and N are the numbers of discretization points (or intensity indicators) of the light beam Fx in the vertical and horizontal directions respectively.
An intensity indicator pf is therefore defined by its direction dir1 and its intensity I(θ, δ).
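For illustration only, a discretized CLUX map can be read back with a simple lookup; the nearest-neighbour scheme below is an assumption, the application not specifying any interpolation.

```python
import numpy as np

def intensity(clux: np.ndarray, deltas: np.ndarray, thetas: np.ndarray,
              delta: float, theta: float) -> float:
    """Look up I(theta, delta) in the discretized M x N CLUX map, taking
    the nearest stored intensity indicator pf."""
    i = int(np.argmin(np.abs(deltas - delta)))  # vertical index, 1..M
    j = int(np.argmin(np.abs(thetas - theta)))  # horizontal index, 1..N
    return float(clux[i, j])

# toy 3 x 3 map: strongest intensity straight ahead, fading at the edges
deltas = np.radians([-10.0, 0.0, 10.0])
thetas = np.radians([-20.0, 0.0, 20.0])
clux = np.array([[200.0, 400.0, 200.0],
                 [500.0, 1000.0, 500.0],
                 [200.0, 400.0, 200.0]])
assert intensity(clux, deltas, thetas, 0.0, 0.0) == 1000.0
```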
FIG. 12 illustrates a lighting device DISP comprising a light module ML and the direction of a light ray Rx.
The calculation of the CLUM luminance mapping on the projection surface S comprises the following substeps illustrated in FIG. 13.
- i) a first calculation of the position POSpf of said intensity indicators pf on the projection surface S resulting in impact points pi (illustrated step CALC_POSF (pf, POSpf, pi));
- ii) a second calculation of a CECL illumination mapping of said impact points pi (illustrated step CALC_CECL (pi, CECL));
- iii) a third calculation of the luminance mapping CLUM of said impact points pi from the illumination mapping CECL, resulting in said luminance points pl (illustrated step CALC_CLUM(pi, CECL)).
The different sub-steps are detailed below.
It will be noted that the calculations below are carried out as a function of the projection surface S (ground S1 or wall S2).
o sub-step i)
The first calculation is based on:
- the position POSpj of the light module ML in the Cartesian coordinate system x, y, z; and
- the direction dir1 of said intensity indicators pf described above.
For the ground S1, the position POSpf1 of the intensity indicator pf on the ground is thus obtained in the Cartesian coordinate system x, y, z with the following formula:
POSpf1 = POSpj - (POSpj.z / dir1.z) * dir1
with POSpj.z the z value of the position of the light module ML (height of the light module above the ground) and dir1.z the z value of the direction vector of the light ray Rx.
For the wall S2, the position POSpf2 of the intensity indicator pf on the wall is thus obtained in the Cartesian coordinate system x, y, z with the following formula:
POSpf2 = POSpj - (D / dir1.x) * dir1
With
- dir1.x, the x value of the direction vector of the light ray Rx;
- D, the distance between the light module ML and the wall. In a nonlimiting example, D is equal to 25 meters.
This gives an impact point pi (at position POSpf1 or POSpf2) on the ground S1 or on the wall S2. FIG. 14 illustrates a nonlimiting example of an impact point pi on a projection surface S which is the ground S1.
o sub-step ii)
Once the point of impact pi on the ground S1 or on the wall S2 has been determined, the illumination E of this point of impact pi is calculated from the intensity I(θ, δ) of the intensity indicator pf determined previously.
For the ground S1, the illumination E_R of the point of impact pi on the ground is thus obtained with the following formula:
E_R = -(I(θ, δ) / dist1²) * cosθ * sinδ
with dist1 the distance between the point of impact pi and the light module ML.
For the wall S2, the illumination E_M of the point of impact pi on the wall is thus obtained with the following formula:
E_M = (I(θ, δ) / dist1²) * cosθ * cosδ
with dist1 the distance between the point of impact pi and the light module ML.
FIG. 15 illustrates the illumination E (delimited by a dotted circle) of an impact point pi on a projection surface S which is the ground S1.
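By way of a nonlimiting sketch, sub-steps i) and ii) can be combined for the ground S1 as follows; the direction vector follows the formula reconstructed above, the signs follow the document's convention for θ and δ, and everything else is illustrative.

```python
import numpy as np

def ground_impact_and_illumination(pos_pj: np.ndarray, theta: float,
                                   delta: float, I: float):
    """Sub-step i): intersect the ray with the ground plane z = 0; then
    sub-step ii): E_R = -(I / dist1**2) * cos(theta) * sin(delta)."""
    dir1 = np.array([np.cos(theta) * np.cos(delta),
                     np.sin(theta),
                     -np.cos(theta) * np.sin(delta)])
    pos_pf1 = pos_pj - (pos_pj[2] / dir1[2]) * dir1   # impact point pi
    dist1 = float(np.linalg.norm(pos_pf1 - pos_pj))
    E_R = -(I / dist1**2) * np.cos(theta) * np.sin(delta)
    return pos_pf1, E_R

# headlamp 0.65 m above the ground, ray aimed 2 degrees downwards
pi_pos, E = ground_impact_and_illumination(
    np.array([0.0, 0.0, 0.65]), theta=0.0, delta=np.radians(2.0), I=1000.0)
```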
o sub-step iii)
The third calculation is based on:
- the illumination E of said impact points pi;
- an impact-point/eye position vector between the position of an impact point pi of the CECL illumination mapping and the observation position PosO1 of the observer O (in the light module reference frame RP); and
- a light scattering function d.
d is a known function which makes it possible to calculate the scattering of light by the projection surface S. Note that it varies as a function of the nature of the projection surface S. For example, the function d is different if the surface is asphalt, concrete, tar, paving stones, etc.
For the ground S1, the luminance L_R of the point of impact pi on the ground is thus obtained with the following formula:

L_R = E_R * d * (Roeil.z / ||Roeil||)

with Roeil the position/eye vector between the point of impact pi and the observation position PosO1, and Roeil.z / ||Roeil|| the z value of the normalized Roeil vector.

For the wall S2, the luminance L_M of the impact point pi on the wall is thus obtained with the following formula:

L_M = E_M * d * (Moeil.x / ||Moeil||)

with Moeil.x / ||Moeil|| the x value of the normalized Moeil vector.
In a nonlimiting embodiment, it is assumed that the projection surface S emits uniformly in all directions. In this case, the diffusion parameter d does not depend on the angles δ and Θ.
In a nonlimiting embodiment, the projection surface S is considered to be a Lambertian diffuser (for example a gray body). There is then a constant luminance on the projection surface S proportional to the illumination E, and in this case the diffusion function d is a cosine. We then have:

L_R = (a / π) * E_R

and

L_M = (a / π) * E_M

where a is the albedo of the material.

In nonlimiting examples, the albedo of asphalt is 7%, and that of concrete varies between 17% and 27%.
• 3b) Calculation of the positions of the luminance points pl in the image reference frame RI
The position PosL1 of a luminance point pl was previously determined in the light module reference frame RP. It will be used for the change of reference frame described below.
In the same way as for the calculation of the observation position PosO2 of the observer O, this step performs a change of reference frame: we go from the light module reference frame RP (defined by the axes pjx, pjy, pjz) to the image reference frame RI (defined by the axes Ix, Iy, Iz) of the image to be projected Ip.
The calculation of the position PosL2 of a luminance point pl in the image reference frame RI is based on said at least one transformation matrix M from the light module reference frame RP to said image reference frame RI (transformation matrix M described above).
In a nonlimiting embodiment, the position PosL2 is of the same form as the position PosO2 described above:

PosL2 = (posL2.x ; posL2.y ; posL2.z ; 1)
It will be noted that the transformation matrix M has been described during the calculation of the observation position PosO2 of the observer O in the image reference frame RI. It is therefore not detailed again here.
We thus have PosL2 = M * PosL1.
PosL1 is the position of the luminance point pl in the light module reference frame RP.
PosL2 is the position of the luminance point pl in the image reference frame RI.
FIG. 17 illustrates the image to be projected Ip as well as the image repository RI. We can also see the luminance point pl and the eye of the observer O (which corresponds to the observation position) with their respective positions PosL2 and PosO2 defined in the image reference frame RI.
Note that although the image Ip projected onto the floor or wall is in 2D (two dimensions), a 3D effect (three dimensions), that is to say a perspective or trompe-l'œil effect, can be obtained by adjusting the site angle ε seen above. The observer O (whether the driver, a passenger, or an outside observer) will see the image in perspective. For this purpose, the site angle ε is greater than -90°.
In particular, it is greater than -90 ° and less than or equal to 0 °. The 3D effect is thus visible between 0 and up to -90 ° (not included).
Note that at -90° the image Ip is placed flat on the ground and therefore has no 3D effect.
Figures 18 to 20 illustrate a projected image Ip which is a pyramid. An observer O who is outside the motor vehicle, such as a pedestrian, is taken as a nonlimiting example. The pyramid is visible from three particular points of view, the point of view of the driver (FIG. 18), the point of view of a rear passenger (FIG. 19) and the point of view of the pedestrian (FIG. 20), but is seen with a 3D effect from only one point of view. In the nonlimiting example illustrated, only the pedestrian sees the pyramid in 3D (as illustrated in FIG. 20). From the driver's or the passenger's point of view, the pyramid appears distorted.
In a nonlimiting variant, the site angle ε is equal to 0. The observer O looks straight ahead. In this case, the observer O will see the image, namely the pyramid here, as if it were standing upright.
In another nonlimiting variant, the site angle ε is substantially equal to -35°. This gives a 3D effect raised up in the direction of the road.
The plane P1 of the image Ip is thus perpendicular to the direction of observation of the observer O.
If the site angle ε is different from -90 °, the pyramid will be visible in 3D but more or less inclined.
• 3c) Define the coordinates ply, plz of the projection plr of a luminance point pl
As illustrated in FIG. 21, in a nonlimiting embodiment, the definition of the coordinates ply, plz of a projection plr of a luminance point pl comprises the sub-steps of:
- i) calculate the point of intersection Int between (illustrated sub-step CALC_INT (PosO2, PosL2, P1)):
- the line V (PosO2, PosL2) passing through the observation position PosO2 in said image reference frame RI of the observer O and through the position PosL2 in said image reference frame RI of said luminance point pl; and
- the image plane P1 of the image to be projected Ip.
- ii) determine the coordinates ply, plz of said intersection point Int from the dimensions L1, H1 of said image to be projected Ip (illustrated sub-step DEF_COORD(Int, L1, H1)).
These two substeps are described below.
o sub-step i)
In the image reference frame RI, the point of intersection Int between the line (eye, luminance point) and the image plane P1 is the point on the line (eye, luminance point) for which Ix = 0. We thus have:
Int = PosO2 - (PosO2.x / V(PosO2, PosL2).x) * V(PosO2, PosL2)
With
- V(PosO2, PosL2) the vector representing the line (eye, luminance point) in the image reference frame RI;
- V(PosO2, PosL2).x the x value of this vector;
- Int the point of intersection between the line (eye, pl) and the image to be projected Ip in the image reference frame RI; the point of intersection Int is thus the projection plr of the luminance point pl onto the image plane P1 of the image to be projected Ip;
- PosL2.x the x value of the position of the luminance point pl;
- PosO2.x the x value of the observation position of the observer.
Note that we assume that the observation position of the observer O is placed on the axis Ix.
FIG. 22 illustrates the image to be projected Ip, the point of intersection Int which corresponds to the projection plr of the luminance point pl onto said plane P1, and the vector V(PosO2, PosL2) (illustrated in dotted lines). Note that the projection plr is of the central type, so as to produce a conical perspective effect. Hereinafter, the terms plr projection and central projection plr are used interchangeably.
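As a nonlimiting sketch, this intersection can be computed as follows; the coordinates are illustrative, and it is assumed that PosO2 does not lie in the plane Ix = 0.

```python
import numpy as np

def central_projection(pos_o2: np.ndarray, pos_l2: np.ndarray) -> np.ndarray:
    """Intersection Int of the line (eye, luminance point) with the image
    plane P1, i.e. the point of that line where Ix = 0:
    Int = PosO2 - (PosO2.x / V.x) * V, with V = PosL2 - PosO2."""
    V = pos_l2 - pos_o2
    return pos_o2 - (pos_o2[0] / V[0]) * V

# eye at x = 2 m, luminance point on the ground at x = 5 m
Int = central_projection(np.array([2.0, 0.3, 1.2]), np.array([5.0, 1.0, 0.0]))
assert abs(Int[0]) < 1e-12   # Int lies in the plane Ix = 0
```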
o sub-step ii)
The coordinates ply, plz of the central projection plr of the luminance point pl in the image reference frame RI correspond to the coordinates of the position of the intersection point Int determined previously, along the Iy (horizontal) axis and along the Iz (vertical) axis. In a nonlimiting embodiment, they are expressed in meters.
We deduce the coordinates of this point in the frame of FIG. 22 by the following formulas:
ply = (Int.y + (L1 / 2)) / L1
plz = Int.z / H1
With,
- L1 the width of the image to be projected Ip (expressed in meters in a nonlimiting example);
- H1 the height of the image to be projected Ip (expressed in meters in a non-limiting example);
- Int.y the y value of the intersection point;
- Int.z the z value of the intersection point.
FIG. 22 illustrates the definition of the ply and plz coordinates in meters in the image reference frame RI.
Note that L1 and H1 are input parameters of the MTH projection process.
This sub-step makes it possible to determine subsequently whether the coordinates ply, plz belong to the image to be projected Ip (they must then be between 0 and 1) and therefore whether the central projection plr of the luminance point pl belongs to the image to be projected Ip.
To this end, in a nonlimiting embodiment, the image to be projected Ip and the coordinates of the projection plr thus calculated are normalized. This simplifies the test of belonging to the image to project Ip.
This gives a normalized frame of reference IX (vertical axis), IY (horizontal axis), as illustrated in FIG. 23. The value of the coordinates ply, plz of the projection plr is now between 0 and 1. In the example illustrated, the Iy and Iz axes have become the IX and -IY axes respectively. Image dimensions H2, L2 between 0 and 1 are thus obtained.
Figure 23 illustrates the definition of the ply and plz coordinates, as unitless values, in the image reference frame RI.
Note that the size (L1, H1) of the image to be projected Ip can be defined in this step 3c) or in the step involving the transformation matrix M.
Since the dimensions L1, H1 (and therefore L2, H2), the position and the rotation of the image to be projected Ip are known (they are input parameters of the MTH projection method), one can easily determine, via its coordinates ply, plz, whether or not the projection plr belongs to the image to be projected Ip.
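Continuing the same sketch (same assumptions, with L1 and H1 in meters), sub-step ii) and the membership test can be written as:

```python
def normalized_projection(int_point, L1: float, H1: float):
    """Normalized coordinates (ply, plz) of the central projection plr,
    following ply = (Int.y + L1/2) / L1 and plz = Int.z / H1; plr belongs
    to the image to be projected Ip iff both values lie within [0, 1]."""
    ply = (int_point[1] + L1 / 2.0) / L1
    plz = int_point[2] / H1
    inside = 0.0 <= ply <= 1.0 and 0.0 <= plz <= 1.0
    return ply, plz, inside
```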
• 3d) Define the coordinates of the corresponding pixel Pix
The definition of the line (lig) and column (col) coordinates of the pixel Pix is carried out for each projection plr (of a luminance point pl) which belongs to the image to be projected Ip, namely which is located inside the rectangle L2*H2 of the image to be projected Ip, as verified in step 3c-ii).
Thus, if the projection plr belongs to the image to be projected Ip, the coordinates of the corresponding pixel Pix are calculated as follows.
lig = -plz * L2
col = ply * H2
With,
- lig, the line of the pixel;
- col, the column of the pixel;
- L2 the width of the image to be projected Ip (this time expressed in pixels);
- H2 the height of the image to be projected Ip (this time expressed in pixels);
- ply the coordinate of the projection plr along the axis IX;
- plz the coordinate of the projection plr along the axis IY.
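A sketch of step 3d), reproducing the two formulas exactly as printed (the minus sign and the pairing of L2 with lig and of H2 with col follow the text and the flipped -IY axis of figure 23; L2_px and H2_px are our names for the pixel dimensions):

```python
def pixel_coordinates(ply: float, plz: float, L2_px: int, H2_px: int):
    """Row (lig) and column (col) of the pixel Pix for a projection plr
    already known to lie inside the image to be projected Ip."""
    lig = int(-plz * L2_px)   # lig = -plz * L2, minus sign from the -IY axis
    col = int(ply * H2_px)    # col = ply * H2
    return lig, col
```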
With the coordinates lig, col of the pixel Pix, we can recover the value of its color Co in the image that we want to project.
• 3e) Correct the intensity value of the corresponding intensity indicator pf
In a nonlimiting example, the value is between 0 and 255. It is thus possible to go from white to black through several shades of gray, as illustrated in FIG. 24. By the term white must be understood any single color, and by the expression shades of gray must be understood the shades obtained from said single color between its lightest shade and black. Thus the projected image is not necessarily composed of the color white and shades of gray associated with Co values between 0 and 255, but of more or less dark shades of any color visible to the human eye. Advantageously, that color is white, yellow, blue, red or amber.
We then correct the intensity value Vi of the corresponding intensity indicator pf.
Note that this is possible because the light module ML is digitized.
In a first nonlimiting embodiment, the correction is carried out as follows:
Vi = σ * Vi0 * Co / 255.
With:
- Vi0 the initial intensity value of the intensity indicator pf of the light module,
- Co the color of the corresponding pixel Pix; and
- σ a maximum over-intensification factor.
In a second non-limiting embodiment, the correction is carried out as follows:
Vi = φ * Co, with φ a luminance coefficient. This performs a substitution of the luminances, which makes it possible to display the image on a background independent of the basic light distribution.
This step is carried out for all the luminance points pl whose central projection plr belongs to the rectangle L2 * H2 of the image to be projected Ip.
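Both correction embodiments can be sketched as one helper (a sketch under the definitions above, not the patent's implementation; sigma and phi stand for the factors σ and φ):

```python
def correct_intensity(vi0, co, sigma=1.0, phi=None):
    """Corrected intensity value Vi of an intensity indicator pf.

    First embodiment:  Vi = sigma * Vi0 * Co / 255  (modulates the base beam)
    Second embodiment: Vi = phi * Co                (substitutes the luminances)
    """
    if phi is not None:
        return phi * co
    return sigma * vi0 * co / 255.0
```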
Thus, the light module ML can project onto the projection surface S the light beam Fx comprising the light rays Rx with the intensity values Vi corrected for the intensity indicators (step 3f), illustrated in FIG. 9 as PROJ(ML, Fx, pf, Vi). This displays the correct color Co for the intensity indicator considered. In this way, the image to be projected Ip is integrated into the light beam Fx of the light module ML (since it is produced by said light module ML itself) and is projected onto the projection surface S with the right colors.
Thus, as a function of the desired color Co of a pixel Pix, a correction factor is applied to the intensity value Vi of the corresponding intensity indicator pf. One can thus obtain intensity indicators whose color does not depend on the light intensity of the light beam Fx itself. For example, the projected pyramid illustrated is of uniform color.
In the case of a light source independent of the light module ML which projects said pyramid superimposed on said light beam, this would not be the case. The pixels of the image would be more or less illuminated depending on the distribution of the light intensity of said light beam. Their color would thus vary according to the light intensity of said light beam.
Furthermore, the fact that the image to be projected Ip is integrated into said light beam Fx, rather than superimposed on it, makes it possible to obtain a better contrast of the image on the projection surface S than when using an independent light source. In the case of an independent light source, the light beam also illuminates the projected image, which therefore appears lighter in color.
It should be noted that the color value Co of a pixel, or of a series of pixels corresponding to predetermined parts of the projected image, can also be used to enhance the 3D effect. For example, with reference to FIG. 12, the pixels corresponding to the face F1 of the pattern of the projected image and those corresponding to the face F2 of the pattern of the projected image, may include specific and different color values Co. Thus the face F1 appears brighter than the face F2 or vice versa depending on whether the value of the color Co corresponding to the pixels making up the face F1 is higher or lower than that corresponding to the pixels making up the face F2. The value of the color Co corresponding to the pixels making up the face F1 and / or F2 can also vary so as to produce a gradient effect, for example from one edge to the other of the face F1 and / or F2, making it possible to further enhance the 3D effect.
It is possible to obtain multicolored images using several systems operating according to the method described above and each emitting a visually different color. The images projected by each system are then calculated to project onto the projection surface S in a superimposed manner in order to obtain a global multicolored projected image.
Note that since the projection of the image to be projected Ip depends on the observation position of the observer O, it is continuously updated as a function of the movement of the observer O relative to the motor vehicle when the latter is outside the motor vehicle, and as a function of the movement of the motor vehicle itself when the observer O is inside it. In a nonlimiting embodiment, the refresh frequency of the calculations presented above is thus a function of the speed of movement of the observer relative to the motor vehicle in the case of an outside observer: the higher the speed, the higher the refresh rate; the lower the speed, the lower the refresh rate.
In another nonlimiting embodiment, the refresh frequency of the calculations presented above is constant. In a nonlimiting example, the calculations are refreshed once per second.
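Purely as an illustration of such a refresh policy, one possible mapping from relative speed to refresh period is sketched below; the thresholds and the mapping are our assumptions, not values from the text:

```python
def refresh_period_s(relative_speed_mps, min_period=0.05, max_period=1.0):
    """Illustrative refresh period for the projection calculations: the
    faster the observer moves relative to the vehicle, the shorter the
    period (i.e. the higher the refresh rate)."""
    if relative_speed_mps <= 0.0:
        return max_period                     # static observer: ~1 s refresh
    return max(min_period, min(max_period, 1.0 / relative_speed_mps))
```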
Thus, since these calculations are performed in real time, it is not necessary to have a database of preloaded images of the same graphic symbol corresponding to the various imaginable observation positions of the observer relative to the motor vehicle (when outside) or within the motor vehicle (when inside).
The MTH projection method thus makes it possible to project one or more images Ip onto a projection surface S that is not only visible by an observer located inside or outside the motor vehicle but also understandable by him, since the projected image Ip is oriented in the direction of the gaze of said observer O.
It will be noted that in the case where several images Ip are projected at the same time, the combination of the different images with the light beam Fx is calculated before projecting the overall result.
In a nonlimiting embodiment, the MTH projection method is implemented by a lighting device DISP for a motor vehicle V.
In a nonlimiting embodiment, the lighting device DISP performs a photometric function such as a low beam, a high beam or a front, rear and/or side signaling function. Thus, the lighting device is located at the front of the motor vehicle or at the rear.
The lighting device DISP is illustrated in FIG. 25. It comprises a processing unit PR and at least one light module ML. In nonlimiting embodiments, the lighting device is a headlamp or a rear light.
The processing unit PR is suitable for:
- detecting an observation position PosO1 of an observer O in a light module reference frame RP (illustrated function DET_POS(O, PosO1, RP));
- calculating the observation position PosO2 of the eye of the observer O in an image reference frame RI (illustrated function DET_POS(O, PosO2, RI)).
Said lighting device DISP is adapted to project said image Ip onto said projection surface S as a function of said observation position PosO2 of the observer O in the image reference frame RI, said image Ip being integrated into said light beam Fx of the light module ML (illustrated function PROJ(Fx, Ip, S)).
For the projection of said image Ip onto said projection surface S, the processing unit PR is further adapted to:
- from a light intensity map CLUX of the light module ML comprising a plurality of intensity indicators pf, calculate a luminance map CLUM on the projection surface S resulting in luminance points pl (illustrated function CALC_CLUM(CLUX, S, pl));
- calculate the position PosL2 of each luminance point pl in the image reference frame RI (illustrated function CALC_POS(pl, PosL2, O, RI));
- from its position PosL2 and from the observation position PosO2 of the observer O in said image reference frame RI, define the coordinates ply, plz of the projection plr of each luminance point pl on the image plane P1 of said image to be projected Ip (illustrated function DEF_PLR(plr, P1, PosL2, PosO2));
- if said projection plr belongs to said image to be projected Ip, define the coordinates lig, col of the corresponding pixel Pix (illustrated function DEF_PIX(pl(lig, col), ply, plz));
- for each projection plr of a luminance point pl belonging to said image to be projected Ip, correct the intensity value Vi of the corresponding intensity indicator pf as a function of the color Co of the corresponding pixel Pix (illustrated function MOD_PF(pf, Vi, Pix, Co)).
For the projection of said image Ip onto the projection surface S, the light module ML is adapted to project onto the projection surface S the light beam Fx with the intensity values Vi corrected for the intensity indicators pf (illustrated function PROJ(ML, Fx, Vi, pf)).
It will be noted that the processing unit PR is integrated into the light module ML or is independent of said light module ML.
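Tying the earlier sketches together, the processing-unit pipeline could be outlined as follows (a hypothetical composition reusing the helper functions sketched above; the clamping of the pixel indices is a practical safeguard we added, not part of the described method):

```python
import numpy as np

def project_image_sketch(luminance_points, pos_o2, L1, H1, image_px, vi0, sigma=1.0):
    """For each luminance point pl (position PosL2 given in RI), compute its
    central projection plr, keep it if plr falls inside the image to be
    projected Ip, and correct the matching intensity value Vi."""
    H2_px, L2_px = image_px.shape             # pixel image: (rows, cols)
    corrected = list(vi0)                     # start from the initial values Vi0
    for i, pos_l2 in enumerate(luminance_points):
        try:
            Int = intersect_image_plane(pos_o2, np.asarray(pos_l2))
        except ValueError:
            continue                          # line parallel to the plane P1
        ply, plz, inside = normalized_projection(Int, L1, H1)
        if not inside:
            continue                          # plr outside the rectangle L2*H2
        lig, col = pixel_coordinates(ply, plz, L2_px, H2_px)
        # Clamp to valid indices; NumPy's negative row index reads from the
        # bottom of the array, matching the flipped -IY axis of figure 23.
        lig = int(np.clip(lig, -H2_px, H2_px - 1))
        col = int(np.clip(col, 0, L2_px - 1))
        co = image_px[lig, col]               # color Co of the pixel Pix
        corrected[i] = correct_intensity(vi0[i], co, sigma=sigma)
    return corrected
```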
Of course, the description of the patent application PCT/EP2016/071596 is not limited to the embodiments described above.
Thus, in another nonlimiting embodiment, a type B goniophotometer can also be used, that is to say one in which the rotational movement about the vertical axis supports the rotational movement about the horizontal axis.
Thus, in another nonlimiting embodiment, the processing unit PR can be offset relative to the lighting device DISP.
Thus, the step for calculating the observation position PosO2 in the image reference frame RI can be carried out before or at the same time as the calculation of the luminance position PosL2.
Thus, the motor vehicle V comprises one or more lighting devices DISP adapted to implement the MTH projection method described.
Thus, the patent application PCT/EP2016/071596 described above has in particular the following advantages:
- it projects an image comprising at least one graphic symbol which makes it possible to improve the comfort and / or safety of an observer who is inside or outside the motor vehicle;
- it makes it possible to project an image that is visible and understandable by a determined observer, because said projection depends on the position of said observer; the same projection method is thus applied to project an image understandable by the driver, by a pedestrian, or by the driver of a following vehicle, for example;
- it makes it possible to deform the image to be projected Ip so that it can be understood by a determined observer; an anamorphosis of the image is thus created, said anamorphosis depending on the observation position of the observer O;
- the observation position of the observer in the image reference frame is a function of the position and the rotation of said image to be projected. Thanks to the rotation, which depends in particular on an elevation angle, when the latter is adjusted in a particular way the observer has the impression of seeing an image in 3D;
- it makes it possible to integrate the information to be projected into the lighting beam Fx of the light module ML of the motor vehicle. It is not necessary to have an additional dedicated light source;
- thus, unlike prior art which displays an image directly on the rear light window of the motor vehicle, where it may appear too small at a certain distance, the invention allows an outside observer at a certain distance from said motor vehicle to see the image well, since the latter is projected as a function of the position of the observer and onto a projection surface which is not the window of a motor vehicle light. The dimensions of the image to be projected Ip are no longer limited to a small projection surface such as the light window;
- it makes it possible to propose a solution usable for a recipient of information who can only see the front or the sides of the motor vehicle, for example, unlike a solution which displays an image on the rear lights of the motor vehicle;
- it makes it possible to propose an alternative to displaying images on the rear lights of the motor vehicle;
- it makes it possible to propose an alternative to an image projection dedicated only to the driver of the motor vehicle.
"
Claims
[1" id="c-fr-0001]
1, - Method for projecting at least one image using a projection system (
[2" id="c-fr-0002]
2) of a motor vehicle comprising a detection device (4)
5 of an event, an estimation device (5) capable of estimating the time to reach said event, a processing unit (10) capable of generating a control signal, a projection device (12) capable of receiving the signal for controlling and projecting digital images, a storage unit (6) storing at least one image representative of a pictogram, characterized in
10 that said projection method comprises the following steps:
a) detection (30) of an event,
b) estimate (32) of the time to reach the event,
c) selection (34) of at least one image representing a pictogram characteristic of the event detected, and
D) establishment (36) of a sequence of images representing an animation of said pictogram, said sequence of images being clocked as a function of the time estimated during the estimation step b);
e) projection (46) of said sequence of images on the roadway.
2. The projection method according to claim 1, the projection system (2) further comprising an imager (18) intended to image a driver of the motor vehicle, and in which the projection method further comprises the following steps:
• determination (38) of the position of the driver in a predefined frame of reference called the projection reference frame (Rp),
• calculation (40) of a transformation matrix (M) as a function of the determined position of the driver,
and in which said sequence of images is established from said at least one selected image, from said transformation matrix (M) and from at least one predefined parameter (ε).
[3" id="c-fr-0003]
3, - projection method according to claim 2, wherein said animation comprises at least the pivoting of the pictogram with respect to a horizontal axis (A-A) and perpendicular to the direction of movement of the vehicle.
[4" id="c-fr-0004]
4, - projection method according to any one of claims 2 and
3, in which the projection method comprises a step of adding (44) at least one shadow zone on said at least one selected image so that the pictogram is perceived in relief by said conductor.
[5" id="c-fr-0005]
5, - projection method according to any one of claims 1 and
4, in which said animation represents an enlargement movement of the pictogram.
[6" id="c-fr-0006]
6, - projection method according to any one of claims 1 to
5, in which said animation represents a movement of movement of the pictogram.
[7" id="c-fr-0007]
7, - projection method according to any one of claims 1 to
6, wherein said animation comprises over-intensification or under-intensification of at least part of an image of the sequence of images.
[8" id="c-fr-0008]
8, - projection method according to any one of claims 1 to
7, in which the event is an event involving the arrival of a fork, a dangerous turn, a speed bump and a motorway exit.
[9" id="c-fr-0009]
9, - projection method according to any one of claims 1 to 7, wherein the detection device is a device for geographic location of the vehicle and wherein the event is an indication of the route to follow to achieve a route selected by the driver.
[10" id="c-fr-0010]
10, - projection method according to any one of claims 1 to 9, which further comprises a step of capturing an image of the car driver and wherein the step of determining the position of the driver in a predefined frame called projection reference frame (Rp) is implemented from the captured image.
[11" id="c-fr-0011]
11, - Projection system (2) of at least one image of a motor vehicle, said projection system (2) comprising:
- a device (4) for detecting an event,
- a device (5) for estimating the time to reach the event,
- a storage unit (6) capable of storing at least one image representing a pictogram characteristic of an event;
- a processing unit (10) suitable for selecting in the storage unit (6) at least one image representing a pictogram characteristic of the event detected, the processing unit (10) being shaped to establish a sequence of images representing an animation of said pictogram, said sequence of images being clocked as a function of the time estimated by the estimation device; and
- a projection device (12) capable of projecting said sequence of images onto the roadway.
[12" id="c-fr-0012]
12, - projection system (2) according to claim 11, wherein the processing unit (10) is capable of determining the position of the conductor in a predefined reference frame called projection reference frame (Rp) from the at least one captured image, the processing unit (10) being able to calculate a transformation matrix (M) as a function of the position of the determined conductor, the processing unit being able to establish said sequence of images from said at at least one selected image, of said transformation matrix and at least one predefined parameter.
[13" id="c-fr-0013]
13, - projection system (2) according to any one of claims 11 and 12, wherein the projection device (12) comprises a light source (16) capable of emitting a light beam, an imaging device (18) capable imaging the sequence of images and a projector (20) capable of projecting the sequence of images onto the roadway.
[14" id="c-fr-0014]
14, - projection system (2) according to any one of claims 11 and 13, comprising an imager capable of capturing the driver and in which the processing unit (10) is capable of finding the position of the
5 conductor on the captured image and define the transformation matrix (M) from the determined conductor position.
Similar technologies:
Publication number | Publication date | Patent title
EP3300942B1|2021-11-17|Method for projecting images by a projection system of a motor vehicle, and associated projection system
WO2017046105A1|2017-03-23|Projection method for a motor vehicle, for projecting an image onto a projection surface
FR2864311A1|2005-06-24|Display system for vehicle e.g. automobile, has extraction unit extracting position on image of scene from red light element of recognized object, and windshield having display zone in which display image is superposed on scene
EP3705335A1|2020-09-09|Lighting device for vehicle with driver assistance information display
EP3306592B1|2019-03-06|Method for projecting an image by a projection system of a motor vehicle, and associated projection system
WO2011092386A1|2011-08-04|Device for displaying information on the windscreen of an automobile
FR2965765A1|2012-04-13|METHOD AND DEVICE FOR FORMING AN IMAGE OF AN OBJECT IN THE ENVIRONMENT OF A VEHICLE
FR3034877A1|2016-10-14|DISPLAY SYSTEM FOR VEHICLE
FR2938660A1|2010-05-21|CONTROL METHOD AND APPARATUS FOR DETERMINING THE PHOTOMETRIC PARAMETERS OF THE PROJECTION OF A SIGN
FR3055431B1|2019-08-02|DEVICE FOR PROJECTING A PIXELIZED IMAGE
FR3056773A1|2018-03-30|DEVICE FOR AIDING THE DRIVING OF A MOTOR VEHICLE
FR2942064A1|2010-08-13|Method for alerting driver of motor vehicle e.g. bus, during event occurred on back side of vehicle, involves displaying pictogram in form of contour of visible part of reflection of vehicle on glass of rear-view mirrors
FR3011090A1|2015-03-27|DATA DISPLAY LENSES HAVING AN ANTI-GLARE SCREEN
FR3053805A1|2018-01-12|IMAGE GENERATING DEVICE FOR HEAD-UP DISPLAY AND METHOD FOR CONTROLLING SUCH A DEVICE
FR2926520A1|2009-07-24|Driving assisting system for automobile, has lighting device with light source coupled to direction control mechanism that controls light beam emitted by light source to sweep windscreen and light preset locations under preset angle
FR3060775A1|2018-06-22|METHOD FOR DETERMINING A DISPLAY AREA OF AN ELEMENT IN A DISPLAY DEVICE
FR3056501A1|2018-03-30|MOTOR VEHICLE IDENTIFICATION ASSISTANCE SYSTEM AND METHOD FOR IMPLEMENTING THE SAME
FR2893573A1|2007-05-25|Frontal obstacle e.g. pedestrian crossing road, detecting method for motor vehicle, involves comparing reference image and image, projected by illuminator device and visualized by camera, to find non-deformed zones assimilated to obstacles
FR3054889B1|2019-11-01|VISUAL DRIVING AIDS SYSTEM
EP3343531B1|2021-07-21|System for communicating information to a user in the vicinity of a motor vehicle
FR3073052B1|2019-09-27|HEAD-HIGH DISPLAY DEVICE FOR VEHICLE
FR3056772B1|2019-10-11|DRIVING ASSISTANCE DEVICE FOR A MOTOR VEHICLE
EP3020036A1|2016-05-18|Control system and method for a device for generating scanned images, image generating device and display including such a system
WO2020007624A1|2020-01-09|Panoramic rear-view device using head-up display cameras
FR3075141A1|2019-06-21|VEHICLE SEAT ASSEMBLY WITH DRIVING AID FUNCTIONS
Patent family:
Publication number | Publication date
US20180090011A1|2018-03-29|
EP3300942B1|2021-11-17|
US10043395B2|2018-08-07|
FR3056775B1|2021-08-20|
EP3300942A1|2018-04-04|
CN107878301A|2018-04-06|
Cited documents:
Publication number | Application date | Publication date | Applicant | Patent title
GB2482951A|2010-08-18|2012-02-22|Gm Global Tech Operations Inc|Vehicle with projector to display information about driving conditions|
EP2689966A1|2012-07-26|2014-01-29|Cloudcar, Inc.|Vehicle content projection|
US20140204201A1|2013-01-21|2014-07-24|Devin L. Norman|External Vehicle Projection System|
EP2896937A1|2014-01-21|2015-07-22|Harman International Industries, Incorporated|Roadway projection system|
US10118537B2|2015-04-10|2018-11-06|Maxell, Ltd.|Image projection apparatus|
US10474907B2|2016-03-10|2019-11-12|Panasonic Intellectual Property Corporation Of America|Apparatus that presents result of recognition of recognition target|
US10118537B2|2015-04-10|2018-11-06|Maxell, Ltd.|Image projection apparatus|
CN108349429B|2015-10-27|2021-03-23|株式会社小糸制作所|Lighting device for vehicle, vehicle system and vehicle|
JP6554131B2|2017-03-15|2019-07-31|株式会社Subaru|Vehicle display system and method for controlling vehicle display system|
CN107218951A|2017-04-21|2017-09-29|北京摩拜科技有限公司|Navigation instruction reminding method, suggestion device, vehicle and navigator|
JP6981174B2|2017-10-25|2021-12-15|トヨタ自動車株式会社|Vehicle headlight device|
DE102017223439B4|2017-12-20|2019-08-08|Audi Ag|Warning device against dangerous situations for a motor vehicle|
CN108638958B|2018-06-08|2021-06-08|苏州佳世达电通有限公司|Warning method and warning device applying same|
JP6734898B2|2018-09-28|2020-08-05|本田技研工業株式会社|Control device, control method and program|
KR20200064182A|2018-11-16|2020-06-08|현대모비스 주식회사|Control system of autonomous vehicle and control method thereof|
DE102018132392A1|2018-12-17|2020-06-18|Bayerische Motoren Werke Aktiengesellschaft|Lighting device for a motor vehicle|
DE102018132391A1|2018-12-17|2020-06-18|Bayerische Motoren Werke Aktiengesellschaft|Lighting device for a motor vehicle|
FR3090907A1|2018-12-21|2020-06-26|Valeo Vision|Method for controlling a pixelated projection module for a motor vehicle|
US10823358B1|2019-12-19|2020-11-03|Valeo Vision|Device and method of directing a light via rotating prisms|
Legal status:
2017-09-29| PLFP| Fee payment|Year of fee payment: 2 |
2018-03-30| PLSC| Publication of the preliminary search report|Effective date: 20180330 |
2018-09-28| PLFP| Fee payment|Year of fee payment: 3 |
2019-09-30| PLFP| Fee payment|Year of fee payment: 4 |
2020-09-30| PLFP| Fee payment|Year of fee payment: 5 |
2021-09-30| PLFP| Fee payment|Year of fee payment: 6 |
Priority:
Application number | Application date | Patent title
FR1659298A|FR3056775B1|2016-09-29|2016-09-29|PROCESS FOR PROJECTING IMAGES BY A PROJECTION SYSTEM OF A MOTOR VEHICLE, AND ASSOCIATED PROJECTION SYSTEM|
FR1659298|2016-09-29|
EP17192243.8A| EP3300942B1|2016-09-29|2017-09-20|Method for projecting images by a projection system of a motor vehicle, and associated projection system|
US15/720,252| US10043395B2|2016-09-29|2017-09-29|Method for projecting images by a projection system of a motor vehicle, and associated projection system|
CN201710914734.4A| CN107878301A|2016-09-29|2017-09-29|Method and associated optical projection system for the projection system projects image by motor vehicles|