PROJECTION METHOD FOR A MOTOR VEHICLE OF AN IMAGE ON A PROJECTION SURFACE
Patent abstract:
The present invention relates to a projection method (MTH) for a motor vehicle (V) of at least one image (Ip) on a projection surface (S) by means of a light module (ML) adapted to project a light beam (Fx), wherein said projection method (MTH) comprises the steps of: detecting an observation position (PosO1) of an observer (O) in a reference frame (RP) of the light module; calculating the observation position (PosO2) of the observer (O) in an image reference frame (RI); projecting said image (Ip) on said projection surface (S) according to said observation position (PosO2) of the observer (O) in said image reference frame (RI), said image (Ip) being integrated in said light beam (Fx) of the light module (ML).
Publication number: FR3041110A1
Application number: FR1558592
Filing date: 2015-09-14
Publication date: 2017-03-17
Inventors: Pierre Albou; Stephan Sommerschuh; Hafid El Idrissi; Xavier Morel
Applicant: Valeo Vision SA
Patent description:
PROJECTION METHOD FOR A MOTOR VEHICLE OF AN IMAGE ON A PROJECTION SURFACE

TECHNICAL FIELD OF THE INVENTION

The present invention relates to a projection method for a motor vehicle of at least one image on a projection surface by means of a light module. It finds a particular but non-limiting application in lighting devices, such as motor vehicle headlamps.

BACKGROUND OF THE INVENTION

A projection method for a motor vehicle of at least one image known to those skilled in the art comprises projecting onto a projection surface, such as the road, a graphic symbol comprehensible to the driver of the motor vehicle in order to assist him in driving. For example, the graphic symbol may be the speed limit not to be exceeded on the road, or the distance separating the vehicle from another motor vehicle. A disadvantage of this state of the art is that this method applies to the driver only.

In this context, the present invention aims to propose a projection method for a motor vehicle of at least one image on a projection surface by means of a light module that applies regardless of the observation position of the observer relative to the motor vehicle, and which thus provides an image that is comprehensible by an observer outside the motor vehicle or by an observer located inside the motor vehicle.

GENERAL DESCRIPTION OF THE INVENTION

To this end, the invention proposes a projection method for a motor vehicle of at least one image on a projection surface by means of a light module adapted to project a light beam, wherein said projection method comprises the steps of:
- detecting an observation position of an observer in a reference frame of the light module;
- calculating the observation position of the observer in an image reference frame;
- projecting said image on said projection surface according to said observation position of the observer in said image reference frame, said image being integrated in said light beam of the light module.
Thus, as will be seen in detail below, the projection of the image to be projected depends on the observation position of an observer and is introduced into the light beam of the light module. In this way, the image is made visible and comprehensible by said observer, whether the latter is located inside the passenger compartment of the motor vehicle or outside the motor vehicle.

According to non-limiting embodiments, the projection method may further comprise one or more of the following additional characteristics.

According to a non-limiting embodiment, the projection of said image comprises the sub-steps of:
- from a light intensity map of the light beam of the light module comprising a plurality of intensity indicators, calculating a luminance map on the projection surface resulting in luminance points;
- calculating the position of each luminance point in the image reference frame;
- from its position and from the observation position of the observer in said image reference frame, defining the coordinates of the projection of each luminance point on the image plane of said image to be projected;
- if said projection belongs to said image to be projected, defining the coordinates of the corresponding pixel;
- for each projection of a luminance point belonging to said image to be projected, correcting the intensity value of the corresponding intensity indicator as a function of the color of the corresponding pixel.

According to a non-limiting embodiment, the calculation of the luminance map on the projection surface comprises the steps of:
- performing a first calculation of the position of said intensity indicators on the projection surface, resulting in impact points;
- performing a second calculation of an illumination map of said impact points;
- performing a third calculation of the luminance map of said impact points from the illumination map, resulting in said luminance points.
According to a non-limiting embodiment, said first calculation is based on:
- the position of the light module; and
- the direction of said intensity indicators.

According to a non-limiting embodiment, said second calculation is based on:
- the calculated position of the intensity indicators;
- the luminous intensity of said intensity indicators; and
- the distance between the light module and said impact points.

According to a non-limiting embodiment, the image to be projected is calculated as a function of properties of the projection surface.

According to a non-limiting embodiment, said third calculation is based on:
- the illumination of said impact points;
- a position vector between the position of an impact point of the illumination map and the observation position of the observer; and
- a light scattering function.

According to a non-limiting embodiment, the computation of the observation position of an observer and of the position of a luminance point in the image reference frame is based on at least one transformation matrix from the light module reference frame to said image reference frame, which takes into account at least one of the following parameters:
- the position of the image to be projected in the light module reference frame;
- the rotation of the image to be projected.

According to a non-limiting embodiment, the definition of the coordinates of a projection of a luminance point comprises the sub-steps of:
- calculating the point of intersection between: the line passing through the observation position of the observer in said image reference frame and through the position of said luminance point in said image reference frame; and the image plane of the image to be projected;
- determining the coordinates of said intersection point from the dimensions of said image to be projected.

According to a non-limiting embodiment, the projection surface is considered a Lambertian diffuser.

According to a non-limiting embodiment, the projection surface is the ground or a wall.
According to a non-limiting embodiment, the observer is outside the motor vehicle or is located in said motor vehicle.

According to a non-limiting embodiment, the image appears distorted in the light module reference frame and undistorted in the image reference frame.

According to a non-limiting embodiment, the calculation of the observation position of the observer in the image reference frame is a function of the position and the rotation of the image to be projected in the light module reference frame, said rotation being a function of a site angle greater than -90° and less than or equal to 0°.

According to a non-limiting embodiment, the calculation of the position of each luminance point in the image reference frame is a function of the position and the rotation of the image to be projected in the light module reference frame, said rotation being a function of a site angle greater than -90° and less than or equal to 0°.

According to a non-limiting embodiment, the site angle is equal to 0°.

According to a non-limiting embodiment, the site angle is substantially equal to -35°.

According to a non-limiting embodiment, the projection of each luminance point on the image plane of said image to be projected is central, so as to produce a conical perspective effect.

According to a non-limiting embodiment, the correction of the intensity value of the intensity indicator is carried out as follows: Vi = σ.Vi0*Co/255, with Vi the corrected intensity value, Vi0 the initial intensity value of the intensity indicator of the light module, Co the color of the corresponding pixel, and σ a maximum overcurrent factor.

According to a non-limiting embodiment, the correction of the intensity value of the intensity indicator is carried out as follows: Vi = φ.Co, with Vi the corrected intensity value, φ a luminance coefficient, and Co the color of the corresponding pixel.

According to a non-limiting embodiment, the observation position of an observer is calculated by means of a camera.
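As a minimal sketch, the two intensity-correction variants above can be expressed as follows. The function names and numeric example values are illustrative only; Vi0, Co, σ and φ are the quantities named in the text:

```python
def correct_intensity_overdrive(vi0, co, sigma=1.0):
    """First variant: Vi = sigma * Vi0 * Co / 255, with vi0 the initial
    intensity value of the indicator, co the pixel color in [0, 255] and
    sigma the maximum overcurrent factor."""
    return sigma * vi0 * co / 255.0


def correct_intensity_luminance(co, phi):
    """Second variant: Vi = phi * Co, with phi a luminance coefficient."""
    return phi * co


# A white pixel (Co = 255) leaves the intensity unchanged when sigma = 1;
# a black pixel (Co = 0) extinguishes the indicator.
print(correct_intensity_overdrive(1000.0, 255))  # -> 1000.0
print(correct_intensity_overdrive(1000.0, 0))    # -> 0.0
```

With σ > 1 the first variant can briefly drive an indicator above its nominal intensity, which is presumably what the "maximum overcurrent factor" bounds.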
According to a non-limiting embodiment, the light intensity map is stored in a memory.

According to a non-limiting embodiment, the projection of said image on the projection surface further comprises the sub-step of projecting, by means of said light module, the light beam with the corrected intensity values of the intensity indicators on the projection surface.

It is also proposed a lighting device for a motor vehicle comprising a processing unit and a light module adapted to project a light beam, in which:
- said processing unit is adapted to: detect an observation position of an observer in a light module reference frame; calculate the observation position of the observer in an image reference frame;
- said lighting device is adapted to project said image onto said projection surface as a function of said observation position of the observer in the image reference frame, said image being integrated into said light beam of the light module.

According to a non-limiting embodiment, for the projection of said image onto the projection surface, said processing unit is furthermore adapted to:
- from a light intensity map of the light beam of the light module comprising a plurality of intensity indicators, calculate a luminance map on the projection surface resulting in luminance points;
- calculate the position of each luminance point in the image reference frame;
- from its position and from the observation position of the observer in said image reference frame, define the coordinates of the projection of each luminance point on the image plane of said image to be projected;
- if said projection belongs to said image to be projected, define the coordinates of the corresponding pixel;
- for each projection of a luminance point belonging to said image to be projected, correct the intensity value of the corresponding intensity indicator as a function of the color of the corresponding pixel.
According to a non-limiting embodiment, for the projection of said image on the projection surface, the light module is adapted to project on the projection surface the light beam with the corrected intensity values of the intensity indicators.

According to a non-limiting embodiment, the lighting device is a projector or a rear light.

According to a non-limiting embodiment, the processing unit is integrated in the light module.

It is also proposed a motor vehicle comprising a lighting device according to the preceding characteristics.

BRIEF DESCRIPTION OF THE FIGURES

The invention and its various applications will be better understood on reading the description which follows and on examining the figures that accompany it:
- FIG. 1 represents a flowchart of the steps of the method of projecting at least one image onto a projection surface according to a non-limiting embodiment of the invention;
- FIG. 2 shows a motor vehicle comprising a lighting device adapted to implement the projection method of FIG. 1 according to a non-limiting embodiment;
- FIG. 3 shows a light intensity map established according to a step of the projection method of FIG. 1 according to a non-limiting embodiment;
- FIG. 4 shows a projector which integrates a light module, and the direction of a light beam of said light module, said light module being adapted to perform at least one step of the projection method of FIG. 1;
- FIG. 5 represents a flowchart illustrating sub-steps of a step of establishing a luminance map of the projection method of FIG. 1 according to a non-limiting embodiment;
- FIG. 6 shows the projector of FIG. 4 and an impact point of the light beam on the ground;
- FIG. 7 shows the projector of FIG. 6 and the illumination of the impact point;
- FIG. 8 indicates the site angle and the azimuth angle taken into account in a step of computing the observation position of an observer of the projection method of FIG. 1;
- FIG.
9 schematically represents an impact point, an observation position of an observer outside the motor vehicle in an image reference frame, and an image to be projected by the projection method of FIG. 1;
- FIG. 10 illustrates an image projected according to the projection method of FIG. 1, seen from the point of view of the driver of said motor vehicle but comprehensible only to an observer outside the motor vehicle;
- FIG. 11 illustrates an image projected according to the projection method of FIG. 1, seen from the point of view of a rear passenger of said motor vehicle but comprehensible only to an observer outside the motor vehicle;
- FIG. 12 illustrates an image projected according to the projection method of FIG. 1, seen from the point of view of said observer outside the motor vehicle and comprehensible to said observer;
- FIG. 13 represents a flowchart illustrating sub-steps of a step of defining the coordinates of a projection of a luminance point of the projection method of FIG. 1 according to a non-limiting embodiment;
- FIG. 14 diagrammatically represents the impact point, the observation position of the observer outside the motor vehicle and the image to be projected of FIG. 9, as well as the coordinates of the intersection between the impact point and the image to be projected;
- FIG. 15 schematically shows the impact point, the observation position of the observer outside the motor vehicle and the image to be projected of FIG. 14, normalized;
- FIG. 16 schematically represents pixels of the image to be projected of FIG. 14; and
- FIG. 17 illustrates a lighting device adapted to implement the projection method of FIG. 1.

DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Elements that are identical in structure or function and appear in different figures retain, unless otherwise specified, the same references.
The projection method MTH for a motor vehicle of at least one image onto a projection surface by means of a light module ML according to the invention is described with reference to FIGS. 1 to 16. By motor vehicle is meant any type of motorized vehicle.

As illustrated in FIG. 1, the method MTH comprises the steps of:
- detecting an observation position PosO1 of an observer O in a light module reference frame RP (illustrated step DET_POS(O, PosO1, RP));
- calculating the observation position PosO2 of the observer O in an image reference frame RI (illustrated step DET_POS(O, PosO2, RI));
- projecting said image Ip onto said projection surface S as a function of said observation position PosO2 of the observer O in said image reference frame RI, said image Ip being integrated into said light beam Fx of the light module ML (illustrated step PROJ(Fx, Ip, S)).

As shown in FIG. 1, the projection of said image Ip comprises the sub-steps of:
- 3a) from a light intensity map CLUX of the light module ML comprising a plurality of intensity indicators pf, calculating a luminance map CLUM on the projection surface S resulting in luminance points pl (illustrated step CALC_CLUM(CLUX, S, pl));
- 3b) calculating the position PosL2 of each luminance point pl in the image reference frame RI (illustrated step CALC_POS(pl, PosL2, O, RI));
- 3c) from its position PosL2 and from the observation position PosO2 of the observer O in said image reference frame RI, defining the coordinates ply, plz of the projection plr of each luminance point pl in the image plane P1 of said image to be projected Ip (illustrated step DEF_PLR(plr, P1, PosL2, PosO2));
- 3d) if said projection plr belongs to said image to be projected Ip, defining the coordinates lig, col of the corresponding pixel Pix (illustrated step DEF_PIX(Pix(lig, col), ply, plz));
- 3e) for each projection plr of a luminance point pl belonging to said image to be projected Ip, correcting the intensity value Vi of the corresponding intensity indicator pf as a
function of the color Co of the corresponding pixel Pix (illustrated step MOD_PF(pf, Vi, Pix, Co)).

It should be noted that step 3a in particular, as well as step 3b, can be performed before the iterations of the following steps. More generally, the steps described are not necessarily performed sequentially, i.e. in the same iteration loop, but can be the subject of different iterations, with different iteration frequencies.

The projection step of the image Ip further comprises a sub-step 3f) of projecting on the projection surface S the light beam Fx with the intensity values Vi corrected for the intensity indicators pf (step illustrated in FIG. 1 PROJ(ML, Fx, Vi, pf)).

The projection method MTH is adapted to project one or more images Ip at the same time. In the remainder of the description, the projection of a single image is taken as a non-limiting example. It should be noted that the projection can be done in front of the motor vehicle V, at the rear or on its sides.

The light module ML makes it possible to produce a light beam Fx, said light beam Fx comprising a plurality of light rays Rx which follow different directions. The light module ML makes it possible to modify the intensity value Vi of each intensity indicator pf; it is therefore a digitized light module. The image to be projected Ip is thus integrated, as described below, into the light beam Fx of the light module ML.

Note that the light intensity map CLUX is discretized so as to be digitally exploitable. The light module ML is considered as a point light source from which the space around said light source is discretized. Thus, an intensity indicator pf is a point in the space illuminated by the light module ML which has a certain direction dir1 and a given intensity value Vi provided by the light module ML in said direction dir1. The direction dir1 is given by two angles Θ and δ (described later).
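The per-indicator loop formed by sub-steps 3a) to 3e) can be sketched as follows. Every helper below is a hypothetical placeholder reduced to a toy computation so that the control flow runs end to end; none of them is the patent's actual implementation:

```python
import numpy as np

# Hypothetical placeholders for sub-steps 3a) to 3d); each is reduced to a toy
# computation so that the overall control flow runs end to end.
def luminance_points(clux):                  # 3a: CLUX -> luminance points pl (RP frame)
    return [np.array([5.0, 1.0, 0.0])]

def to_image_frame(M, p):                    # 3b (and step 2): frame change RP -> RI
    return (M @ np.append(p, 1.0))[:3]

def project_on_plane(pos_o2, pos_l2):        # 3c: projection plr -> (ply, plz)
    return pos_l2[1], pos_l2[2]

def pixel_of(ply, plz, n_lig, n_col):        # 3d: (ply, plz) -> (lig, col) or None
    lig, col = int(plz), int(ply)
    return (lig, col) if 0 <= lig < n_lig and 0 <= col < n_col else None

M = np.eye(4)                                # toy RP -> RI transformation matrix
pos_o2 = to_image_frame(M, np.array([0.0, 0.0, 1.5]))

corrected = []
for pl in luminance_points(clux=None):
    pos_l2 = to_image_frame(M, pl)
    ply, plz = project_on_plane(pos_o2, pos_l2)
    pix = pixel_of(ply, plz, n_lig=4, n_col=4)
    if pix is not None:                      # 3e: correct Vi from the pixel color Co
        corrected.append(pix)
print(corrected)  # -> [(0, 1)]
```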
In a non-limiting embodiment, the projection surface S is the ground (referenced S1) or a wall (referenced S2). The image Ip that will be projected on the ground or the wall is thus a 2D image.

In a non-limiting embodiment illustrated in FIG. 2, a lighting device DISP of the motor vehicle V comprises at least one light module ML and is adapted to implement the projection method MTH. In the non-limiting example illustrated, the lighting device is a projector.

As will be seen below, the observation position of the observer O is taken into account for the projection of the image to be projected Ip. For this purpose, the image to be projected Ip will be distorted so that it is comprehensible by the observer in question, whether it is the driver, a front or rear passenger of the motor vehicle, or an observer outside the motor vehicle. We thus place ourselves from the point of view of the observer O for whom we want to project the image Ip. From the observer's point of view, the image Ip will not be distorted. From a point of view different from that of said observer, the image Ip will be distorted.

In non-limiting exemplary embodiments, an observer O outside the vehicle is a pedestrian, a driver of another motor vehicle, a cyclist, a motorcyclist, etc. He can be at the front, at the rear or on one side of the vehicle V.

In a non-limiting embodiment, the projected image Ip comprises at least one graphic symbol. This graphic symbol will make it possible to improve the comfort and/or the safety of the observer O. In a non-limiting example, if the observer O is the driver of the motor vehicle, the graphic symbol may represent the speed limit not to be exceeded on the road, a STOP graphic symbol when the motor vehicle is backing up and an obstacle (pedestrian, wall, etc.) is too close to the motor vehicle, or an arrow that assists when the motor vehicle is about to turn on a road, etc.
In a non-limiting example, if the observer O is outside the motor vehicle, such as a pedestrian or a cyclist, the graphic symbol may be a STOP signal to indicate that he must not cross in front of the motor vehicle because the latter is about to start again. In a non-limiting example, if the observer O is outside the motor vehicle, such as the driver of a following motor vehicle, the graphic symbol may be a STOP signal when the motor vehicle considered brakes, so that the driver of the following vehicle brakes in turn. In another non-limiting example, if the observer O is outside the motor vehicle and is a motor vehicle pulling out to the side, the graphic symbol may be a warning symbol to indicate to said motor vehicle to fall back because another motor vehicle is arriving in front.

As illustrated in FIG. 2, the projected image Ip is a STOP symbol. It is oriented on the projection surface S, here the ground in the non-limiting example illustrated, so that the observer O can see and understand this STOP symbol. In the non-limiting example illustrated, the projection is at the front of the motor vehicle V and the observer O is outside the motor vehicle V.

The different steps of the projection method MTH are described in detail below.

1) Detection of the observation position PosO1 of the observer O in the light module reference frame RP

To detect the observation position PosO1 of the observer O in the light module reference frame RP, it is necessary to detect the position of the observer O himself in the light module reference frame RP. For this purpose, in a non-limiting example, a camera (not shown) is used. It is adapted to detect and calculate the position of an observer O who is outside the motor vehicle V. In non-limiting embodiments, the camera is replaced by a radar or a lidar.
For an observer O who is inside the motor vehicle (driver or passenger), reference observation positions are considered. Thus, in a non-limiting example, it is considered that the driver's eye is at the position PosO1 (1.5; -0.5; 1) (expressed in meters) relative to the light module ML in the case of a motor vehicle that is a car. Of course, if the motor vehicle is a truck, the position of the eye relative to the light module ML is different.

For an outside observer, from the position of said observer O, his observation position PosO1, which corresponds to the position of his eye, can be deduced. For example, the position of his eye is placed at about 1.5 meters above the ground. Such a detection of the position of the observer being known to those skilled in the art, it is not described in detail here.

2) Calculation of the observation position PosO2 of the observer O in the image reference frame RI

The observation position PosO1 of the observer O has been determined previously in the light module reference frame RP. It will be used for the change of reference frame described below. This step performs a change of reference frame: the light module reference frame RP (defined by the axes pjx, pjy, pjz) is transferred to the image reference frame RI (defined by the axes Ix, Iy, Iz) of the image to be projected Ip. The calculation of the observation position PosO2 of the observer O in the image reference frame RI is based on at least one transformation matrix M from the light module reference frame RP to said image reference frame RI.
In a non-limiting embodiment, the position PosO2 is expressed as a homogeneous column vector (x; y; z; 1), compatible with the transformation matrix M described below.

In a non-limiting embodiment, said at least one transformation matrix M takes into account at least one of the following parameters:
- the position PosIp of the image to be projected Ip in the light module reference frame RP;
- the rotation RotIp of the image to be projected Ip in the light module reference frame RP;
- the scale of the image to be projected Ip.

The position PosIp of the image to be projected Ip is deduced from the light module reference frame RP according to a translation along the three axes pjx, pjy, pjz of said light module reference frame RP.

In a non-limiting embodiment, the transformation matrix M is of the form:

M = | a b c t |
    | d e f u |
    | g h i v |
    | 0 0 0 1 |

where a, e and i are the affinity terms; b, c, d, f, g and h the rotation terms; and t, u and v the translation terms.

The affinity terms a, e and i make it possible to enlarge or shrink the image Ip; for example, the overall size (homothety) is increased by 50% or reduced by 20%. It is also possible to apply the enlargement or shrinkage factors in only one of the dimensions (non-homothetic transformation). In this way, depending on the position PosO2 of the eye of the observer O, it is possible to project an image that appears more or less large to the observer O.

It should be noted that the rotation RotIp depends on the three following angles:
- β: azimuth (which indicates whether the image to be projected is to the right or to the left of the observer, for example when the observer looks to the right or to the left);
- Ω: devers (which indicates the inclination of the image to be projected Ip, for example when the observer tilts his head to the side, which amounts to tilting the image Ip);
- ε: site (which indicates the effect that one wants to give to the graphic symbol of the image Ip).

FIG. 8 illustrates the site and azimuth angles and the plane P1 of the image to be projected Ip. We thus have PosO2 = M*PosO1.
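A sketch of building M and applying PosO2 = M*PosO1 follows, under assumptions the patent does not fix: the rotation composition order (azimuth, then site, then devers), a uniform scale carried by the affinity terms, and the translation PosIp entering directly rather than through the inverse transform:

```python
import numpy as np

def transform_matrix(pos_ip, beta, omega, eps, scale=1.0):
    """Build a 4x4 RP -> RI transformation matrix from the image position
    PosIp, the rotation RotIp (azimuth beta, devers omega, site eps) and a
    uniform scale. Composition order and axis assignment are assumptions."""
    cb, sb = np.cos(beta), np.sin(beta)
    co, so = np.cos(omega), np.sin(omega)
    ce, se = np.cos(eps), np.sin(eps)
    Rz = np.array([[cb, -sb, 0.0], [sb, cb, 0.0], [0.0, 0.0, 1.0]])  # azimuth beta
    Ry = np.array([[ce, 0.0, se], [0.0, 1.0, 0.0], [-se, 0.0, ce]])  # site eps
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]])  # devers omega
    M = np.eye(4)
    M[:3, :3] = scale * (Rz @ Ry @ Rx)  # affinity terms a, e, i carried by scale
    M[:3, 3] = pos_ip                   # translation terms t, u, v
    return M

def to_image_frame(M, pos):
    """PosO2 = M * PosO1 in homogeneous coordinates (x, y, z, 1)."""
    return (M @ np.append(pos, 1.0))[:3]

# Image placed 10 m ahead of the light module, no rotation, no scaling.
M = transform_matrix(pos_ip=np.array([10.0, 0.0, 0.0]), beta=0.0, omega=0.0, eps=0.0)
print(to_image_frame(M, np.array([1.5, -0.5, 1.0])))  # observer position in RI
```

Applying a scale of 2 with no rotation doubles all coordinates, which is the homothety case described in the text.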
PosO1 is the observation position of the observer O used for the projection of the image Ip, in the light module reference frame RP. PosO2 is the observation position of the observer O used for the projection of the image Ip, in the image reference frame RI. Thus, the position and the rotation of the image to be projected Ip are adapted according to the observer O; in this way, the image to be projected Ip will be comprehensible by the observer O. This gives an affine deformation of the image from the desired point of view, called anamorphosis.

Thus, for the eye of a driver of a car, the projected image Ip is not distorted. Similarly, for the eye of a driver of a truck, although it is positioned well above the light module ML, the projected image Ip is also not distorted. Finally, for an outside observer, the projected image Ip is not distorted either. It should be noted that the projected image Ip can thus be clearly visible to the observer, since its projection depends on the observation position of the observer O and its scale can be modulated at will. Thus, even if he is far from the motor vehicle, the observer O can always understand and see the graphic symbol(s) of the projected image Ip.

3) Projection of the image Ip on the projection surface S

This step comprises the following sub-steps.

3a) Calculating a luminance map CLUM

In a non-limiting embodiment, the light intensity map CLUX is stored in a memory. It will have been previously established during the design of the product, using a goniophotometer (not shown). The goniophotometer is for example of type A. The light intensity map CLUX gives the intensity indicators pf of the light module ML, considered as a point light source.
The direction dir1 of a light ray Rx starting from the light module ML is expressed as a function of the two angles Θ and δ, with δ the vertical rotation V of the goniophotometer and Θ its horizontal rotation H. The light intensity map CLUX thus comprises a plurality of intensity indicators pf whose direction dir1 is given by these two angles, with Θ the horizontal angle of the intensity indicator pf and δ its vertical angle. The light intensity map CLUX is shown in FIG. 3; an intensity indicator pf with polar coordinates δ = 0V, Θ = 0H can be seen there. The light intensity map CLUX thus makes it possible to determine an intensity Ι(Θ, δ) for a given direction. The light beam Fx is discretized into M×N directions, where M and N are the numbers of discretization points (or intensity indicators) of the light beam Fx along the vertical and horizontal directions respectively. An intensity indicator pf is therefore defined by its direction dir1 and its intensity Ι(Θ, δ). FIG. 4 illustrates a lighting device DISP comprising a light module ML with the direction of a light beam Fx.

The calculation of the luminance map CLUM on the projection surface S comprises the following sub-steps, illustrated in FIG. 5:
- i) a first calculation of the position POSpf of said intensity indicators pf on the projection surface S, resulting in impact points pi (illustrated step CALC_POSF(pf, POSpf, pi));
- ii) a second calculation of an illumination map CECL of said impact points pi (illustrated step CALC_CECL(pi, CECL));
- iii) a third calculation of the luminance map CLUM of said impact points pi from the illumination map CECL, resulting in said luminance points pl (illustrated step CALC_CLUM(pi, CECL)).

The different sub-steps are detailed below. It will be noted that the calculations below are made as a function of the projection surface S (ground S1 or wall S2).
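A toy discretized map can illustrate the structure of CLUX. The spherical parametrisation used for dir1 below is an assumption, since the patent's exact angle convention for the type-A goniophotometer is not reproduced here; the class name and numeric values are illustrative:

```python
import numpy as np

def direction(theta, delta):
    """Direction dir1 of a ray for horizontal angle theta and vertical angle
    delta (assumed spherical convention; the actual goniophotometer formula
    may differ). Returns a unit vector."""
    return np.array([np.cos(delta) * np.cos(theta),
                     np.cos(delta) * np.sin(theta),
                     np.sin(delta)])

class IntensityMap:
    """Discretized light intensity map CLUX: an N x M grid of intensity
    indicators pf, each carrying an intensity I(theta, delta)."""
    def __init__(self, thetas, deltas, intensities):
        self.thetas, self.deltas, self.I = thetas, deltas, intensities

    def indicator(self, i, j):
        """Indicator pf at grid cell (i, j): its direction dir1 and I(theta, delta)."""
        return direction(self.thetas[j], self.deltas[i]), self.I[i, j]

thetas = np.radians(np.linspace(-10.0, 10.0, 5))  # horizontal discretization (H)
deltas = np.radians(np.linspace(-5.0, 0.0, 3))    # vertical discretization (V)
clux = IntensityMap(thetas, deltas, np.full((3, 5), 100.0))  # toy: 100 cd everywhere
d, I = clux.indicator(0, 0)
print(I)  # -> 100.0
```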
Sub-step i)

The first calculation is based on:
- the position POSpj of the light module ML in the Cartesian coordinate system x, y, z; and
- the direction dir1 of said intensity indicators pf described above.

For the ground S1, the position POSpf1 of the intensity indicator pf on the ground is obtained in the Cartesian coordinate system x, y, z by intersecting the ray with the ground plane:

POSpf1 = POSpj - (POSpj.z / dir1.z) * dir1

with POSpj.z the value z of the position of the light module ML (height of the light module above the ground) and dir1.z the value z of the direction vector of the light ray Rx.

For the wall S2, the position POSpf2 of the intensity indicator pf on the wall is obtained in the Cartesian coordinate system x, y, z by intersecting the ray with the wall plane:

POSpf2 = POSpj + (D / dir1.x) * dir1

with dir1.x the value x of the direction vector of the light ray Rx, and D the distance between the light module ML and the wall. In a non-limiting example, D is equal to 25 meters. This gives an impact point pi (position POSpf1 or POSpf2) on the ground S1 or on the wall S2. FIG. 6 illustrates a non-limiting example of an impact point pi on a projection surface S which is the ground S1.

Sub-step ii)

Once the impact point pi on the ground S1 or on the wall S2 is determined, the illumination E of this impact point pi is calculated from the intensity Ι(Θ, δ) of the intensity indicator pf determined previously. For the ground S1, the illumination ER of the impact point pi on the ground is obtained from this intensity and from dist1, the distance between the impact point pi and the light module ML. For the wall S2, the illumination EM of the impact point pi on the wall is obtained likewise, with dist1 the distance between the impact point pi and the light module ML. FIG. 7 illustrates the illumination E (delimited by a dashed circle) of an impact point pi on a projection surface S which is the ground S1.
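The ray-plane intersections of sub-step i) follow directly from the geometry. The illumination law of sub-step ii) was lost in extraction, so the standard photometric inverse-square law with an incidence cosine is used below as an assumption; all numeric values are illustrative:

```python
import numpy as np

def impact_on_ground(pos_pj, dir1):
    """Impact point pi on the ground S1 (plane z = 0): solve
    POSpj.z + t * dir1.z = 0 along the ray POSpj + t * dir1."""
    t = -pos_pj[2] / dir1[2]
    return pos_pj + t * dir1

def impact_on_wall(pos_pj, dir1, D):
    """Impact point pi on the wall S2, at distance D from the module along x
    (assumption on where D is measured from)."""
    t = D / dir1[0]
    return pos_pj + t * dir1

def illumination(pos_pj, pi, intensity, cos_incidence):
    """Illumination E of the impact point pi, assumed to follow
    E = I * cos(incidence) / dist**2, with dist = |pi - POSpj| (dist1)."""
    dist = np.linalg.norm(pi - pos_pj)
    return intensity * cos_incidence / dist**2

pos_pj = np.array([0.0, 0.0, 0.65])            # headlamp 0.65 m above the ground
dir1 = np.array([0.995, 0.0, -0.0997])         # slightly downward-pointing ray
dir1 /= np.linalg.norm(dir1)
pi = impact_on_ground(pos_pj, dir1)            # impact roughly 6.5 m ahead
E = illumination(pos_pj, pi, intensity=25000.0, cos_incidence=abs(dir1[2]))
```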
Sub-step iii)

The third calculation is based on:
- the illumination E of said impact points pi;
- a position vector Roeil/Moeil between the position of an impact point pi of the illumination map CECL and the observation position PosO1 of the observer O (in the light module reference frame RP); and
- a light scattering function d.

d is a known function which makes it possible to calculate the scattering of light by the projection surface S. It will be noted that it varies according to the nature of the projection surface S. For example, the function d is different if the surface is asphalt, concrete, tar, cobblestones, etc. For the ground S1, the luminance LR of the impact point pi on the ground is thus obtained using the z value of the normalized vector Roeil. For the wall S2, the luminance LM of the impact point pi on the wall is obtained using the x value of the normalized vector Moeil.

In a non-limiting embodiment, it is assumed that the projection surface S emits uniformly in all directions; in this case, the scattering parameter d does not depend on the angles δ and Θ. In a non-limiting embodiment, the projection surface S is considered a Lambertian diffuser (for example a gray body). There is then a constant luminance on the projection surface S, proportional to the illumination E, and in this case the scattering function d is a cosine. In this case:

LR = (a/π) * ER and LM = (a/π) * EM

where a is the albedo of the material. In non-limiting examples, the albedo of asphalt is 7%, and that of concrete varies between 17% and 27%.

3b) Calculating the position PosL2 of the luminance points pl in the image reference frame RI

The position PosL1 of a luminance point pl has been determined previously in the light module reference frame RP. It will be used for the change of reference frame described below.
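For the Lambertian case, the relation L = (a/π)·E can be checked numerically. The example illuminance is illustrative; the albedo values are those quoted in the text:

```python
import math

def lambertian_luminance(E, albedo):
    """Luminance of a Lambertian (gray-body) diffuser: L = (albedo / pi) * E,
    the same in every viewing direction."""
    return albedo / math.pi * E

E = 20.0  # lux at the impact point (illustrative value)
L_asphalt = lambertian_luminance(E, 0.07)   # albedo of asphalt ~7 %
L_concrete = lambertian_luminance(E, 0.27)  # concrete, upper bound 27 %
print(L_asphalt < L_concrete)  # -> True
```

The direction-independence of L is what lets the method precompute a single luminance value per impact point instead of one per viewing angle.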
In the same way as for the calculation of the observation position Pos02 of the observer O, this step performs a change of reference frame. Indeed, one passes from the light module reference frame RP (defined by the axes pjx, pjy, pjz) to the image reference frame RI (defined by the axes Ix, Iy, Iz) of the image to be projected Ip. The calculation of the position PosL2 of a luminance point pl in the image reference frame RI is based on said at least one transformation matrix M from the light module reference frame RP to said image reference frame RI (transformation matrix M described above). In a non-limiting embodiment, the position PosL2 is of the same form as the position Pos02 previously described. It will be noted that the transformation matrix M has been described during the calculation of the observation position Pos02 of the observer O in the image reference frame RI. It is not detailed again here. We thus have PosL2 = M*PosL1, where PosL1 is the position of the luminance point pl in the light module reference frame RP and PosL2 is the position of the luminance point pl in the image reference frame RI. FIG. 9 illustrates the image to be projected Ip as well as the image reference frame RI. One can also see the luminance point pl and the observer's eye O (which corresponds to the observation position) with their respective positions PosL2 and Pos02 defined in the image reference frame RI. Note that although the projected image Ip on the ground or the wall is in 2D (two dimensions), a 3D effect can be obtained by adjusting the site angle ε seen previously. The observer O (whether the driver, a passenger, or an outside observer) will then see the image in perspective. For this purpose, the site angle ε is greater than -90°. In particular, it is greater than -90° and less than or equal to 0°. The 3D effect is thus visible between 0° and -90° (not included). Note that at -90° the image Ip lies flat on the ground and is therefore without 3D effect.
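The change of reference frame above (PosL2 = M*PosL1) is a standard homogeneous-coordinate transform. A sketch under the assumption that M is a 4x4 matrix combining the rotation Rotlp and the translation Poslp of the image to be projected (names illustrative):

```python
def apply_transform(M, pos):
    """Map a point from the light module reference frame RP to the image
    reference frame RI: PosL2 = M * PosL1, with M a 4x4 homogeneous
    transformation matrix and pos an (x, y, z) triple."""
    x, y, z = pos
    v = (x, y, z, 1.0)                                  # homogeneous coordinates
    out = [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]
    return tuple(out[:3])
```

With M equal to the identity plus a translation, a luminance point is simply shifted into the image frame; the same helper applies to the observation position Pos02.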
Figures 10 to 12 illustrate a projected image Ip which is a pyramid. An observer O who is outside the motor vehicle, such as a pedestrian, is taken as a non-limiting example. The pyramid is visible from three particular points of view, the driver's point of view (figure 10), the point of view of a rear passenger (figure 11) and the pedestrian's point of view (figure 12), but it is seen with the 3D effect from only one point of view. In the non-limiting example illustrated, only the pedestrian will see the pyramid in 3D (as illustrated in Figure 12). From the point of view of the driver or the passenger, the pyramid appears distorted. In a non-limiting embodiment variant, the site angle ε is equal to 0°. The observer O looks straight ahead. In this case, the observer O will see the image, namely here the pyramid, as if it were standing upright. In a non-limiting embodiment variant, the site angle ε is substantially equal to -35°. This makes it possible to have a 3D effect raised in the direction of the road. The plane P1 of the image Ip is then perpendicular to the observation direction of the observer O. If the angle ε is different from -90°, the pyramid will be visible in 3D but more or less inclined.
• 3c) Defining the coordinates ply, plz of the projection plr of each luminance point
As illustrated in FIG. 13, in a non-limiting embodiment, the definition of the coordinates ply, plz of a projection plr of a luminance point pl comprises the sub-steps of:
i) calculating the point of intersection Int between (illustrated sub-step CALC_INT(Pos02, PosL2, P1)):
- the line V(Pos02, PosL2) passing through the observation position Pos02 of the observer O in said image reference frame RI and through the position PosL2 of said luminance point pl in said image reference frame RI; and
- the image plane P1 of the image to be projected Ip;
ii) determining the coordinates ply, plz of said intersection point Int from the dimensions L1, H1 of said image to be projected Ip (illustrated sub-step DEF_COORD(Int, L1, H1)).
These two sub-steps are described below.
o sub-step i) In the image reference frame RI, the intersection point Int between the line (eye, luminance point) and the image plane P1 is the point of that line for which Ix = 0. We thus have the following, with:
- V(Pos02, PosL2), the vector representing the line (eye, luminance point) in the image reference frame RI;
- V(Pos02, PosL2).x, the value x of that vector;
- Int, the intersection point between the line (eye, pl) and the image to be projected Ip in the image reference frame RI; the point of intersection Int is thus the projection plr of the luminance point pl on the image plane P1 of the image to be projected Ip;
- PosL2.x, the value x of the position of the luminance point pl;
- Pos02.x, the value x of the observation position of the observer.
Note that it is assumed that the observation position of the observer O is placed on the axis Ix. FIG. 14 illustrates the image to be projected Ip, the intersection point Int which corresponds to the projection plr of the luminance point pl on said plane P1, and the vector V(Pos02, PosL2) (shown in dotted lines). It should be noted that the projection plr is of central type, so as to produce a conical perspective effect. Subsequently, the terms projection plr and central projection plr are used interchangeably.
o sub-step ii) The coordinates ply, plz of the central projection plr of the luminance point pl in the image reference frame RI correspond to the coordinates along the axis Iy (vertical) and along the axis Iz (horizontal) of the position of the intersection point Int determined previously. In a non-limiting embodiment, they are expressed in meters.
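Sub-step i) above picks out the point of the line (eye, luminance point) with Ix = 0. A sketch of that computation, with positions expressed in the image reference frame RI (names illustrative):

```python
def intersect_image_plane(pos_o2, pos_l2):
    """Intersection Int of the line (Pos02, PosL2) with the image plane
    P1, i.e. the plane Ix = 0; Int is the central projection plr of the
    luminance point pl on the image plane."""
    v = tuple(l - o for l, o in zip(pos_l2, pos_o2))    # vector V(Pos02, PosL2)
    t = -pos_o2[0] / v[0]                               # parameter where x = 0
    return tuple(o + t * vi for o, vi in zip(pos_o2, v))
```

The returned point always has a first coordinate of zero, i.e. it lies in the plane P1.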
The coordinates of this point in the reference frame of Figure 14 are deduced with the following formulas. With:
- L1, the width of the image to be projected Ip (expressed in meters in a non-limiting example);
- H1, the height of the image to be projected Ip (expressed in meters in a non-limiting example);
- Int.y, the value y of the point of intersection;
- Int.z, the value z of the point of intersection.
FIG. 14 illustrates the definition of the ply and plz coordinates in meters in the image reference frame RI. Note that L1 and H1 are input parameters of the projection method MTH. This sub-step makes it possible to determine later whether the coordinates ply, plz belong to the image to be projected Ip (they must then be between 0 and 1) and therefore whether the central projection plr of the luminance point pl belongs to the image to be projected Ip. For this purpose, in a non-limiting embodiment, the image to be projected Ip and the coordinates of the projection plr thus calculated are normalized. This simplifies the test of membership in the image to be projected Ip. This gives a normalized reference frame IX (vertical axis), IY (horizontal axis), as shown in Figure 15. The values ply, plz of the projection plr are now between 0 and 1. In the illustrated example, the axes ly and Iz have become respectively the axes IX and -IY. This gives image dimensions H2, L2 between 0 and 1. FIG. 15 illustrates the definition of the ply and plz coordinates in unitless values in the image reference frame RI. It will be noted that the size (L1, H1) of the image to be projected Ip can be defined in this step 3c) or in the step with the transformation matrix M. Since the dimensions L1 and H1 (and thus L2, H2), the position and the rotation of the image to be projected Ip are known (they are input parameters of the projection method MTH), it is easy to determine, via its coordinates ply, plz, whether or not the projection plr belongs to the image to be projected Ip.
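The normalization and membership test above can be sketched as follows. The exact offsets and axis signs come from the patent's figures, which are not reproduced here, so this particular mapping is an assumption used only to show the test that ply and plz lie in [0, 1]:

```python
def normalized_projection(int_point, L1, H1):
    """Scale the intersection point Int by the image dimensions L1, H1
    (in meters) and report whether the projection plr falls inside the
    image to be projected Ip (both coordinates between 0 and 1)."""
    ply = int_point[1] / L1        # coordinate along the image width
    plz = int_point[2] / H1        # coordinate along the image height
    inside = 0.0 <= ply <= 1.0 and 0.0 <= plz <= 1.0
    return ply, plz, inside
```

Points whose normalized coordinates fall outside [0, 1] are simply skipped in the following steps.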
• 3d) Defining the coordinates lig, col of the corresponding pixel Pix
The definition of the line (lig) and column (col) coordinates of the pixel Pix is performed for each projection plr (luminance point pl) which belongs to the image to be projected Ip, namely which is located inside the rectangle L2*H2 of the image to be projected Ip, which was verified in step 3c-ii). Thus, if the projection plr belongs to the image to be projected Ip, the coordinates of the corresponding pixel Pix are calculated as follows:
lig = -plz*L2
col = ply*H2
With:
- lig, the line of the pixel;
- col, the column of the pixel;
- L2, the width of the image to be projected Ip (this time expressed in pixels);
- H2, the height of the image to be projected Ip (this time expressed in pixels);
- ply, the coordinate of the projection plr along the axis IX;
- plz, the coordinate of the projection plr along the axis IY.
• 3e) Correcting the intensity value of the corresponding intensity indicator pf
With the coordinates lig, col of the pixel Pix, the value of its color Co in the image that one wants to project can be retrieved. In a non-limiting example, the value is between 0 and 255. It is thus possible to go from white to black by way of several shades of gray, as illustrated in FIG. The intensity value Vi of the corresponding intensity indicator pf is then corrected. Note that this is possible because the light module ML is digitized. In a first non-limiting embodiment, the correction is carried out as follows: Vi = σ·Vi0·Co/255. With:
- Vi0, the initial intensity value of the intensity indicator pf of the light module;
- Co, the color of the corresponding pixel Pix; and
- σ, a maximum over-intensity factor.
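The pixel mapping of step 3d), as literally given above (lig = -plz*L2, col = ply*H2), can be sketched as:

```python
def pixel_coords(ply, plz, L2, H2):
    """Pixel (lig, col) of a projection plr belonging to the image Ip.
    L2, H2 are the image width and height in pixels; ply, plz are the
    normalized coordinates along IX and IY (the minus sign follows the
    axis swap lz -> -IY described in the text)."""
    lig = int(-plz * L2)
    col = int(ply * H2)
    return lig, col
```

The integer pixel coordinates then index the source image to retrieve the color Co used in step 3e).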
In a second non-limiting embodiment, the correction is carried out as follows: Vi = φ·Co, where φ is a luminance coefficient. This replaces the luminance, which allows the image to be displayed on a background independent of the basic light distribution. This step is performed for all the luminance points pl whose central projection plr belongs to the rectangle L2*H2 of the image to be projected Ip. Thus, the light module ML can project on the projection surface S the light beam Fx comprising the light rays Rx with the corrected intensity values Vi of the intensity indicators (step 3f, illustrated in Figure 1: PROJ(ML, Fx, pf, Vi)). This makes it possible to display the correct color Co for the intensity indicator considered. In this way, the image to be projected Ip is integrated in the light beam Fx of the light module ML (since it is produced by said light module ML itself) and is projected onto the projection surface S with the right colors. Thus, depending on the desired color Co of a pixel Pix, a determined correction factor is applied to the intensity value Vi of the corresponding intensity indicator pf. It is thus possible to obtain intensity indicators whose color does not depend on the luminous intensity of the light beam Fx itself. For example, the illustrated projected pyramid is of a homogeneous color. This would not be the case with a light source independent of the light module ML that would project said pyramid superimposed on said light beam: the pixels of the image would be more or less illuminated depending on the distribution of the light intensity of said light beam, and their color would thus vary according to that light intensity. Moreover, the fact that the image to be projected Ip is integrated into said light beam Fx and not superimposed on it makes it possible to obtain a better contrast of the image on the projection surface S than in the case of the use of an independent light source.
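The two correction embodiments above are simple per-pixel scalings; a sketch with σ and φ spelled out as sigma and phi (names illustrative):

```python
def corrected_intensity(vi0, co, sigma=1.0):
    """First embodiment: Vi = sigma * Vi0 * Co / 255, with Vi0 the initial
    intensity value of the intensity indicator pf, Co the pixel color
    (0..255) and sigma the maximum over-intensity factor."""
    return sigma * vi0 * co / 255.0

def corrected_intensity_lum(co, phi):
    """Second embodiment: Vi = phi * Co, with phi a luminance coefficient;
    the displayed image then no longer depends on the base distribution."""
    return phi * co
```

In the first embodiment a white pixel (Co = 255) keeps the indicator at its initial intensity (times sigma), while a black pixel (Co = 0) extinguishes it.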
In the case of an independent light source, the light beam also illuminates the projected image, which therefore appears lighter, with less distinct colors. It will be noted that since the projection of the image to be projected Ip depends on the observation position of the observer O, it is continuously updated as a function of the displacement of the observer O relative to the motor vehicle when the latter is outside the motor vehicle, and as a function of the movement of the motor vehicle itself when the observer O is inside the motor vehicle. In a non-limiting embodiment, the refresh rate of the calculations presented above is thus a function of the speed of movement of the observer relative to the motor vehicle, in the case of an outside observer: the higher the speed, the higher the refresh rate; the lower the speed, the lower the refresh rate. In another non-limiting embodiment, the refresh rate of the calculations presented above is constant. In a non-limiting example, the refresh period is one second. Thus, since these calculations are performed in real time, it is not necessary to have a database with pre-loaded images of the same graphic symbol corresponding to the various imaginable observation positions of the observer relative to the motor vehicle (when he is outside) or in the motor vehicle (when he is inside). The projection method MTH thus makes it possible to project one or more images Ip onto a projection surface S that is not only visible to an observer located inside or outside the motor vehicle but also comprehensible to him, since the projected image Ip is oriented in the direction of gaze of said observer O. Note that in the case of the projection of several images Ip at the same time, the combination of the different images with the light beam Fx is calculated before projecting the overall result. In a non-limiting embodiment, the projection method MTH is implemented by a lighting device DISP for a motor vehicle V.
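The text gives no formula for the speed-dependent refresh rate, only that it grows with the observer's speed relative to the vehicle; the policy below is therefore purely illustrative, with assumed gain and bounds:

```python
def refresh_rate_hz(relative_speed_mps, k=1.0, min_hz=1.0, max_hz=60.0):
    """Illustrative refresh-rate policy: proportional to the observer's
    speed relative to the vehicle, clamped between min_hz and max_hz
    (the constant-rate embodiment would simply return a fixed value)."""
    return max(min_hz, min(max_hz, k * relative_speed_mps))
```

A stationary observer is refreshed at the floor rate, a fast-moving one at the ceiling, matching the qualitative behavior described above.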
In a non-limiting embodiment, the lighting device DISP allows the realization of a photometric function such as a dipped beam, a high beam or a tail light. Thus, the lighting device is at the front of the motor vehicle or at the rear. The lighting device DISP is illustrated in FIG. 17. It comprises a processing unit PR and at least one light module ML. In non-limiting embodiments, the lighting device is a headlamp or a rear light. The processing unit PR is adapted to:
- detect an observation position PosO1 of an observer O in a light module reference frame RP (illustrated function DET_POS(O, PosO1, RP));
- calculate the observation position Pos02 of the observer's eye O in an image reference frame RI (illustrated function DET_POS(O, Pos02, RI)).
Said lighting device DISP is adapted to project said image Ip on said projection surface S according to said observation position Pos02 of the observer O in the image reference frame RI, said image Ip being integrated into said light beam Fx of the light module ML (illustrated function PROJ(Fx, Ip, S)).
For the projection of said image Ip on said projection surface S, the processing unit PR is furthermore adapted to:
- from a light intensity map CLUX of the light module ML comprising a plurality of intensity indicators pf, calculate a luminance map CLUM on the projection surface S resulting in luminance points pl (illustrated function CALC_CLUM(CLUX, S, pl));
- calculate the position PosL2 of each luminance point pl in the image reference frame RI (illustrated function CALC_POS(pl, PosL2, O, RI));
- from its position PosL2 and from the observation position Pos02 of the observer O in said image reference frame RI, define the coordinates ply, plz of the projection plr of each luminance point pl on the image plane P1 of said image to be projected Ip (illustrated function DEF_PLR(plr, P1, PosL2, Pos02));
- if said projection plr belongs to said image to be projected Ip, define the coordinates lig, col of the corresponding pixel Pix (illustrated function DEF_PIX(pl(lig, col), ply, plz));
- for each projection plr of a luminance point pl belonging to said image to be projected Ip, correct the intensity value Vi of the corresponding intensity indicator pf as a function of the color Co of the corresponding pixel Pix (illustrated function MOD_PF(pf, Vi, Pix, Co)).
For the projection of said image Ip on the projection surface S, the light module ML is adapted to project on the projection surface S the light beam Fx with the corrected intensity values Vi of the intensity indicators pf (illustrated function PROJ(ML, Fx, Vi, pf)). It will be noted that the processing unit PR is integrated in the light module ML or is independent of said light module ML. Of course, the description of the invention is not limited to the embodiments described above. Thus, in another non-limiting embodiment, a type B goniophotometer may also be used. Thus, in another non-limiting embodiment, the processing unit PR is offset relative to the lighting device DISP.
Thus, the step of calculating the observation position Pos02 in the image reference frame RI can be performed before or at the same time as the calculation of the luminance position PosL2. Thus, the motor vehicle V comprises one or more lighting devices DISP adapted to implement the projection method MTH described. Thus, the invention described has in particular the following advantages:
- it makes it possible to project an image comprising at least one graphic symbol, which makes it possible to improve the comfort and/or the safety of an observer who is inside or outside the motor vehicle;
- it makes it possible to project an image that is visible and comprehensible to a particular observer, because said projection depends on the position of said observer; the same projection method is thus applied to project an image comprehensible to the driver, or to project an image comprehensible to a pedestrian, or even to a driver of a following vehicle, for example;
- it makes it possible to deform the image to be projected Ip so that it is understandable by a particular observer; an anamorphosis of the image is thus created, said anamorphosis depending on the observation position of the observer O;
- the observation position of the observer in the image reference frame is a function of the position and the rotation of said image to be projected; thanks to the rotation, which depends in particular on a site angle, when the latter is adjusted in a particular way, the observer has the impression of seeing a 3D image;
- it enables the information to be projected to be integrated into the lighting beam Fx of the light module ML of the motor vehicle.
It is not necessary to have an additional dedicated light source;
- thus, unlike a state of the prior art which displays an image directly on the window of the rear light of the motor vehicle and which may appear too small at a distance, the invention allows an outside observer who is at a certain distance from said vehicle to see the image well, since the latter is projected according to the very position of the observer and on a projection surface that is not the window of a lamp of the motor vehicle; the dimensions of the image to be projected Ip are no longer limited to a small projection surface such as the lamp window;
- it makes it possible to propose a solution that can be used by a recipient of the information who can only see the front or the sides of the motor vehicle, for example, unlike a solution that displays an image on the tail lights of the motor vehicle;
- it makes it possible to propose a solution other than a display of image(s) on the rear lights of the motor vehicle;
- it makes it possible to propose a solution other than a projection of image(s) solely dedicated to the driver of the motor vehicle.
Claims (28)
1. Projection method (MTH) for a motor vehicle (V) of at least one image (Ip) on a projection surface (S) by means of a light module (ML) adapted to project a light beam (Fx), wherein said projection method (MTH) comprises the steps of: - detecting an observation position (PosO1) of an observer (O) in a light module reference frame (RP); - calculating the observation position (Pos02) of the observer (O) in an image reference frame (RI); - projecting said image (Ip) on said projection surface (S) according to said observation position (Pos02) of the observer (O) in said image reference frame (RI), said image (Ip) being integrated in said light beam (Fx) of the light module (ML).
2. Projection method (MTH) according to claim 1, wherein the projection of said image (Ip) comprises the sub-steps of: - from a light intensity map (CLUX) of the light beam (Fx) of the light module (ML) comprising a plurality of intensity indicators (pf), calculating a luminance map (CLUM) on the projection surface (S) resulting in luminance points (pl); - calculating the position (PosL2) of each luminance point (pl) in the image reference frame (RI); - from its position (PosL2) and from the observation position (Pos02) of the observer (O) in said image reference frame (RI), defining the coordinates (ply, plz) of the projection (plr) of each luminance point (pl) on the image plane (P1) of said image to be projected (Ip); - if said projection (plr) belongs to said image to be projected (Ip), defining the coordinates (lig, col) of the corresponding pixel (Pix); - for each projection (plr) of a luminance point (pl) belonging to said image to be projected (Ip), correcting the intensity value (Vi) of the corresponding intensity indicator (pf) as a function of the color (Co) of the corresponding pixel (Pix).
3. Projection method (MTH) according to claim 1 or claim 2, wherein the calculation of the luminance map (CLUM) on the projection surface (S) comprises the steps of: - performing a first calculation of the position (POSpf) of said intensity indicators (pf) on the projection surface (S) resulting in impact points (pi); - performing a second calculation of an illumination map (CECL) of said impact points (pi); - performing a third calculation of the luminance map (CLUM) of said impact points (pi) from the illumination map (CECL) resulting in said luminance points (pl).
4. Projection method (MTH) according to claim 3, wherein said first calculation is based on: - the position (POSpj) of the light module (ML); and - the direction (dir1) of said intensity indicators (pf).
5. Projection method (MTH) according to claim 3 or claim 4, wherein said second calculation is based on: - the calculated position (POSpf) of the intensity indicators (pf); - the luminous intensity (Ι(θ, δ)) of said intensity indicators (pf); and - the distance (dist1) between the light module (ML) and said impact points (pi).
6. Projection method (MTH) according to any one of the preceding claims, wherein the image to be projected is calculated according to the properties of the projection surface.
7. Projection method (MTH) according to the preceding claim, taken in combination with any one of the preceding claims 3 to 5, wherein said third calculation is based on: - the illumination (E) of said points of impact (pi); - a position vector (Roeil) between the position of an impact point (pi) of the illumination map (CECL) and the observation position (PosO1) of the observer (O); and - a light scattering function (d).
8. Projection method (MTH) according to any one of the preceding claims 2 to 7, wherein the calculation of the observation position (Pos02) of an observer (O) and of the position of a luminance point (pl) in the image reference frame (RI) is based on at least one transformation matrix (M) from the light module reference frame (RP) to said image reference frame (RI) which takes into account at least one of the following parameters: - the position (Poslp) of the image to be projected (Ip) in the light module reference frame (RP); - the rotation (Rotlp) of the image to be projected (Ip).
9. Projection method (MTH) according to any one of the preceding claims 2 to 8, wherein the definition of the coordinates (ply, plz) of a projection (plr) of a luminance point (pl) comprises the sub-steps of: - calculating the point of intersection (Int) between: - the line passing through the observation position (Pos02) of the observer (O) in said image reference frame (RI) and through the position (PosL2) of said luminance point (pl) in said image reference frame (RI); and - the image plane (P1) of the image to be projected (Ip); - determining the coordinates (ply, plz) of said intersection point (Int) from the dimensions (L1, H1) of said image to be projected (Ip).
10. Projection method (MTH) according to any one of the preceding claims, wherein the projection surface (S) is considered a Lambertian diffuser.
11. Projection method (MTH) according to any one of the preceding claims, wherein the projection surface (S) is a floor or a wall.
12. Projection method (MTH) according to any one of the preceding claims, wherein the observer (O) is outside the motor vehicle (V) or is located in said motor vehicle (V).
13. Projection method (MTH) according to any one of the preceding claims, wherein the image appears deformed in the light module reference frame (RP) and not deformed in the image reference frame (RI).
14.
Projection method (MTH) according to any one of the preceding claims, wherein the calculation of the observation position (Pos02) of the observer (O) in the image reference frame (RI) is a function of the position (Poslp) and the rotation (Rotlp) of the image to be projected (Ip) in the light module reference frame (RP), said rotation (Rotlp) being a function of a site angle (ε) greater than -90° and less than or equal to 0°.
15. Projection method (MTH) according to any one of claims 2 to 14, wherein the calculation of the position (PosL2) of each luminance point (pl) in the image reference frame (RI) is a function of the position (Poslp) and the rotation (Rotlp) of the image to be projected (Ip) in the light module reference frame (RP), said rotation (Rotlp) being a function of a site angle (ε) greater than -90° and less than or equal to 0°.
16. Projection method (MTH) according to claim 14 or claim 15, wherein the site angle (ε) is equal to 0°.
17. Projection method (MTH) according to claim 14 or claim 15, wherein the site angle (ε) is substantially equal to minus 35°.
18. Projection method (MTH) according to any one of the preceding claims, wherein the projection (plr) of said image to be projected (Ip) is of central type.
19. Projection method (MTH) according to any one of the preceding claims 2 to 18, wherein the correction of the intensity value (Vi) of the intensity indicator (pf) is performed as follows: Vi = σ.Vi0*Co/255, with Vi the corrected intensity value, Vi0 the initial intensity value of the intensity indicator (pf) of the light module (ML), Co the color of the corresponding pixel (Pix), and σ a maximum over-intensity factor.
20. Projection method (MTH) according to any one of the preceding claims 2 to 18, wherein the correction of the intensity value (Vi) of the intensity indicator (pf) is performed as follows: Vi = φ.Co, with Vi the corrected intensity value, φ a luminance coefficient, and Co the color of the corresponding pixel (Pix).
21. Projection method (MTH) according to any one of the preceding claims, wherein the observation position (PosO1) of an observer (O) is calculated by means of a camera.
22. Projection method (MTH) according to any one of the preceding claims, wherein the light intensity map (CLUX) is stored in memory.
23. Projection method (MTH) according to any one of the preceding claims 2 to 22, wherein the projection of said image (Ip) on the projection surface (S) further comprises the sub-step of projecting, by means of said light module (ML), on the projection surface (S) the light beam (Fx) with the corrected intensity values (Vi) of the intensity indicators (pf).
24. Lighting device (DISP) for a motor vehicle (V) comprising a processing unit (PR) and a light module (ML) adapted to project a light beam (Fx), wherein: - said processing unit (PR) is adapted to: - detect an observation position (PosO1) of an observer (O) in a light module reference frame (RP); - calculate the observation position (Pos02) of the observer (O) in an image reference frame (RI); - said lighting device (DISP) is adapted to project said image (Ip) onto said projection surface (S) as a function of said observation position (Pos02) of the observer (O) in the image reference frame (RI), said image (Ip) being integrated in said light beam (Fx) of the light module (ML).
25.
Lighting device (DISP) according to the preceding claim, wherein, for the projection of said image (Ip) on the projection surface (S), said processing unit (PR) is further adapted to: - from a light intensity map (CLUX) of the light beam (Fx) of the light module (ML) comprising a plurality of intensity indicators (pf), calculate a luminance map (CLUM) on the projection surface (S) resulting in luminance points (pl); - calculate the position (PosL2) of each luminance point (pl) in the image reference frame (RI); - from its position (PosL2) and from the observation position (Pos02) of the observer (O) in said image reference frame (RI), define the coordinates (ply, plz) of the projection (plr) of each luminance point (pl) on the image plane (P1) of said image to be projected (Ip); - if said projection (plr) belongs to said image to be projected (Ip), define the coordinates (lig, col) of the corresponding pixel (Pix); - for each projection (plr) of a luminance point (pl) belonging to said image to be projected (Ip), correct the intensity value (Vi) of the corresponding intensity indicator (pf) as a function of the color (Co) of the corresponding pixel (Pix).
26. Lighting device (DISP) according to claim 24 or claim 25, wherein, for the projection of said image (Ip) on the projection surface (S), the light module (ML) is adapted to project on the projection surface (S) the light beam (Fx) with the corrected intensity values (Vi) of the intensity indicators (pf).
27. Lighting device according to any one of claims 24 to 26, wherein the lighting device is a headlamp or a rear light.
28. Motor vehicle comprising a lighting device according to any one of claims 24 to 27.
Patent family:
Publication number | Publication date
JP2018530467A|2018-10-18|
CN108136954A|2018-06-08|
FR3041110B1|2018-03-16|
JP6936791B2|2021-09-22|
US20180251064A1|2018-09-06|
US10173577B2|2019-01-08|
EP3350018A1|2018-07-25|
CN108136954B|2021-06-11|
WO2017046105A1|2017-03-23|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
WO2000035200A1|1998-12-07|2000-06-15|Universal City Studios, Inc.|Image correction method to compensate for point of view image distortion|
WO2002093483A1|2001-05-15|2002-11-21|Koninklijke Philips Electronics N.V.|Method and apparatus for adjusting an image to compensate for an offset position of an observer|
FR2919934A1|2007-08-09|2009-02-13|Univ D Auvergne Clermont 1 Eta|PANORAMIC PROJECTION DEVICE AND METHOD USED THEREIN|
WO2018189232A1|2017-04-13|2018-10-18|Valeo Vision|Luminous lane-change signalling device for a motor vehicle|
JP3381351B2|1993-12-24|2003-02-24|Nissan Motor Co., Ltd.|Ambient situation display device for vehicles|
JPH1055446A|1996-08-09|1998-02-24|Omron Corp|Object recognizing device|
US7289655B2|2001-08-29|2007-10-30|Seiko Epson Corporation|Device for inspecting illumination optical device and method for inspecting illumination optical device|
JP2004177920A|2002-08-09|2004-06-24|Olympus Corp|Projection observation device|
US6709141B1|2003-01-24|2004-03-23|Sebastian Sisti|Automotive warning light|
JP3969363B2|2003-07-30|2007-09-05|Casio Computer Co., Ltd.|Projector and projection image correction method for projector|
TWI229235B|2003-12-01|2005-03-11|K Laser Technology Inc|Holographic stereogram device|
JP4720650B2|2006-06-30|2011-07-13|Aisin AW Co., Ltd.|Road surface projection apparatus and road surface projection method|
JP5262057B2|2006-11-17|2013-08-14|Toyota Central R&D Labs., Inc.|Irradiation device|
JP2009105679A|2007-10-23|2009-05-14|Canon Inc|Image processing apparatus and method of controlling the same|
JP5056445B2|2008-02-04|2012-10-24|Toyota Central R&D Labs., Inc.|Vehicle lighting device|
US20110187844A1|2008-09-12|2011-08-04|Kabushiki Kaisha Toshiba|Image irradiation system and image irradiation method|
US9230419B2|2010-07-27|2016-01-05|Rite-Hite Holding Corporation|Methods and apparatus to detect and warn proximate entities of interest|
DE102010034853A1|2010-08-18|2012-02-23|GM Global Technology Operations LLC|Motor vehicle with digital projectors|
KR101312630B1|2011-07-08|2013-10-04|SL Corporation|Automotive lamp and controlling method for the same|
KR101338075B1|2011-12-14|2013-12-06|Hyundai Motor Company|Method for warning pedestrian using laser beam|
EP2667325A1|2012-05-22|2013-11-27|Connaught Electronics Ltd.|Method for determining an analysis region in a camera image, camera system and motor vehicle with a camera system|
US8733939B2|2012-07-26|2014-05-27|Cloudcar, Inc.|Vehicle content projection|
DE102012016255A1|2012-08-16|2014-02-20|GM Global Technology Operations, LLC|A method of warning a subsequent motor vehicle|
JP5696701B2|2012-08-31|2015-04-08|Denso Corporation|Anti-pedestrian notification device|
CN103019643B|2012-12-30|2015-12-23|Ocean University of China|Plug-and-play automatic calibration and tiled display method for large-screen projection|
WO2015032795A2|2013-09-03|2015-03-12|Jaguar Land Rover Limited|System for imaging|
JP6214389B2|2013-12-26|2017-10-18|Koito Manufacturing Co., Ltd.|Vehicle lighting|
US9481287B2|2014-01-21|2016-11-01|Harman International Industries, Inc.|Roadway projection system|
CN104103081A|2014-07-14|2014-10-15|Xidian University|Virtual multi-camera target tracking video material generation method|
CN104835159A|2015-05-07|2015-08-12|Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences|Digital image correction method for continuous variable-focal-length optical imaging system|
US9616802B1|2015-11-02|2017-04-11|AISIN Technical Center of America, Inc.|Apparatus and method to visually communicate with a vehicle|
KR20170058188A|2015-11-18|2017-05-26|LG Electronics Inc.|Driver Assistance Apparatus and Vehicle Having The Same|
JP6418182B2|2016-03-07|2018-11-07|Toyota Motor Corporation|Vehicle lighting device|
JP6500814B2|2016-03-07|2019-04-17|Toyota Motor Corporation|Vehicle lighting system|
CN108349429B|2015-10-27|2021-03-23|Koito Manufacturing Co., Ltd.|Lighting device for vehicle, vehicle system and vehicle|
US10829036B2|2018-04-18|2020-11-10|Maxell, Ltd.|Road sign projector and vehicle lamp|
KR20200064182A|2018-11-16|2020-06-08|Hyundai Mobis Co., Ltd.|Control system of autonomous vehicle and control method thereof|
US11120691B2|2019-12-02|2021-09-14|Denso Corporation|Systems and methods for providing warnings to surrounding vehicles to avoid collisions|
CN111267724A|2020-03-31|2020-06-12|Dongfeng Liuzhou Motor Co., Ltd.|Method and system for protecting and reminding vehicle door opening|
US11230223B1|2020-11-30|2022-01-25|Hyundai Mobis Co., Ltd.|Apparatus for emitting road surface information and method thereof|
Legal status:
2016-09-28 | PLFP | Fee payment | Year of fee payment: 2
2017-03-17 | PLSC | Publication of the preliminary search report | Effective date: 20170317
2017-09-29 | PLFP | Fee payment | Year of fee payment: 3
2018-09-28 | PLFP | Fee payment | Year of fee payment: 4
2019-09-30 | PLFP | Fee payment | Year of fee payment: 5
2020-09-30 | PLFP | Fee payment | Year of fee payment: 6
2021-09-30 | PLFP | Fee payment | Year of fee payment: 7
Priority:
Application number | Filing date | Patent title
FR1558592A|FR3041110B1|2015-09-14|2015-09-14|PROJECTION METHOD FOR A MOTOR VEHICLE OF AN IMAGE ON A PROJECTION SURFACE|
FR1558592|2015-09-14|
FR1558592A|FR3041110B1|2015-09-14|2015-09-14|PROJECTION METHOD FOR A MOTOR VEHICLE OF AN IMAGE ON A PROJECTION SURFACE|
EP16763861.8A|EP3350018A1|2015-09-14|2016-09-13|Projection method for a motor vehicle, for projecting an image onto a projection surface|
PCT/EP2016/071596|WO2017046105A1|2015-09-14|2016-09-13|Projection method for a motor vehicle, for projecting an image onto a projection surface|
JP2018513638A|JP6936791B2|2015-09-14|2016-09-13|Automotive projection method for projecting an image onto a projection plane|
CN201680062282.5A|CN108136954B|2015-09-14|2016-09-13|Projection method for a motor vehicle for projecting an image onto a projection surface|
US15/759,636|US10173577B2|2015-09-14|2016-09-13|Projection method for a motor vehicle, for projecting an image on to a projection surface|