METHOD FOR RECOGNIZING A FALSE PAPILLARY IMPRESSION BY LIGHTING STRUCTURING
Patent abstract:
The invention relates to a method for determining whether or not a papillary impression consists of living human tissue, using a papillary impression sensor (110) comprising, superimposed, a contact surface (106), a matrix optical sensor (112), and lighting means (111) formed by a plurality of lighting devices (1110) parallel to each other. The method according to the invention comprises the following steps: illumination of the papillary impression by the lighting means, the lighting devices together forming, on the contact surface, at least one illumination figure that is uniform along an axis extending from one side to the other of a detection surface of the matrix optical sensor, and acquisition of an image by the matrix optical sensor, these steps being implemented at least once; in each image, selection of the pixels corresponding to the valleys of the imprint, or selection of the pixels corresponding to the peaks of the imprint; and, from the selected pixels, extraction of an optical characteristic defining the response, to the at least one illumination, of the material constituting the papillary impression, and use of this optical characteristic to determine the values of at least two optical coefficients characteristic of the imprint. It is thus possible to characterize an imprint everywhere on its surface.
Publication number: FR3046277A1
Application number: FR1563180
Filing date: 2015-12-23
Publication date: 2017-06-30
Inventors: Jean-Francois Mainguet; Jerome Boutet; Joel Yann Fourre
Applicants: Commissariat a l'Energie Atomique (CEA); Safran SA; Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA)
IPC main class:
Patent description:
METHOD FOR RECOGNIZING A FALSE PAPILLARY IMPRESSION BY LIGHTING STRUCTURING

DESCRIPTION

TECHNICAL FIELD

The invention relates to the field of papillary impression sensors. It relates more particularly to a fraud detection method. A papillary impression designates an imprint related to the particular folds of the skin, in particular a fingerprint, but also a palmar, plantar, or phalangeal imprint. Such an imprint forms an effective means of identifying a person. This means of identification may be supplemented by fraud detection, in order to detect when a true papillary impression is replaced by an imitation. In other words, it is a matter of recognizing whether a papillary impression is a true imprint, consisting of living human tissue, or a false imprint, which is not made of living human tissue (for example latex, rubber, or gelatin).

STATE OF THE PRIOR ART

Various methods are known in the prior art for recognizing a false papillary impression, exploiting the specific optical properties of living human tissue, in particular its spectral response. Patent application WO 2008/050070 describes an example of such a method. A disadvantage of this method is that it offers a characterization of only small specific locations on the imprint. These locations may be known to a fraudster. By covering his finger with an imitation of a fingerprint everywhere except at these small locations, he may mislead the fraud detection. An object of the present invention is to provide a method and a device overcoming this disadvantage of the prior art.

STATEMENT OF THE INVENTION

This object is achieved with a method for determining whether or not a papillary impression is made of living human tissue, the impression being in direct physical contact with a contact surface of a papillary impression sensor comprising, superimposed beneath the contact surface, a matrix optical sensor and lighting means formed by a plurality of lighting devices parallel to each other.
The method according to the invention comprises the following steps: illumination of the papillary impression by the lighting means, the lighting devices forming, on the contact surface, at least one illumination figure that is uniform along an axis extending from one side to the other of a detection surface of the matrix optical sensor, and acquisition of an image by the matrix optical sensor, these steps being implemented at least once; in each acquired image, selection of the pixels corresponding to the valleys of the imprint, or selection of the pixels corresponding to the peaks of the imprint; and, from the selected pixels, extraction of an optical characteristic defining the response, to the at least one illumination, of the material constituting the papillary impression, and use of this optical characteristic to determine the values of at least two optical coefficients characteristic of the imprint. This method does not require the emission of an additional light signal, other than the one useful for imaging the papillary impression. The characterization of the papillary impression implements so-called structured lighting, that is to say lighting in which only certain lighting devices are lit. From this light, the papillary impression can be characterized in order to deduce whether or not it consists of living human tissue. This characterization does not require the use of multiple illumination wavelengths. Each photodetector of the matrix optical sensor can contribute to the detection of fraud, which offers an excellent fraud-detection resolution without increasing the size of the apparatus necessary for implementing the method. The fraud detection thus implemented can therefore characterize the entire surface of the impression in direct physical contact with the contact surface of the papillary impression sensor. The method according to the invention thus offers a consolidated fraud detection, in comparison with the methods of the prior art.
Preferably, each illumination figure is uniform along the axis of the width of the detection surface of the matrix optical sensor. Advantageously, the at least two optical coefficients characteristic of the imprint comprise an absorption coefficient μA and a reduced scattering coefficient μs'. μA is expressed in mm⁻¹ or cm⁻¹, and corresponds to an optical intensity absorbed per unit length in a material. μs' is expressed in mm⁻¹ or cm⁻¹, and corresponds to an optical intensity scattered per unit length in the material: μs' = (1 − g) × μs, where μs is the scattering coefficient and g is the anisotropy coefficient. The method according to the invention may furthermore comprise a step of comparison between said values and reference data, to distinguish values associated with a papillary impression made of living human tissue from values associated with a papillary impression which is not made of living human tissue. Each illumination figure advantageously extends above the detection surface of the matrix optical sensor, and consists of one or more light strips parallel to the pixel lines of the matrix optical sensor. Preferably, the illumination of the papillary impression is implemented by lighting means arranged above the matrix optical sensor, such that each lighting device consists of an organic light-emitting diode. The values of the optical coefficients characteristic of the papillary impression, in particular the absorption coefficient μA and the reduced scattering coefficient μs', are advantageously determined by means of a predictive model of the response of an imprint to a known illumination function, this model being a function of said characteristic optical coefficients, by minimizing a difference between this model and an experimental measurement of the response of the papillary impression to this same illumination, obtained with the aid of the selected pixels.
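As a worked example of the relation μs' = (1 − g) × μs stated above, a minimal sketch; the function name and sample values are illustrative and not taken from the patent:

```python
def reduced_scattering(mu_s: float, g: float) -> float:
    """Reduced scattering coefficient mu_s' = (1 - g) * mu_s.

    mu_s : scattering coefficient (mm^-1)
    g    : anisotropy coefficient (dimensionless, -1 < g < 1)
    """
    return (1.0 - g) * mu_s

# Illustrative order of magnitude for skin in the red/near infrared:
# mu_s ~ 10 mm^-1 and g ~ 0.9 give mu_s' ~ 1 mm^-1.
print(reduced_scattering(10.0, 0.9))
```

An isotropically scattering medium (g = 0) has μs' = μs; strongly forward-scattering tissue (g close to 1) has a much smaller reduced coefficient.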
As a variant, the values of the optical coefficients characteristic of the papillary impression, in particular the absorption coefficient μA and the reduced scattering coefficient μs', can be determined using a set of curves characteristic of the response of an imprint to a known illumination function, each associated with known values of said characteristic optical coefficients, by searching for the curve most similar to a corresponding experimental curve obtained with the aid of the selected pixels. The predictive model, or the set of characteristic curves, can be obtained by a convolution of an illumination function associated with the at least one illumination figure with the impulse response of a medium whose values of said characteristic optical coefficients are known. Preferably, at each illumination step of the papillary impression, the lighting means together form at least one illumination figure defined by an illumination function that is periodic along the axis of the length of the detection surface of the matrix optical sensor. The lighting means may together form an illumination figure defined by a spatially periodic illumination function of the slot (square-wave) type. In a variant, at each illumination step, the lighting means together form illumination figures that together define a periodic function of the sine type. The steps of illuminating the papillary impression and acquiring an image may then be carried out at least three times, for the same frequency of the sine-type illumination function and for three different phase shifts thereof. According to another variant, at each illumination step, the lighting means together form an illumination figure defined by a step-type illumination function. According to another variant, at each illumination step, the lighting means together form an illumination figure defined by a thin-line illumination function.
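The sine-type variant with three phase shifts admits a standard demodulation: the amplitude of the imprint's response at the illumination frequency can be recovered pixel by pixel from the three images. The patent does not spell out the formula, so the scheme below (common in spatial-frequency-domain imaging, with phase shifts of 0, 2π/3 and 4π/3) is an assumption:

```python
import numpy as np

def demodulate_three_phase(i0, i1, i2):
    """AC amplitude of the response, from three images acquired under
    sine illumination with phase shifts 0, 2*pi/3 and 4*pi/3."""
    i0, i1, i2 = (np.asarray(a, dtype=float) for a in (i0, i1, i2))
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i0 - i1) ** 2 + (i1 - i2) ** 2 + (i2 - i0) ** 2)

# Synthetic check: a pure sinusoid of amplitude A demodulates to A.
x = np.linspace(0.0, 1.0, 200)
A, f = 0.5, 3.0
imgs = [1.0 + A * np.sin(2 * np.pi * f * x + p)
        for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
ac = demodulate_three_phase(*imgs)
```

The three-image combination cancels both the DC offset and the unknown local phase, which is why at least three phase shifts are acquired.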
A series of images can be acquired, associated with different positions of the lit lighting device(s) together forming an illumination figure defined by a thin-line or step-type illumination function. Preferably, at each illumination step, the lighting devices are switched on and off so as to successively form different illumination figures, a scanning frequency from one illumination figure to the next being synchronized with a scanning frequency of the integration of the pixel lines of the matrix optical sensor. Said synchronization can be implemented so as to achieve an illumination of the papillary impression by a sine-type illumination function, the values of said coefficients characteristic of the impression then being determined using two images acquired by the matrix optical sensor and associated with two distinct phase values of said illumination function. As a variant, said synchronization is implemented so as to acquire images in which each line of pixels is associated with the same distance to a particular point of the illumination figures. The invention also relates to a system for implementing a method according to the invention, comprising: a papillary impression sensor comprising, superimposed, a contact surface for applying the imprint, a matrix optical sensor, and lighting means formed by a plurality of lighting devices parallel to each other; control means, configured to turn the lighting devices on and off according to the at least one step of illuminating the papillary impression; pixel selection means, configured to receive the at least one image acquired by the matrix optical sensor and to extract the pixels corresponding to the valleys of the imprint, or the pixels corresponding to the peaks of the imprint; and means for determining the values of the characteristic optical coefficients from the selected pixels.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be better understood on reading the description of exemplary embodiments given purely by way of indication and in no way limiting, with reference to the appended drawings, in which:
FIG. 1A schematically illustrates a system specially adapted to the implementation of a first embodiment of the method according to the invention;
Figures 1B and 1C schematically illustrate two variants of lighting means according to the invention;
Figure 1D schematically illustrates the steps of a method according to the invention;
FIG. 2 illustrates an example of a papillary impression sensor especially adapted to the implementation of a method according to the invention;
FIGS. 3A to 3C illustrate a pixel selection step according to the invention;
FIG. 4 illustrates a first embodiment of the steps of illuminating the matrix optical sensor and acquiring an image;
Fig. 5 illustrates a step of comparison with reference data;
FIG. 6 illustrates a second embodiment of the steps of illuminating the matrix optical sensor and acquiring an image;
FIGS. 7A and 7B illustrate a third embodiment of the steps of illuminating the matrix optical sensor and acquiring an image;
FIG. 8 illustrates a fourth embodiment of the steps of illumination and acquisition of an image;
FIG. 9 illustrates a fifth embodiment of the steps of illumination and acquisition of an image; and
Fig. 10 illustrates a sixth embodiment of the steps of illumination and acquisition of an image.

DETAILED PRESENTATION OF PARTICULAR EMBODIMENTS

Figures 1A to 1D schematically illustrate a method according to the invention and a system 100 specially adapted to the implementation of this method. To facilitate understanding of the invention, the system 100, adapted to distinguish a true papillary impression from an imitation, is described first.
In the following, we will consider, by way of example and without limitation, that the imprint is a fingerprint. The system 100 comprises a fingerprint sensor 110 constituted by: a contact surface 106, on which, in use, the user places his finger 300 so that the skin, in other words the tissues, or at least the skin of the ridges of the papillary impression, is in direct physical contact with said surface 106; a matrix optical sensor 112 formed of a plurality of photodetectors, for example PIN diodes (P-type / Intrinsic / N-type), organic photodetectors (so-called OPDs), phototransistors, or any other photosensitive element; and lighting means 111. The matrix optical sensor 112 and the lighting means 111 are superimposed under the contact surface 106. If necessary, whichever of the lighting means 111 and the matrix optical sensor 112 is situated above the other allows a sufficient quantity of light to pass through in order to carry out the imaging function. This corresponds, for example, to a transmission coefficient of at least 10% at the central emission wavelength of the lighting means. The matrix optical sensor comprises photodetectors distributed in rows and columns, for example in a square grid. The extent of the rows defines the width L1 of the detection surface 125 of the matrix optical sensor. The extent of the columns defines the length L2 of the detection surface 125 of the matrix optical sensor. The width L1 is aligned with the axis (OY). The length L2 is aligned with the axis (OX). The largest side of the matrix optical sensor can indifferently be the length or the width. The detection surface is the surface over which the photodetectors extend, parallel to the plane (XOY). The lighting means 111 are configured to emit a light signal 121 in the direction of a fingerprint located on the finger 300 (which is in direct physical contact with the contact surface 106).
This signal is backscattered by the finger and returns to the fingerprint sensor 110 in the form of a backscattered signal 122, received by the matrix optical sensor 112. The lighting means 111 consist of a plurality of lighting devices 1110. The fingerprint sensor 110 comprises, for example, more than ten lighting devices 1110, preferably several tens. Each lighting device 1110 extends in one piece over more than one third of the width L1 of said detection surface. In addition, the lighting devices together extend in one or two series of patterns parallel to each other, distributed along the length L2 of the detection surface. The lighting devices 1110 are therefore coplanar, distributed here over a surface of dimensions greater than or equal to those of the detection surface 125 of the matrix optical sensor. Thus, a papillary impression can be illuminated from locations distributed everywhere over this surface. In Fig. 1B, the lighting devices 1110 together extend in a single series of patterns parallel to each other. They each extend substantially over the entire width L1 of the detection surface 125 of the matrix optical sensor, for example over at least 90% of this width. In FIG. 1B, they even extend beyond this detection surface, which limits edge effects in the acquired images. In other words, they extend below (or above) said detection surface, protruding on both sides of the latter in a plane (XOY). Here, the lighting devices all have the same extent along the axis (OY). Preferably, they also have the same dimensions along (OX) and (OZ). They extend in patterns parallel to each other, here in strips parallel to each other. Alternatively, each pattern has a trapezoidal, sinusoidal, or zigzag shape, or any other shape elongated along (OY). In all these variants, the lighting devices provide illumination in parallel strips.
The patterns formed by the lighting devices here extend parallel to one another and parallel to the axis (OY), that is to say parallel to the pixel lines of the matrix optical sensor. In a variant, these patterns may extend parallel to one another and slightly inclined relative to the axis (OY) and to the pixel lines of the matrix optical sensor. This inclination, of less than 8°, makes it possible in some cases to improve the resolution of the images obtained. In the following, we detail examples without inclination; those skilled in the art will know how to make similar variants with a non-zero inclination. The lighting devices are here distributed regularly along (OX), their ends being aligned on an axis parallel to (OX). In particular, they are regularly distributed along the length L2 of the detection surface 125 of the matrix optical sensor, with a repetition pitch P1 equal to the pixel pitch of the matrix optical sensor along the axis (OX), for example 25 μm or 50 μm. As a variant, the repetition pitch P1 of the lighting devices is constant and distinct from the pixel pitch P2 of the matrix optical sensor along (OX). In particular, the repetition pitch P1 may be an integer multiple of P2. For example, P2 is of the order of 50 μm, or even 25 μm, and P1 is between 10 and 20 times greater, for example of the order of 500 μm, and preferably less than one millimeter. Each lighting device can then extend above or below several rows of pixels of the matrix optical sensor. In operation, not all the lighting devices are lit simultaneously, and only the pixels of the matrix optical sensor that are not located directly beneath a lit lighting device may be processed. The lighting means 111 are connected to control means 130, configured to turn each of the lighting devices 1110 on and off independently of the others.
The control means 130 are configured to form at least one illumination figure, corresponding to a figure formed at a time t by the lighting devices 1110, of which at least one is lit and at least one is extinguished. An illumination figure corresponds to the spatial distribution of the illumination provided at a time t by the lighting means, at the level of the contact surface 106. Here, each illumination figure extends above the matrix optical sensor, over the same width L1 as its detection surface. Each illumination figure is composed of one or more light bands extending over the entire width L1. When the lighting devices are sufficiently close to the contact surface, for example less than one millimeter away, the distances between two light bands correspond to distances between lit lighting devices. An illumination figure may be a binary image in which a high level corresponds to a lit lighting device and a low level to an extinguished lighting device. Alternatively, it may have amplitude variations, according to a light intensity emitted by each lighting device. Each illumination figure is uniform along (OY), or at least uniform along an axis slightly inclined (8° or less) relative to (OY). In other words, each illumination figure is uniform along the axis of elongation of the lighting devices. In the following, we detail various examples of illumination figures that are uniform along (OY). This uniformity is achieved particularly easily, thanks to the parallel-strip arrangement of the lighting devices. In particular, it is not necessary to make matrix lighting means having, for example, more than 3 lines and more than 3 columns. In other words, the lighting means according to the invention make it possible to offer structured lighting without requiring bulky connection or control devices.
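The binary illumination figures described above — uniform along (OY), structured along (OX) — can be sketched as a small array; the dimensions, period, and duty cycle below are illustrative, not values from the patent:

```python
import numpy as np

def slot_illumination_figure(n_devices, n_cols, period, duty=0.5):
    """Binary illumination figure: periodic light bands along (OX)
    (one row per lighting device), each band uniform over the whole
    width (OY) (columns).

    period : repetition period of the bands, in lighting devices
    duty   : fraction of lit devices per period
    """
    rows = np.arange(n_devices)
    lit = (rows % period) < duty * period      # which devices are on
    # Uniformity along (OY): every column is identical.
    return np.repeat(lit[:, None], n_cols, axis=1).astype(np.uint8)

fig = slot_illumination_figure(12, 8, period=6)  # 3 on / 3 off
```

Because each row is either fully lit or fully dark, the figure is uniform along (OY) by construction, matching the parallel-strip geometry of the lighting devices.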
An illumination step designates the illumination of the impression by the lighting means 111, according to a single illumination figure or according to a succession of several illumination figures different from each other. For each illumination step, an image is acquired using the matrix optical sensor 112. The illumination figure, or the succession of several illumination figures associated with an acquired image, is defined by an illumination function. At each illumination step, the lighting devices are not all lit simultaneously during the acquisition of the same image. As detailed in the following, some lighting devices remain off, and others on, throughout the acquisition of the image. As a variant, the lighting devices are in the on or off state depending on the instant considered within the acquisition period of the image. The fingerprint has a relief pattern, defining on it peaks and valleys. Pixel selection means 140 receive an image acquired by the matrix optical sensor. They are configured to select, in this image, the pixels corresponding to the valleys of the imprint, or the pixels corresponding to the peaks of the imprint. This step will not be described at greater length here, because it is known to those skilled in the art. The pixels corresponding to the peaks of the imprint may be called coupled pixels. The pixels corresponding to the valleys of the imprint may be called uncoupled pixels. In the following, one or the other series of selected pixels is used, for example the pixels corresponding to the peaks of the imprint. Depending on the characteristics of the tissues, for example dry or oily skin, one or the other series of selected pixels may be preferred. The selected pixels are received by calculation means 150, arranged to extract one or more optical characteristics typical of the response of the imaged imprint to the illumination provided by the lighting means 111.
This optical characteristic preferably corresponds to a response of the material constituting the imprint. This response can be considered in Fourier space. It is, for example, an optical transfer function, in particular a line spread function, a step (edge) spread function, a contrast transfer function, etc. It can also be a modulation transfer function. As a variant, this response is considered in real space, and it is, for example, simply an intensity profile as a function of the distance to a light band of the illumination figure. The calculation means 150 are also configured to use the optical characteristic thus extracted in order to determine the respective values of at least two optical coefficients characteristic of the material constituting the imaged fingerprint. These optical coefficients are advantageously an absorption coefficient μA and a reduced scattering coefficient μs', as defined in the statement of the invention. The determination of the values of these coefficients uses a model 151. This is in particular a predictive model of the response of the fingerprint to the illumination provided by the lighting means 111. This model depends on the values of the optical coefficients μA and μs': it is a predictive model of the response of a material whose coefficients μA and μs' are known. The calculation means 150 are connected to means 160 for comparing said values of the optical coefficients characteristic of the fingerprint with reference data 161, in order to distinguish values associated with a fingerprint made of living human tissue from values associated with a fingerprint made of materials that are neither living nor human. As a variant, the model 151, used to calculate the values of μA and μs', is a set of experimental or theoretical responses of the same type as the extracted optical characteristic, each response being directly associated with known values of the optical coefficients μA and μs'
and with information on the material having these values of the optical coefficients μA and μs'. In particular, each response is associated with the information that said material is, or is not, living human tissue. In this case, the final comparison step is not always necessary. Figure 1D schematically illustrates the steps of a method according to the invention. The imprint is illuminated by the lighting means 111 according to a single illumination figure or a succession of illumination figures (step 171i). During this illumination step, an image is acquired using the matrix optical sensor 112 (step 172i). These two steps are repeated at least once, for example twice, to form a step 170 of illuminating the imprint and acquiring images. The several acquisitions are made during a very short period of time, during which the finger is immobile or almost immobile (for example a maximum displacement of the finger of less than 50 micrometers). If necessary, it can be checked that there has been no movement and, if need be, a movement zone can be rejected, or even all the raw images acquired after a predetermined time. In step 181, the pixels associated with the peaks of the imprint are selected in each acquired image. In a variant, the pixels associated with the valleys of the imprint are selected in each acquired image. This selection is advantageously carried out using an image in which the entire imprint is sufficiently illuminated. For example, such an image is acquired while all the lighting devices are turned on. This image is acquired during the same very short time interval as described above. In this image, all the pixels associated with the peaks of the imprint, and all those associated with the valleys of the imprint, can easily be identified. The location of these pixels is the same in all the other images acquired by the sensor in step 170.
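Step 181 can be sketched as a simple threshold on the fully-lit image; real ridge/valley segmentation is more elaborate, and the assumption here that coupled (peak) pixels appear darker than valley pixels in the fully-lit image is illustrative only:

```python
import numpy as np

def select_pixels(full_image, structured_image, want="peaks"):
    """Select, in a structured-lighting image, the pixels that the
    fully-lit image classifies as peaks (ridges) or valleys.

    Assumption: peaks (skin coupled to the sensor) are darker than
    valleys in the fully-lit image; the mean is used as threshold.
    Unselected pixels are set to NaN.
    """
    full = np.asarray(full_image, dtype=float)
    thresh = full.mean()
    mask = full < thresh if want == "peaks" else full >= thresh
    out = np.full(full.shape, np.nan)
    out[mask] = np.asarray(structured_image, dtype=float)[mask]
    return out, mask

full = np.array([[0.2, 0.8], [0.9, 0.1]])
structured = np.array([[5.0, 6.0], [7.0, 8.0]])
vals, mask = select_pixels(full, structured, want="peaks")
```

The same mask, computed once from the fully-lit image, is then reused on every structured-lighting image of step 170, as the passage above describes.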
It is then easy to select the pixels associated with the peaks or the valleys of the impression in the other images acquired by the sensor, even where the imprint is poorly lit because of the structured lighting. Instead of acquiring an image with all the lighting devices turned on, such an image can be computed by combining the several images acquired for the different combinations of at least one lit and at least one extinguished lighting device. In the next step 182, the optical characteristic defining the response of the fingerprint to the illuminations implemented in step 170 is extracted, and this optical characteristic is used to determine the values of the characteristic optical coefficients. In step 183, the set of values of the characteristic optical coefficients is compared with the reference data, to determine whether or not the imaged fingerprint is a true imprint. In other words, a classification is made, based for example on a statistical approach of the "scoring" type and on prior learning on a database consisting of authentic human tissues and of materials typically used to make fraudulent fingerprints. FIG. 1C illustrates a variant of the lighting devices of FIG. 1B, in which the lighting devices are divided into two series. A first series of lighting devices 1110A extends on the left-hand side of the fingerprint sensor. It consists of lighting devices which each extend in one piece over approximately half of the detection surface 125 of the matrix optical sensor. They are distributed along the length L2 of the detection surface 125. They extend in particular above said detection surface, from the middle of the latter in the direction of the width L1, to the edge of this detection surface and even beyond. A second series of lighting devices 1110B extends on the right-hand side of the fingerprint sensor.
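The comparison of step 183 can be sketched as a nearest-centroid score in the (μA, μs') plane; the reference centroids below are purely illustrative stand-ins for values learned beforehand on a real database:

```python
import math

# Illustrative reference centroids in (mu_a, mu_s') space, in mm^-1;
# real values would come from prior learning on a labeled database.
REFERENCES = {"living_tissue": (0.03, 1.5), "fraud_material": (0.30, 4.0)}

def classify(mu_a, mu_s_prime):
    """Label a measured coefficient pair by its nearest reference
    centroid (a stand-in for the statistical 'scoring' classifier)."""
    return min(REFERENCES,
               key=lambda k: math.dist((mu_a, mu_s_prime), REFERENCES[k]))

print(classify(0.05, 1.4))
```

A production classifier would use a learned decision boundary (and possibly more than two coefficients), but the principle — locating the measured pair relative to reference data — is the same.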
The second series of lighting devices 1110B is symmetrical with the first series 1110A relative to a plane parallel to (XOZ) passing through the middle of the detection surface 125 in the direction of its width L1. Each lighting device 1110A of the first series and its symmetrical counterpart 1110B in the second series are spaced apart by a distance that does not exceed one third of the width L1 of the detection surface 125, preferably only a few micrometers. Here again, strip-structured lighting is realized very easily, it being possible to position the control units of each lighting device at the edge of the detection surface 125. FIG. 2 illustrates an advantageous embodiment of a fingerprint sensor 210, adapted to the implementation of a method according to the invention. The lighting means 211 are formed by organic light-emitting diodes, called OLEDs, each forming a lighting device 2110 according to the invention. The OLEDs are arranged above the matrix optical sensor 212. They have, for example, strip shapes, in particular strips parallel to each other and parallel to the pixel lines of the matrix optical sensor. The OLEDs and the photodetectors 2121 of the matrix optical sensor 212 are formed on a single substrate 201. In the example shown, the OLEDs are formed on a planarization layer 202 covering the photodetectors 2121. Each OLED 2110 is defined by the intersection of a lower electrode 2111, a stack 2112 of at least one layer of organic semiconductor material, and an upper electrode 2113. In the example shown, the lower electrodes are each specific to one OLED, while a single stack of organic semiconductor material and a single upper electrode extend above all the photodetectors 2121. Many variants can be implemented, for example with a stack of organic semiconductor material specific to each OLED. In each embodiment of a fingerprint sensor according to the invention, the distance between the contact surface and the matrix optical sensor is less than 25 μm.
Here, since the matrix optical sensor is disposed under the OLEDs, this implies a small thickness of the OLEDs, advantageously less than 25 μm, or even less than 20 μm. This embodiment is particularly advantageous, especially since it offers a wide variety of dimensions and shapes of lighting devices. For example, trapezoid-shaped or zigzag-shaped OLEDs, which extend along (OY) and each provide strip illumination, may be realized. This embodiment makes it possible to produce lighting devices of very small dimension along the axis (OX), for example so as to extend only between two rows of photodetectors of the matrix optical sensor, without covering the photosensitive zones of the latter. For example, the dimension along (OX) of each OLED is less than 50 μm, or 25 μm, or even less than 10 μm. It also makes it possible to maximize the amount of light reaching the tissues, and to minimize the diffusion of light before it reaches the papillary impression. This embodiment is also particularly compact, because it does not require the presence of OLED driving transistors under the detection surface: these can simply be placed at the edge of the detection surface. Finally, little light can travel directly from the OLEDs to the pixels of the matrix optical sensor; and even if some did, it would suffice not to exploit the pixels located directly under the lit OLEDs. FIGS. 3A to 3C schematically illustrate the selection of pixels according to the invention. Figure 3A schematically illustrates an image acquired by the matrix optical sensor. In FIG. 3B, the pixels associated with the peaks of the imprint are represented in black. In FIG. 3C, the pixels associated with the valleys of the imprint are represented in black. In the following, by way of example, we consider the pixels associated with the valleys of the imprint.
However, it is also possible to implement the method according to the invention both from the pixels associated with the valleys of the imprint and from the pixels associated with the peaks of the imprint, all of this information being able to serve to improve the discriminating power of the classifier to be used in the comparison step. Considering separately the pixels associated with the peaks and those associated with the valleys makes it possible not to be disturbed by the pattern of the lines of the imprint, the contrast appearing because of the difference in coupling between the tissues in physical contact with the contact surface 106 (coupled to the fingerprint sensor) and the tissues without physical contact with the contact surface 106 (not coupled to the fingerprint sensor). If necessary, the missing pixels can be interpolated. It is also possible to perform filtering to improve the robustness with regard to possible classification errors between pixels of the valleys and pixels of the peaks. The filtering is for example a mean, or a median, on a segment parallel to the lighting devices, of length between 1 mm and 3 mm. We will now describe in detail the illumination of the imprint, and the determination of the values of the optical coefficients, here μa and μs'. According to a first embodiment of the method according to the invention, the lighting means provide lighting associated with a spatially periodic illumination function along (OX). In particular, they form, on the contact surface defined above, an illumination figure defined by a periodic illumination function along (OX). Figure 4 illustrates the lighting means, consisting alternately of three lit lighting devices and three extinguished lighting devices. The periodic illumination function is therefore a slot function, of frequency f. The illumination figure is considered at the contact surface where the finger is positioned. 
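The selection of valley (or peak) pixels and the median filtering along the lighting strips can be sketched as follows. This is a minimal illustration, not the patented implementation: the mean-intensity threshold and the window size `win` are invented assumptions (the segment of 1 mm to 3 mm would have to be converted to pixels using the real sensor pitch).

```python
import numpy as np

def split_ridge_valley(img, threshold=None):
    """Split pixels into ridge (peak) and valley masks by intensity.

    Hypothetical convention: ridges in contact with the surface appear
    darker than valleys; a real sensor may use the opposite contrast.
    """
    if threshold is None:
        threshold = img.mean()
    ridges = img < threshold
    valleys = ~ridges
    return ridges, valleys

def median_along_rows(values, mask, win=11):
    """Median-filter the selected pixels of each row (axis parallel to
    the lighting strips); `win` stands in for a 1-3 mm segment."""
    out = np.full(values.shape, np.nan)
    rows, _ = values.shape
    for r in range(rows):
        sel = np.where(mask[r])[0]          # columns of selected pixels
        for k, c in enumerate(sel):
            lo, hi = max(0, k - win // 2), k + win // 2 + 1
            out[r, c] = np.median(values[r, sel[lo:hi]])
    return out
```

Unselected pixels are left as NaN; they could be interpolated afterwards, as the text suggests.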
If necessary, the illumination provided by the lighting means compensates for inhomogeneities that occur because of the distance between the lighting means and this contact surface. This compensation is not always necessary, in particular when the distance between the lighting means and the contact surface is small, for example less than 700 μm. The method may comprise an initial step of partitioning each acquired image, along (OX) and/or along (OY), to form several regions of interest. In each image, the regions of interest are defined by the same coordinates. Each region of interest includes at least one full period of the periodic illumination function, preferably several periods (to have redundancy on the region of interest). Each region of interest has, for example, a surface area of between 5 and 20 mm², on which it is assumed that the material constituting the imprint is homogeneous. In the following, we place ourselves within such a region of interest, after selection of the pixels associated, for example, with the valleys of the imprint. If the light bands forming the illumination figure are oriented along an axis corresponding to (OY) (or close to it), it is possible to implement a preliminary step of smoothing an image, by averaging, in each of the regions of interest, the selected rows of pixels along the axis (OY). This smoothing can optionally be completed by smoothing along the axis (OX), by a usual method (median and/or mean filter). However, one will always want to filter less along (OX) than along (OY). According to the invention, the convolution of a point spread function (PSF) associated with predetermined coefficients μa, μs' is calculated with the illumination function on the contact surface of the impression sensor. This convolution defines a diffuse reflectance, called predicted reflectance, associated with a set of values of the coefficients μa, μs'. 
This reflectance is predicted by a predictive model, which depends on the values of the coefficients μa and μs'. The illumination function corresponds to the illumination figure, or the succession of illumination figures, formed by the lighting devices on the contact surface during the acquisition of an image. The illumination function is therefore uniform along (OY), or along an axis slightly inclined with respect to (OY), which simplifies the convolution calculation. Then, using the pixels selected on the image acquired by the matrix optical sensor, a diffuse reflectance, called experimental reflectance, is measured. This is an optical characteristic defining the response of the material constituting the fingerprint to a lighting defined by said illumination function. Values of the coefficients μa, μs' associated with the fingerprint can be determined by minimizing a difference between the predicted reflectance and the experimental reflectance. For example, the mean quadratic difference between these two quantities is minimized. In other words, given an illumination function E(x) at the contact surface and a matrix optical sensor with rows and columns, the experimental reflectance Rexp obtained from the acquired image is compared with the result of the convolution of said predetermined point spread function with the illumination function used. In particular, we search for the coefficients μa, μs' which minimize a quadratic difference X0² between these two quantities: (1) With: (2) In other words, a mean square deviation between the measured intensities and the theoretical intensities is minimized. X0² is an estimator, corresponding to a quadratic sum of differences between experimental reflectances and predicted reflectances. Here we consider average reflectances each associated with a row of pixels. The convolution is advantageously calculated in the Fourier space. 
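The minimization of X0² can be sketched as an exhaustive grid search over candidate (μa, μs') pairs. The `predict` callable below stands in for the convolution of the model PSF with the illumination function; the linear model in the usage example is purely illustrative, not the patent's predictive model.

```python
import numpy as np

def fit_coefficients(r_exp, predict, mua_grid, mus_grid):
    """Grid-search the (mua, mus') pair minimizing
    X0^2 = sum((R_exp - R_pred)^2).

    `predict(mua, mus)` must return the predicted reflectance (one value
    per pixel row) for that candidate pair.
    """
    best, best_chi2 = None, np.inf
    for mua in mua_grid:
        for mus in mus_grid:
            chi2 = float(np.sum((r_exp - predict(mua, mus)) ** 2))
            if chi2 < best_chi2:
                best, best_chi2 = (mua, mus), chi2
    return best, best_chi2
```

For example, with a toy model `predict = lambda mua, mus: mua * x + mus` and data generated from (0.2, 1.5), the grid search recovers that pair with a residual of zero. A standard continuous minimizer could replace the grid once the absence of local minima has been checked, as the text recommends.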
The calculation in the Fourier space is particularly advantageous when the illumination function has symmetries. In practice, E(x) is the image of the illumination function on the sensor. It can be obtained, for example, by imaging a test pattern on the sensor with the lighting pattern considered. The experimental diffuse reflectance Rexp is deduced from the image after correction with respect to the image IREF of a reference standard whose optical properties μa,ref, μs,ref' are known: (3) This calculation can be implemented independently in several regions of interest as defined above. In the example illustrated in FIG. 4, the illumination function is a slot function of frequency f. The experimental and predicted reflectances are then contrast transfer functions at the frequency f, denoted CTFexp(f) and CTFpred(f) respectively. The experimental contrast transfer function at the frequency f is an optical characteristic defining the response of the material constituting the fingerprint to spatially periodic slot lighting at the spatial frequency f. The experimental contrast transfer function at frequency f can be calculated using the following formula: (4) Max(f) and Min(f) correspond respectively to the maximum intensity and the minimum intensity of the pixels of the region of interest, if necessary after correction of the noise and the offset introduced by the fingerprint sensor. If the region of interest covers several lighting periods, these values are consolidated by searching for the local maxima and minima of each period of illumination. The experimental contrast transfer function is determined at several frequencies, preferably two frequencies, since it is desired to determine the values of two coefficients μa and μs'. Preferably, the zero frequency (uniform illumination) and a non-zero frequency f1 are chosen. 
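Assuming equation (4) is the usual contrast ratio built from the extreme intensities named in the text, Max(f) and Min(f), the experimental CTF of a region of interest can be sketched as:

```python
import numpy as np

def ctf_experimental(region):
    """Contrast transfer function of a region of interest:
    (Max - Min) / (Max + Min) of the pixel intensities.

    Here the global extrema are used; with several lighting periods,
    per-period local extrema could be averaged instead, as the text
    suggests."""
    i_max = float(np.max(region))
    i_min = float(np.min(region))
    return (i_max - i_min) / (i_max + i_min)
```

For instance, a region whose intensities range from 1 to 3 yields a CTF of 0.5.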
The zero frequency is advantageous because the image under uniform illumination is generally acquired for the needs of the fingerprint sensor, and pre-recorded. Otherwise, the zero-frequency contrast transfer function can be approximated by the average value of the image. The set of experimental values of said contrast transfer function is then compared with a predictive model of this contrast transfer function. This model depends on the values of the coefficients μa and μs'. For this, we define the parameter, or estimator, X1², corresponding to the weighted quadratic sum of differences between the experimental contrast transfer function and the contrast transfer function predicted by the model. A weighted quadratic sum is used to consolidate the results over all the frequencies used: (5) with a weighting factor corresponding to the importance and/or degree of reliability accorded to frequency f, and CTFpred designating the theoretical contrast transfer function, predicted by the model at the frequency f for each possible pair of values of μa and μs'. We then determine the pair (μa, μs') minimizing the estimator X1²(μa, μs'), using a standard minimization method. Care is taken to avoid local minima. This method has the advantage of being independent of the lighting power, but may be influenced by the presence of a possible offset (noise due to ambient lighting, in particular). In most cases, this offset can be eliminated by subtracting a black image (an image acquired when all the lighting devices are off) from the images. Alternatively, the comparison can be performed on the ratio between the contrast transfer function at the frequency f and a contrast transfer function at a frequency fN different and far from f (often, fN will be chosen small compared with f). The latter method has the advantage of avoiding both the influence of a possible offset and long-term fluctuations in the lighting power. However, it requires an additional frequency (thus three images). 
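The weighted estimator X1² of equation (5) reduces to a few lines; the weights below are hypothetical values expressing the reliability granted to each frequency, not values taught by the patent.

```python
import numpy as np

def chi2_weighted(ctf_exp, ctf_pred, weights):
    """X1^2: weighted quadratic sum, over the frequencies used, of the
    differences between experimental and model-predicted CTF values."""
    ctf_exp, ctf_pred, weights = map(np.asarray, (ctf_exp, ctf_pred, weights))
    return float(np.sum(weights * (ctf_exp - ctf_pred) ** 2))
```

With two frequencies (zero and f1), CTF values (1.0, 0.5) measured against a prediction (0.8, 0.5) and weights (2, 1), the estimator is 2 x 0.2² = 0.08.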
If necessary, the point spread function associated with the fingerprint sensor itself can be taken into account, for example by dividing the measured contrast transfer functions by the known contrast transfer function of the fingerprint sensor. The predictive model used is based, for example, on an analytical approach, or on a stochastic Monte Carlo approach. We do not detail these models here; the skilled person will find them, for example, in the article of the Journal of Biomedical Optics, 024012, March/April 2009, Vol. 14(2), "Quantitation and mapping of tissue optical properties using modulated imaging". Such a model makes it possible to simulate the response of a sample of given coefficients μa and μs' to known illumination conditions. It takes into account the conditions of acquisition of the images, for example the optical index of the medium in contact with the imprint (which differs according to whether we consider the peaks or the valleys). Then, the values of μa and μs' are compared with reference data, as illustrated in FIG. 5, obtained for example by measuring the values of μa and μs' on known samples. In FIG. 5, the value μa is represented on the abscissa and the value μs' on the ordinate. The small triangles correspond to measurements of μa and μs' on a latex imitation of a fingerprint. In the same way, the small squares correspond to gelatin, the crosses to wood glue, the x to a printed transparency, the X to paper. Only the diamonds correspond to real fingers. These measurements make it possible, for example, to define a boundary, or critical threshold 561, encompassing a maximum of measurements corresponding to real fingers and excluding a maximum of measurements corresponding to false fingers. 
In a variant, a classification function, for example of the ellipse type, is defined, relating to an average difference between the values of the characteristic coefficients measured on the fingerprint studied and those measured on real fingers. We obtain a coefficient relative to a probability that the studied fingerprint corresponds to a real finger. Thanks to the partition into several regions of interest, the values of μa and μs' can be determined separately in each of these regions of interest, and the imprint can thus be analyzed in several locations distributed over its entire surface. As a variant, the several local values of μa and μs' are combined before making a comparison with reference data, the fingerprint then being analyzed in a global manner. According to a variant of this embodiment, the periodic illumination function is a sinusoidal function, of frequency f. Such an illumination function can be realized by acting on a supply current of each lighting device, so as to modulate the luminous intensity emitted by each of them. The result of the convolution of the point spread function with the illumination function is then a modulation transfer function, at the frequency f. The experimental and predicted reflectances are then modulation transfer functions at the frequency f, denoted MTFexp(f) and MTFpred(f) respectively. The experimental modulation transfer function at frequency f can be calculated like the contrast transfer function: (6) Alternatively, MTFexp can also be estimated by successively shifting the sines three times by 120°, and by taking an image each time (denoted I1, I2 and I3), for each frequency f considered. 
In this case, MTFexp is deduced from the relation: (7) MAC(f) and MDC(f) can be obtained using the equations below: (8) (9) Next, the predicted and experimental modulation transfer functions are compared, as detailed above with respect to the contrast transfer functions, using an estimator X2² defined by: (10) FIG. 6 very schematically illustrates illumination figures associated with the same sinus-type illumination function, of frequency f, for a phase shift of 0° (acquisition of the image I1), for a phase shift of 120° (acquisition of the image I2), and for a phase shift of 240° (acquisition of the image I3). Two frequencies are preferably considered, one of which is the zero frequency. To limit the number of images, MDC can be considered as a zero-frequency modulation (it is also the value of an image under uniform lighting). Only three images are then needed to calculate MAC(f) and MDC. The calculation taught in equation (10) may be influenced by the presence of a possible offset (noise due to ambient lighting, in particular). When the offset varies over time, it may be advantageous to perform the comparison of equation (10) on the MAC(f) values rather than the MTF(f) values, in order to overcome the variations of the offset (but at the expense of sensitivity to lighting power fluctuations). As for a slot function, it is also possible to compare the ratio between the modulation transfer function at the frequency f and the modulation transfer function at a frequency fN different from f. We now describe a clever method for achieving illumination by a sinus-type illumination function, by exploiting an integration frequency of the pixels of the matrix optical sensor and a scanning frequency of the switching on of each lighting device, these two frequencies being synchronized. The matrix optical sensor is clocked by a single clock. At each clock tick, one line is read and another line is reset. Here, a line located at a distance N from the line read is reset. 
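Equations (7) to (9) did not survive extraction. In the modulated-imaging literature the description cites, the standard three-phase demodulation for phase shifts of 0°, 120° and 240° is MAC(f) = (√2/3)·√((I1−I2)² + (I2−I3)² + (I3−I1)²) and MDC = (I1 + I2 + I3)/3; whether these are exactly the patent's equations is an assumption. A sketch under that assumption:

```python
import numpy as np

def demodulate_three_phase(i1, i2, i3):
    """Three-phase demodulation of sinusoidally modulated images
    (phase shifts 0, 120 and 240 degrees).

    Returns the AC modulation amplitude MAC and the DC component MDC;
    MTFexp(f) is then proportional to MAC(f) / MDC up to calibration."""
    i1, i2, i3 = (np.asarray(a, float) for a in (i1, i2, i3))
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc
```

With inputs of the form Ik = DC + AC·cos(φ + 2πk/3), the functions recover AC and DC exactly, whatever the unknown phase φ.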
Then we have: T = 0: reset of line N, reading of line 0. T = 1: reset of line N + 1, reading of line 1. And so on until T = N: reset of line N + N, reading of line N. We see that the integration time of a line i starts at T = i − N and ends at T = i. It is further assumed that at each moment one or more adjacent lighting devices are on, the others being off. The lit lighting devices can therefore be scanned synchronously with the scanning of the integration of the pixels of the matrix optical sensor. Here, three adjacent lighting devices are at all times in the lit state. In other words, the different illumination figures formed at each moment differ simply by their phase. To simplify the explanation, it is considered that the lighting devices are distributed along (OX) with the same pitch as the pixels of the matrix optical sensor. Suppose the lighting device i is superimposed on the pixel line i. M denotes the shift, in lines of pixels, between the read pixel line and the nearest lit lighting device. Each lighting device provides strip lighting. If, for example, N = 5 and the lead of the lighting is M = 10, we will have: T = 0: reading of line 0, reset of line 5, lighting of strips 10, 11 and 12. T = 1: reading of line 1, reset of line 6, lighting of strips 11, 12 and 13. T = 2: reading of line 2, reset of line 7, lighting of strips 12, 13 and 14. T = 3: reading of line 3, reset of line 8, lighting of strips 13, 14 and 15. T = 4: reading of line 4, reset of line 9, lighting of strips 14, 15 and 16. T = 5: reading of line 5, reset of line 10, lighting of strips 15, 16 and 17, etc. This is illustrated in FIG. 7A, where the y-axis is the time, and the x-axis is the number of the pixel line or of the lighting strip (corresponding to a lighting device). At each time line, the lines of pixels integrating the light are represented in gray, and the lit lighting devices in black. 
If we consider the lighting devices that have been switched on during the integration of a particular line, we obtain, for example for the pixel line 5: lighting by the lighting devices 10 to 17, with a contribution of 1 for the lighting device 10, 2 for device 11, 3 for devices 12 to 15, 2 for device 16 and 1 for device 17. The contribution corresponds to a duration, in number of clock ticks, during which the pixels of line 5 integrate the light of said lighting device. FIG. 7B represents the lighting profile seen by line 5: on the abscissa, the light strips, or lit lighting devices, and on the ordinate, the contribution of each strip. The next pixel line will have had a similar lighting profile. There is thus a lighting pattern having a constant offset with respect to the imaging lines. It is thus possible to ensure that the matrix optical sensor receives lighting associated with a periodic illumination function. We can consider that the function is periodic, after integration over the total integration time of all the rows of pixels. By a slight abuse of language, we can speak of a temporally periodic function. The lighting pattern seen by each pixel line may be a homogeneous strip (only one lighting device is lit for each line), a trapezoidal shape as illustrated (when the number of lighting devices turned on is greater than 1 and different from N, N being the integration time), or a triangle shape (when the number of lighting devices turned on is equal to N). If the number of lighting devices is different from the number of lines of pixels, for example half, the same principle applies by dividing the scanning frequency of the lit lighting devices by two (thus keeping the same pattern lit during the reading of two lines of pixels). The illumination pattern seen by each pixel line may have a hump shape, as illustrated here, approximating a sinusoidal shape. It is thus possible to make an approximation of a sinusoidal periodic illumination function of frequency f. 
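The bookkeeping of the worked example (N = 5, lead M = 10, three strips lit at each tick, line 5 integrating from T = 0 to T = 5) can be reproduced with a short simulation:

```python
def line_lighting_profile(line, n_int=5, lead=10, n_lit=3):
    """Contribution, in clock ticks, of each lighting strip to the light
    integrated by pixel line `line`.

    At clock tick T, strips T + lead .. T + lead + n_lit - 1 are lit, and
    line `line` integrates from tick line - n_int to tick line inclusive,
    as in the example of the text."""
    contrib = {}
    for t in range(line - n_int, line + 1):
        for strip in range(t + lead, t + lead + n_lit):
            contrib[strip] = contrib.get(strip, 0) + 1
    return contrib
```

For line 5 this yields exactly the trapezoidal profile stated above: contributions 1, 2, 3, 3, 3, 3, 2, 1 for strips 10 to 17.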
We can then determine μa and μs' for each line of pixels, or even for each pixel. Each pixel line sees the same known phase of the illumination function. In FIG. 7B, each line of pixels is illuminated by a sinus illumination function having a first known phase shift, here a zero phase shift. A first image is acquired. Then, each line of pixels can be illuminated by a sinus illumination function phase-shifted by π relative to the first phase shift (corresponding to a hollow shape). A second image is acquired. MDC(f) can then be approximated as the average of the first and second images, and MAC(f) as the difference between the first and second images. MTFexp can therefore be estimated with only two images. As a variant, the imprint is illuminated by a step-type illumination function, as illustrated in FIG. 8. This illumination function is formed by a series of lit neighboring lighting devices, followed by a series of extinguished neighboring lighting devices. A region of the imprint located near the edge of the step is thus characterized. Different regions of the imprint can be characterized by different positions of this edge. Several steps can be formed simultaneously, provided that they are sufficiently spaced, for example at least 3 mm from each other at the contact surface, preferably between 3 and 10 mm, or even between 3 and 5 mm. The result of the convolution of the point spread function with the illumination function is then an edge spread function (ESF). For example, ESFexp(x) (experimental edge spread function) is determined from a table grouping, for each line of pixels, the average intensity I(x) of the pixels as a function of the position x along (OX). We then have: (11) with μ the mean of I(x) and σ the standard deviation of I(x). The line spread function is the x-derivative of the edge spread function. 
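Equation (11) is stated only through the mean μ and standard deviation σ of I(x); assuming it is the corresponding standardization, the experimental ESF can be sketched as:

```python
import numpy as np

def esf_experimental(mean_intensity):
    """Experimental edge spread function: per-position mean intensity
    I(x), centred by its mean and scaled by its standard deviation
    (assumed reading of equation (11))."""
    i = np.asarray(mean_intensity, float)
    return (i - i.mean()) / i.std()
```

By construction, the result has zero mean and unit standard deviation, which makes ESF profiles acquired at different lighting powers directly comparable.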
Note that it is easier to compare line spread functions rather than edge spread functions. We can therefore determine μa and μs' by minimizing the following function: (12) the sum over i corresponding to the sum over the pixel lines of the matrix optical sensor. LSFexp can be calculated numerically, using finite differences: (13) The comparison mentioned in equation (12) can also be done between MTFs (instead of LSFs), by taking the Fourier transform of the LSFs. According to another variant, the imprint is illuminated by a thin-line illumination function, as illustrated in FIG. 9, so as to have a direct approximation of the LSF. This illumination function is preferably formed by a single lit lighting device, surrounded by extinguished lighting devices. A region of the imprint located near the lit lighting device is thus characterized. Different regions of the imprint can be characterized by different positions of the lit device. Several lighting devices can be switched on simultaneously, two light bands formed on the contact surface by simultaneously lit and non-adjacent lighting devices being at least 3 mm apart, preferably between 3 mm and 10 mm, or even between 3 mm and 5 mm. The result of the convolution of the point spread function with the illumination function is then a line spread function (LSF). Advantageously, these LSF and ESF measurements can be made with better precision when a small angle (less than 8°) is introduced between the axis (OY) and the direction of the patterns of the illumination figure. According to another variant, instead of considering differences between an experimental reflectance and a predicted reflectance for different lines of pixels of the matrix optical sensor and for one phase of an illumination function, differences are considered between an experimental reflectance and a predicted reflectance for the same pixel line of the matrix optical sensor and for different phases of the illumination function. 
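The finite-difference derivative of equation (13) admits several discretizations; the patent's exact scheme is not recoverable from the text, so the sketch below uses central differences as one possible choice.

```python
import numpy as np

def lsf_from_esf(esf, dx=1.0):
    """Line spread function as the finite-difference x-derivative of the
    edge spread function (np.gradient: central differences inside the
    profile, one-sided differences at its ends)."""
    return np.gradient(np.asarray(esf, float), dx)
```

For example, differentiating the profile [0, 1, 4, 9, 16] with unit spacing yields [1, 2, 4, 6, 7].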
For example, a line of pixels is considered and its response is measured for different positions of the light line associated with a thin-line illumination function. This response consists of a profile that decreases as the distance to the thin line increases. The influence of an offset (ambient light) can be subtracted from these profiles; it corresponds to a light contribution which does not decrease when the distance to the thin line increases. The absorption and reduced scattering coefficients can be obtained from such a profile, by comparing this profile with a series of profiles each associated with a pair of known values of these coefficients, and selecting the most similar profile. As a variant, calculations similar to those described above, in particular in equations (1) and (2), are used, considering differences between an experimental reflectance and a predicted reflectance. Then, as detailed above, the pair of values of the absorption and reduced scattering coefficients is compared with reference data. Figure 10 illustrates the scanning of the lighting devices. In Figure 10, only one lighting device is lit at a time. As a variant, a plurality of non-adjacent lighting devices are lit simultaneously, forming on the contact surface two thin luminous lines separated along (OX) by at least 3 mm, preferably between 3 and 10 mm, or even between 3 and 5 mm. If necessary, a profile is calculated for each pixel of a line of the matrix optical sensor. False-imprint detection is thus performed over the entire surface of the matrix optical sensor. An image can be acquired directly in which each line of pixels is associated with the same distance D to the luminous thin line. To do this, it suffices to synchronize the integration frequency of the pixels of the matrix optical sensor with the scanning frequency of the illumination of the lighting devices, so that each integrated pixel line is illuminated by a thin luminous line located at this known distance D. 
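Selecting the most similar pre-computed profile, as described for the thin-line variant, can be sketched as a least-squares lookup; the library entries in the usage example are invented for illustration, not measured data.

```python
import numpy as np

def closest_profile(profile, library):
    """Return the (mua, mus') pair whose pre-computed decay profile is
    closest, in the least-squares sense, to the measured profile.

    `library` maps (mua, mus') pairs to reference profiles; any ambient
    offset should be subtracted from `profile` beforehand."""
    profile = np.asarray(profile, float)
    best, best_d = None, np.inf
    for coeffs, ref in library.items():
        d = float(np.sum((profile - np.asarray(ref, float)) ** 2))
        if d < best_d:
            best, best_d = coeffs, d
    return best
```

A measured profile close to one library entry is mapped to that entry's coefficient pair.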
A similar method can be implemented for a step-type illumination function. Other embodiments may be implemented, for example using illumination at different wavelengths, so as to determine characteristic coefficients of the imprint for different wavelengths. Different illumination functions can be used, by lighting each time several series of one lighting device or several adjacent lighting devices, sufficiently distant from each other to independently study several regions of the imprint. For example, two light bands formed on the contact surface by lighting devices that are lit simultaneously and not adjacent are spaced along (OX) by at least 3 mm, preferably between 3 and 10 mm, or even between 3 and 5 mm. The values of the characteristic coefficients can be determined both from the pixels associated with the peaks of the imprint and from the pixels associated with the valleys, to verify that the same conclusion is reached for the imprint. It is possible to determine the values of the characteristic coefficients for each of the pixels of the matrix optical sensor associated with the peaks of the imprint (respectively with the valleys of the imprint), or for different regions of interest on which these values are assumed to be homogeneous. A global decision can then be made, for example with rules on the surface of the largest component detected as fraud. Alternatively, some parts of the images may be rejected. Another approach is to return a map indicating what is definitely true and what is dubious in the image. The set of characteristic coefficients measured at different wavelengths and/or according to several light patterns and/or in different regions of interest of the image (peak pixels and valley pixels, for example) can be used as input parameters (predictors) of one of the machine learning or deep learning algorithms known to those skilled in the art. 
For example, a supervised SVM (Support Vector Machine) type algorithm with a Gaussian-type kernel will be used. In a first step, learning is performed on a database of coefficients measured on real and false fingers. In a second step, when a new combination of coefficients corresponding to a finger whose authenticity is not known is submitted to it, the algorithm returns a score function corresponding to its probability of belonging to one or the other of the two classes (real and false fingers). Since learning is done on the intrinsic physical characteristics of the materials, and not on the response of a given instrument, it then becomes possible to extrapolate to different instruments or sensors of different sensitivity without having to repeat the learning.
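As a toy stand-in for this classifier, a Gaussian-kernel density score over labelled (μa, μs') samples illustrates the two-class scoring idea; a real implementation would train a Gaussian-kernel SVM on a large measured database, and the samples and bandwidth below are invented for illustration.

```python
import numpy as np

def gaussian_kernel_score(x, real_samples, fake_samples, sigma=0.5):
    """Score in [0, 1] for a new (mua, mus') pair `x`: ratio of the
    Gaussian-kernel densities of the real-finger and false-finger
    training samples.  Simplified stand-in for the SVM score function
    described in the text, not the patented classifier."""
    x = np.asarray(x, float)

    def density(samples):
        d2 = np.sum((np.asarray(samples, float) - x) ** 2, axis=1)
        return float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))

    p_real = density(real_samples)
    p_fake = density(fake_samples)
    return p_real / (p_real + p_fake)
```

A candidate pair close to the real-finger cluster scores near 1, and one close to the false-finger cluster scores near 0, mirroring the probability-of-class-membership output the text describes.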
Claims (15) [1" id="c-fr-0001] A method for determining whether or not a papillary impression is made of human living tissue, the impression being in direct physical contact with a contact surface (106) of a papillary impression sensor (110; 210) comprising, superimposed under the contact surface, a matrix optical sensor (112; 212), and illumination means (111; 211) formed by a plurality of lighting devices (1110; 2110) parallel to each other, characterized in that it comprises the following steps: illumination of the papillary impression by the illumination means, the lighting devices (1110; 2110; 1110A, 1110B) forming together, on the contact surface (106), at least one illumination figure uniform along an axis extending from one side to the other of a detection surface (125) of the matrix optical sensor, and acquisition of an image (I1, I2, I3) by the matrix optical sensor, these steps being implemented at least once; in each acquired image, selection (181) of the pixels corresponding to the valleys of the imprint, or selection of the pixels corresponding to the peaks of the imprint; and, from the selected pixels, extraction of an optical characteristic defining the response, to the at least one illumination, of the material constituting the papillary impression, and use of this optical characteristic to determine the values of at least two optical coefficients characteristic of the impression (μa, μs'). [2" id="c-fr-0002] 2. Method according to claim 1, characterized in that the at least two optical coefficients characteristic of the imprint comprise an absorption coefficient (μa) and a reduced scattering coefficient (μs'). [3" id="c-fr-0003] 3. 
Method according to claim 1 or 2, characterized in that it further comprises a step of comparing (183) said values with reference data (161; 661), to distinguish values associated with a papillary imprint made of living human tissue from values associated with a papillary impression that is not made of human living tissue. [4" id="c-fr-0004] 4. Method according to any one of claims 1 to 3, characterized in that each illumination figure extends above the detection surface (125) of the matrix optical sensor, and consists of one or more bands (S) parallel to the pixel lines of the matrix optical sensor (112; 212). [5" id="c-fr-0005] 5. Method according to any one of claims 1 to 4, characterized in that the illumination of the papillary impression is implemented by means of illumination means (211) arranged above the matrix optical sensor (212), and such that each lighting device (2110) consists of an organic light-emitting diode. [6" id="c-fr-0006] 6. Method according to any one of claims 1 to 5, characterized in that the values of the optical coefficients characteristic of the papillary impression (μa, μs') are determined using a predictive model of the response of a fingerprint to a known illumination function, this model being a function of said characteristic optical coefficients, by minimizing a difference between this model and an experimental measurement of the response of the papillary impression to this same illumination, obtained with the aid of the selected pixels. [7" id="c-fr-0007] 7. Method according to claim 5, characterized in that the predictive model is obtained by a convolution calculation of an illumination function associated with the at least one illumination figure with the impulse response of a medium whose values of said characteristic optical coefficients (μa, μs') are known. [8" id="c-fr-0008] 8. 
Method according to any one of claims 1 to 7, characterized in that at each illumination step of the papillary impression, the illumination means (111; 211) together form at least one illumination figure defined by a periodic illumination function along the axis of the length (L2) of the detection surface of the matrix optical sensor. [9" id="c-fr-0009] 9. Method according to claim 8, characterized in that the illumination means (111; 211) together form an illumination figure defined by a spatially periodic lighting function of the slot type. [10" id="c-fr-0010] 10. Method according to any one of claims 1 to 7, characterized in that at each illumination step, the illumination means (111; 211) together form an illumination figure defined by an illumination function of the step type. [11" id="c-fr-0011] 11. Method according to any one of claims 1 to 7, characterized in that at each illumination step, the illumination means (111; 211) together form an illumination figure defined by an illumination function of the thin-line type. [12" id="c-fr-0012] 12. Method according to any one of claims 1 to 11, characterized in that at each illumination step, the lighting devices (1110; 2110; 1110A, 1110B) are turned on and off so as to form successively different illumination figures, a scanning frequency from one illumination figure to the next being synchronized with a scanning frequency of the integration of the pixel lines of the matrix optical sensor (112; 212; 312). [13" id="c-fr-0013] 13. Method according to claim 12, characterized in that said synchronization is implemented to achieve an illumination of the papillary impression by a sinus-type illumination function, and in that the values of said characteristic coefficients of the impression (μa, μs') are determined using two images acquired by the matrix optical sensor and associated with two distinct phase values of said illumination function. 
14. Method according to claim 12, characterized in that the scanning frequency from one illumination figure to the next is synchronized with the scanning frequency of integration of the pixel lines of the matrix optical sensor (112; 212; 312), so as to acquire, with the aid of the matrix optical sensor, images in which each line of pixels is associated with the same distance to a particular point of the illumination figures.

15. System (100) for implementing a method according to any one of claims 1 to 14, characterized in that it comprises:
- a papillary impression sensor (110; 210) comprising, superimposed, a contact surface on which to apply the impression, a matrix optical sensor (112; 212), and illumination means (111; 211) formed by a plurality of mutually parallel lighting devices (1110; 2110; 1110A, 1110B);
- driving means (130), configured to turn the lighting devices on and off according to the at least one step of illuminating the papillary impression;
- pixel selection means (140), configured to receive the at least one image (I1, I2, I3) acquired by the matrix optical sensor and to extract the pixels corresponding to the valleys of the impression, or the pixels corresponding to the crests of the impression; and
- means (150) for determining the values of the characteristic optical coefficients from the selected pixels.
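Claims 6 and 7 describe the core computation: predict the impression's response to a known illumination function by convolving it with the impulse response of a medium of known optical coefficients, then adjust (μa, μs') until the prediction matches the measurement extracted from the selected pixels. The sketch below illustrates this idea in Python; the diffusion-type impulse response, the grid-search minimizer, and all numeric values are illustrative stand-ins, not the model of the patent.

```python
import numpy as np

def impulse_response(x, mu_a, mu_s, dx):
    """Toy 1-D diffuse impulse response of a medium with absorption
    coefficient mu_a and reduced scattering coefficient mu_s (assumed
    analytic form, for illustration only)."""
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s))   # effective attenuation
    albedo = mu_s / (mu_a + mu_s)                  # transport albedo
    return albedo * 0.5 * mu_eff * np.exp(-mu_eff * np.abs(x)) * dx

def predicted_profile(illum, x, mu_a, mu_s, dx):
    """Claim 7: predictive model = illumination function convolved with
    the impulse response of a medium of known coefficients."""
    return np.convolve(illum, impulse_response(x, mu_a, mu_s, dx), mode="same")

def fit_coefficients(illum, x, measured, dx):
    """Claim 6: pick (mu_a, mu_s) minimizing the gap between the model
    and the measured response; a coarse grid search stands in for a
    real optimizer."""
    best, best_cost = None, np.inf
    for mu_a in np.linspace(0.1, 1.0, 19):
        for mu_s in np.linspace(2.0, 14.0, 25):
            cost = np.sum(
                (predicted_profile(illum, x, mu_a, mu_s, dx) - measured) ** 2)
            if cost < best_cost:
                best, best_cost = (mu_a, mu_s), cost
    return best

# Slot-type (square-wave) periodic illumination figure, as in claim 9
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
illum = (np.sin(2.0 * np.pi * x / 2.5) > 0).astype(float)
```

On a synthetic measurement generated by the same model, the search recovers the coefficients that produced it, which is the sense in which the minimization of claim 6 "characterizes" the material.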
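The two-image scheme of claim 13 can be pictured as follows: under a sinusoidal illumination, acquiring the scene at two phase values lets the constant (DC) and modulated (AC) parts of the fingerprint response be separated pixel by pixel. The snippet assumes the two phases are separated by exactly π, which the patent does not impose; it is only one convenient choice that makes the separation a sum and a difference.

```python
import numpy as np

def demodulate_two_phase(img_0, img_pi):
    """Separate the response to a sinusoidal illumination into its
    constant (DC) and signed modulated (AC) components, assuming the
    two images were taken with illumination phases 0 and pi."""
    dc = 0.5 * (img_0 + img_pi)   # the modulated term cancels
    ac = 0.5 * (img_0 - img_pi)   # the constant term cancels
    return dc, ac
```

The AC map still carries the sinusoidal carrier; its amplitude at the known spatial frequency, compared with the DC level, is the kind of quantity a predictive model such as that of claim 6 would be fitted against.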
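The pixel selection means of claim 15 keep only the valley pixels or only the crest pixels before the optical coefficients are estimated. A minimal sketch, assuming purely for illustration that one class images brighter than the other and that a global median threshold separates them; a real sensor would segment the acquired fingerprint image itself.

```python
import numpy as np

def select_pixels(image, keep="valleys"):
    """Return the pixel values on one side of the median intensity:
    the brighter half as 'valleys', the darker half as 'crests'
    (polarity assumed for illustration)."""
    threshold = np.median(image)
    if keep == "valleys":
        mask = image > threshold
    else:
        mask = image <= threshold
    return image[mask]
```

The selected values are then the "experimental measurement" fed to the coefficient-determination means (150).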
Family patents (publication number, publication date):
- US10810406B2, 2020-10-20
- EP3394793A1, 2018-10-31
- US20190012513A1, 2019-01-10
- FR3046277B1, 2018-02-16
- KR20180095693A, 2018-08-27
- EP3394793B1, 2021-05-05
- WO2017108883A1, 2017-06-29
Cited documents (publication number, filing date, publication date, applicant, title):
- WO2008111994A1, 2006-07-19, 2008-09-18, Lumidigm, Inc., "Spectral biometrics sensor"
- WO2008050070A2, 2006-10-25, 2008-05-02, Sagem Securite, "Method for validating a biometrical acquisition, mainly a body imprint"
- US10991751B2, 2017-02-21, 2021-04-27, Commissariat a l'Energie Atomique et aux Energies Alternatives, "Print sensor with gallium nitride LED"
- US7054674B2, 1996-11-19, 2006-05-30, Astron Clinica Limited, "Method of and apparatus for investigating tissue histology"
- EP2207595A4, 2007-10-19, 2012-10-24, Lockheed Corp, "System and method for conditioning animal tissue using laser light"
- WO2010102099A1, 2009-03-04, 2010-09-10, Gradiant Research, Llc, "Method and apparatus for cancer therapy"
- EP2562682B1, 2011-08-24, 2014-10-08, DERMALOG Identification Systems GmbH, "Method and device for capture of a fingerprint with authenticity recognition"
- FR3016115B1, 2014-01-06, 2016-02-05, Commissariat Energie Atomique, "Container interacting with a reduced energy consumption user before unpacking"
- FR3017952B1, 2014-02-25, 2016-05-06, Commissariat Energie Atomique, "Method for determining a lipid concentration in a microorganism"
- FR3026877B1, 2014-10-03, 2018-01-05, Commissariat a l'Energie Atomique et aux Energies Alternatives, "Sensor of digital or palmar impressions"
- FR3032042B1, 2015-01-23, 2018-03-02, Commissariat a l'Energie Atomique et aux Energies Alternatives, "Device for acquiring a characteristic image of a body"
- FR3035727B1, 2015-04-30, 2017-05-26, Commissariat Energie Atomique, "Sensor of digital or palmar impressions"
- FR3054696B1, 2016-07-29, 2019-05-17, Commissariat a l'Energie Atomique et aux Energies Alternatives, "Thermal pattern sensor with mutualized heating elements"
- FR3054697B1, 2016-07-29, 2019-08-30, Commissariat a l'Energie Atomique et aux Energies Alternatives, "Method of capturing thermal pattern with optimized heating of pixels"
- FR3054698B1, 2016-07-29, 2018-09-28, Commissariat a l'Energie Atomique et aux Energies Alternatives, "Active thermal pattern sensor comprising a passive matrix of pixels"
- CN108416247A, 2017-02-09, 2018-08-17, 上海箩箕技术有限公司, "Optical fingerprint sensor module"
- EP3948658A1, 2019-05-07, 2022-02-09, Assa Abloy Ab, "Presentation attack detection"
- CN113343954B, 2021-08-05, 2021-12-28, 深圳阜时科技有限公司, "Offset calibration value detection method of curved surface fingerprint sensor and terminal equipment"
Legal status:
- 2016-12-29: PLFP, fee payment (year of fee payment: 2)
- 2017-06-30: PLSC, publication of the preliminary search report (effective date: 2017-06-30)
- 2018-01-02: PLFP, fee payment (year of fee payment: 3)
- 2019-12-31: PLFP, fee payment (year of fee payment: 5)
- 2020-12-28: PLFP, fee payment (year of fee payment: 6)
- 2021-12-31: PLFP, fee payment (year of fee payment: 7)
Priority (all applications claim the priority of FR1563180A, filed 2015-12-23):
- FR1563180A, filed 2015-12-23: FR3046277B1, "Method for recognizing false footprint by lighting structure"
- US16/064,620, filed 2016-12-21: US10810406B2, "Method for recognising a false papillary print by structured lighting"
- KR1020187021146A, filed 2016-12-21: KR20180095693A, "Method for recognizing the print of the product by structured illumination"
- EP16812968.2A, filed 2016-12-21: EP3394793A1, "Method for recognising a false papillary imprint by structure lighting"
- PCT/EP2016/082066, filed 2016-12-21: WO2017108883A1, "Method for recognising a false papillary imprint by structure lighting"
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an