Patent abstract:
The invention relates to a method for comparing two data obtained from a sensor or an interface, implemented by processing means of a processing unit, the method comprising the calculation of a similarity function between two characteristic vectors of the data to be compared, characterized in that each feature vector of a datum is modeled as a sum of Gaussian variables, said variables comprising: - an average of the vector's membership class, - an intrinsic deviation, and - an observation noise of the vector, each feature vector being associated with a quality vector comprising information on the observation noise of the feature vector, and in that the similarity function is calculated from the characteristic vectors and the associated quality vectors.
Publication number: FR3028064A1
Application number: FR1460690
Filing date: 2014-11-05
Publication date: 2016-05-06
Inventors: Julien Bohne; Stephane Gentric
Applicant: Morpho SA
IPC main classification:
Patent description:

[0001] FIELD OF THE INVENTION The invention relates to a method of comparing data obtained from a sensor or an interface, in order to establish a similarity score between the data. In particular, the invention relates to a method for comparing data by automatic learning. STATE OF THE ART Many tasks implemented, for example, in the field of computer vision (or digital vision) require the comparison of complex data such as images, to establish a similarity score between these data. For example, in the field of biometric authentication, face images of individuals are compared with each other to determine whether the images were obtained from the same individual.
[0002] To deal with this kind of problem, it is known to implement, on the data to be compared, a feature extraction that transforms the data to be compared into characteristic vectors, and then to calculate a similarity function between the characteristic vectors. The calculated similarity function generally includes previously unknown parameters. These parameters are determined and progressively optimized by automatic learning (known as machine learning). To do this, a processing unit implements data comparison operations on a set of known data from a database, compares the results provided by the similarity function with the real result, and accordingly optimizes the parameters of the similarity function to make its results more reliable. For example, the publication of D. Chen, X. Cao, L. Wang, F. Wen and J. Sun, "Bayesian Face Revisited: A Joint Formulation", in ECCV, 2012, proposes a method for learning a similarity function between data, in which the data are modeled by a sum of two independent Gaussian variables, which are the average of the class to which a data item belongs, and the variation of the data item with respect to this mean. For example, in the case where the data are images of faces, the class corresponds to the identity of the individual, and the variation with respect to the average of the class therefore corresponds to all the changes that can occur between an image of the average face of the individual and an image taken in different circumstances: - lighting, shadows on the image, - pose of the face on the image, - expression of the face, - localized blur, etc. However, the improvement of the performance of the comparison resulting from machine learning is limited by the presence of data of variable quality in the database. This results in a degradation of the performance of the similarity function developed, and therefore of the quality of the comparison made. The proposed comparison method is therefore not entirely reliable.
[0003] PRESENTATION OF THE INVENTION The purpose of the invention is to propose a method for comparing data having improved performance compared with the prior art.
[0004] In this regard, the subject of the invention is a method of comparing two computer data obtained from a sensor or an interface, implemented by processing means of a processing unit, the method comprising the calculation of a similarity function between two characteristic vectors of the data to be compared, characterized in that each feature vector of a datum is modeled as a sum of Gaussian variables, said variables comprising: - an average of a membership class of the vector, - an intrinsic deviation, and - an observation noise of the vector, each feature vector being associated with a quality vector including information on the observation noise of the feature vector, and in that the similarity function is calculated from the feature vectors and the associated quality vectors. Advantageously, but optionally, the method according to the invention may further comprise at least one of the following characteristics: - the similarity function is the logarithm of the ratio between the probability density P(x, y | H_sim, S_εx, S_εy) of the feature vectors, with the vectors belonging to the same class, and the probability density P(x, y | H_dis, S_εx, S_εy) of the feature vectors, with the vectors belonging to two distinct classes; - the similarity function is furthermore calculated as a function of the covariance matrices of the components of the feature vectors, and the covariance matrix of the observation noise of each feature vector is obtained as a function of the associated quality vector; - the method further comprises implementing a learning algorithm for determining the covariance matrices of the means of the membership classes of the vectors and of the deviations of the vectors relative to the class means; - the learning algorithm is an expectation-maximization algorithm.
- the similarity function is given by the formula:

LR(x, y | S_εx, S_εy) = (1/2) [ x^T ((S_μ + S_ω + S_εx)^{-1} - A) x + y^T ((S_μ + S_ω + S_εy)^{-1} - C) y + 2 x^T B y + log|S_μ + S_ω + S_εx| + log|A| ] + constant

where

A = (S_μ + S_ω + S_εx - S_μ (S_μ + S_ω + S_εy)^{-1} S_μ)^{-1}
B = A S_μ (S_μ + S_ω + S_εy)^{-1}
C = (S_μ + S_ω + S_εy)^{-1} + (S_μ + S_ω + S_εy)^{-1} S_μ A S_μ (S_μ + S_ω + S_εy)^{-1}

and where S_μ is the covariance matrix of the class means, S_ω is the covariance matrix of the deviations from an average, and S_εx and S_εy are the covariance matrices of the observation noises of the vectors x and y respectively; - the computer data from sensors or interfaces are data representative of physical objects or physical quantities; - the computer data from sensors or interfaces are images, and the feature vectors are obtained by applying at least one filter to the images; - the components of the quality vector are generated according to the type of data and the nature of the characteristics composing the feature vector; - the method further comprises comparing the result of the calculation of the similarity function with a threshold to determine whether the data belong to a common class. The invention also relates to a computer program product, comprising code instructions for implementing the method of the preceding description, when it is executed by processing means of a processing unit. Another object of the invention is a system comprising: - a database comprising a plurality of so-called labeled data, - a data acquisition unit, and - a processing unit, comprising processing means adapted to construct, from two data, two feature vectors and two associated quality vectors, said processing unit being further adapted to compare the data by carrying out the comparison method according to the foregoing description. The proposed method makes it possible to take the qualities of the data into account in the calculation of the similarity function between the data. This makes it possible to establish a variable weighting between good-quality data and more uncertain data.
For example, in the case of the application of the method according to the invention to a comparison of images, the shadowed or blurred areas of an image are not given as much weight by the similarity function as the clearly visible and sharp areas.
[0005] This allows for increased performance in comparing data. In addition, the automatic learning makes it possible to optimize the parameters of the similarity function to improve the performance of the comparison method.
[0006] DESCRIPTION OF THE FIGURES Other characteristics, objects and advantages of the invention will emerge from the description which follows, which is purely illustrative and nonlimiting, and which should be read with reference to the appended drawings in which: FIG. 1 represents an example system adapted for carrying out a comparison method. FIG. 2 represents the main steps of a data comparison method according to one embodiment of the invention.
[0007] DETAILED DESCRIPTION OF AT LEAST ONE EMBODIMENT OF THE INVENTION Referring to FIG. 1, there is shown a system 1 comprising a processing unit 10, which comprises processing means 11 for carrying out the method of comparing computer data described below. The processing unit 10 may for example be an integrated circuit, and the processing means a processor. The system 1 further advantageously comprises a database 20, possibly remote, storing a plurality of data from which the processing unit 10 can implement the automatic learning described below. The system 1 finally comprises a data acquisition unit 30, or, in the case where the data acquisition unit 30 is independent of the system, an interface (not shown) adapted to communicate with such a unit. In this way the system 1 can receive and process data b, in particular to compare them by the method described below. Depending on the nature of the data to be compared in the method described below, the data acquisition unit can be of any type, for example an optical sensor (camera, video camera, scanner), an acoustic sensor, a fingerprint sensor, a motion sensor, etc. It can also be a human-machine interface (keyboard, tablet with touch interface) for recording data entered by an operator, such as for example a text, a figure, etc. The computer data b are obtained by the acquisition unit 30, and are therefore derived from a sensor or an interface, for example a human-machine interface. They may be data representative of a physical object, for example an image, a diagram, a recording, a description, or representative of a physical quantity (electrical, mechanical, thermal, acoustic, etc.), such as for example data recorded by a sensor.
[0008] The processing means 11 of the processing unit are advantageously configured to implement the data comparison method described below, by executing an appropriate program. To implement this method, the processing means 11 also advantageously comprise a feature extractor module 12 adapted to perform, on an input computer data item b communicated by a data acquisition unit 30, an extraction of characteristics generating a feature vector x associated with the data item, as well as a quality vector qx of the data item associated with the feature vector. The quality vector qx is a vector of the same size as the feature vector, each element of which indicates a quality of the information of the corresponding element of the feature vector x. Its generation depends on the nature of the data item b. For example, the extraction of characteristics can be implemented by applying to the data item b one or more filters designed for this purpose, where appropriate followed by a processing of the result of the filter (for example the calculation of a histogram). The generation of the quality vector depends on the type of data b and on the nature of the characteristics of the characteristic vector x, that is to say the elements composing the vector x. Each element of the quality vector takes into account intrinsic information of the data associated with the particular characteristics of the feature vector. For example, in the field of signal processing or image processing, when the data is an image or a signal acquired by a sensor, it is common to use as characteristic vector x a frequency representation (for example a Fourier transform) or a space-frequency representation (for example a wavelet transform) of the data. Each component of the feature vector then only depends on certain frequency bands.
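For illustration, such a frequency-band feature vector can be sketched as follows. The uniform band split, the `band_features` name and the test signal are assumptions made for the example, not elements of the patent; a real extractor would use tuned filter banks or wavelets:

```python
import numpy as np

def band_features(signal, n_bands=8):
    """Sketch of a frequency feature vector: energy of the signal's
    one-sided power spectrum, summed over n_bands roughly equal bands.
    Each component thus depends only on certain frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.sum() for band in bands])

# A low-frequency test tone: its energy lands in the first band.
t = np.linspace(0.0, 1.0, 256)
x = band_features(np.sin(2 * np.pi * 5 * t))
```

The dominant component of `x` is then the band containing the tone's frequency, which is the kind of per-band information the quality vector can later weight.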
[0009] In these cases, the high-frequency components of the data may be more discriminating than the low-frequency components, but also more sensitive to phenomena such as the presence of noise or a lack of signal resolution. The amount of noise in the data can be determined by analyzing its energy spectrum, if the data is a signal acquired by a sensor, or its intrinsic resolution, if the data is an image. For example, a method for determining the resolution of an image is known from the article by Pfennig and Kirchner, "Spectral Methods to Determine the Exact Scaling Factor of Resampled Digital Images", ISCCSP, 2012.
[0010] The quality vector qx, generated as a function of the characteristic vector x and the intrinsic quality of the data, can then be constructed as follows: - a high quality is attributed to the components of the feature vector sensitive to the low-frequency components of the data; - a high quality is attributed to the components of the feature vector sensitive to the high-frequency components of the data and having a low noise level and/or a high resolution; - a low quality is attributed to the components of the feature vector sensitive to the high-frequency components of the data and having a high noise level and/or a low resolution. The attributed quality values and the noise-level or resolution thresholds can be determined experimentally so as to optimize the performance of the comparison method on a validation database.
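The three rules above can be sketched as a small decision function. The quality values (0.9 / 0.2) and the thresholds are illustrative placeholders, since the text leaves them to experimental tuning:

```python
def component_quality(band, noise_level, resolution,
                      noise_max=0.2, resolution_min=100):
    """Heuristic quality for one feature-vector component.

    band        : "low" or "high", the frequency band the component
                  is sensitive to
    noise_level : estimated noise in that band (illustrative scale)
    resolution  : intrinsic resolution of the data (illustrative scale)
    """
    if band == "low":
        return 0.9  # low-frequency components: considered reliable
    if noise_level <= noise_max and resolution >= resolution_min:
        return 0.9  # clean, well-resolved high-frequency component
    return 0.2      # noisy or low-resolution high-frequency component
```

A quality vector is then obtained by applying this function to each component of the feature vector.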
[0011] In another example, the data is an image of a face. According to this example, a feature vector can be obtained, as described in the article by Chen et al., "Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification", CVPR, 2013, by concatenating local descriptors extracted in the vicinity of certain semantic points of the face (e.g. the tip of the nose, the corners of the lips, the eyes, etc.). This representation has the advantage of being more robust to pose variations than methods that extract the descriptors on a regular grid.
[0012] However, the extraction of these characteristics comprises a step of detecting these points. During this step, it is possible to use a detector which, in addition to providing the most probable position of each point of the face in the image, also provides information conveying confidence in the accuracy of the detection. For example, the article by Rapp et al., "Multiple kernel learning SVM and statistical validation for facial landmark detection", Automatic Face & Gesture Recognition, 2011, provides a measure of the distance to the separating hyperplane in the case of a detector based on support vector machines (SVM). Another example is given in the article by Dantone et al., "Real-time Facial Feature Detection Using Conditional Regression Forests", CVPR, 2012, in which a measure of confidence is provided by a number of votes, for a detector using a forest of regression trees. This confidence information can be used to create a quality associated with each component of the feature vector, by attributing to it the quality of the detection of the semantic point of the face to which it corresponds. According to yet another example, when the image of the face is a frontal image generated from an image in which the face is not facing the camera, for example by applying the method described in application No. FR 2 998 402, the quality vector can be a confidence index, this index being relatively higher for points of the face appearing in the original image, and relatively lower for points of the face not appearing in the original image and reconstructed by extrapolation.
[0013] More generally, when the data is an image, the quality vector can be obtained by a local measure of blur. Alternatively, the feature extractor module is a module of the acquisition unit 30, allowing the acquisition unit to communicate a feature vector and an associated quality vector directly to the processing means 11. Data comparison method The data comparison method implemented by the processing means 11 of the processing unit will now be described with reference to FIG. 2. This method comprises the comparison of two data, by calculating a similarity function 100 between two feature vectors x and y of the same dimension respectively obtained from the data, and by performing an automatic learning 200 of the parameters of the similarity function on a database. In this method, each feature vector is modeled as the sum of three independent Gaussian variables: x = μ + ω + ε where: - μ is the average of the class to which the vector x belongs, - ω is the intrinsic deviation of the vector x with respect to this average, and - ε is the observation noise.
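The generative model x = μ + ω + ε can be simulated numerically as follows. All covariance values here are arbitrary choices made for the sketch, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # feature dimension (illustrative)

# Hypothetical covariances of the three independent Gaussian terms.
S_mu = np.diag([2.0, 1.5, 1.0, 0.5])  # class-mean covariance
S_w = 0.3 * np.eye(N)                 # intrinsic-deviation covariance
S_e = 0.1 * np.eye(N)                 # observation-noise covariance

def sample_vector(mu_class):
    """Draw x = mu + w + e for a given class mean mu_class."""
    w = rng.multivariate_normal(np.zeros(N), S_w)
    e = rng.multivariate_normal(np.zeros(N), S_e)
    return mu_class + w + e

mu_c = rng.multivariate_normal(np.zeros(N), S_mu)  # one mean per class
x = sample_vector(mu_c)  # two observations of the same class:
y = sample_vector(mu_c)  # they share mu_c but differ in w and e
```

Under this model the marginal covariance of a feature vector is S_mu + S_w + S_e, which is what the similarity function exploits.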
[0014] A class is a set of feature vectors that are considered similar. Two characteristic vectors are considered similar if their comparison by the similarity function produces a result greater than a threshold, this threshold being determined empirically. For example, in the case where the data are images of faces, a class advantageously corresponds to an individual. Thus, when the feature vectors of two data are compared, the data are considered similar if they come from the same individual. Returning to the model introduced previously, two characteristic vectors belonging to the same class thus have an identical value of μ, but different values of ω and ε. In the case where the feature vectors belong to different classes, the three variables are completely independent.
[0015] These three variables are considered to follow a multivariate normal distribution centered on 0, and their respective covariance matrices are denoted S_μ, S_ω and S_ε. S_μ and S_ω are unknowns that are common to all feature vectors. S_ε, on the other hand, is known, because it is obtained from the quality vector associated with the feature vector by the feature extractor module. It is specific to each feature vector. For example, assuming that the observation noises are not correlated with each other, S_ε can be well approximated by a diagonal matrix. The elements of this diagonal matrix, corresponding to the variances of the components of the observation noise, can be obtained from the quality vector. For example, the variance can be imposed by applying to the components of the quality vector qx a sigmoid function of the type f(qx) = 1 / (1 + e^(a·qx + b)). The coefficients a and b can be chosen to associate a given level of variance with a quality level. For example, a high quality may be associated with a near-zero variance, a very low quality with a maximum variance, and intermediate qualities with intermediate variances.
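This mapping from quality vector to diagonal noise covariance can be sketched as follows. The coefficients a, b and the maximum variance are illustrative assumptions, to be tuned on a validation set as the text suggests:

```python
import numpy as np

def noise_covariance(q, a=6.0, b=-3.0, var_max=2.0):
    """Map a quality vector q (values in [0, 1]) to a diagonal
    observation-noise covariance matrix S_eps.

    The sigmoid f(q) = 1 / (1 + exp(a*q + b)) sends high qualities
    toward zero variance and low qualities toward var_max."""
    q = np.asarray(q, dtype=float)
    variances = var_max / (1.0 + np.exp(a * q + b))
    return np.diag(variances)

# High, intermediate and low quality components:
S_eps = noise_covariance([1.0, 0.5, 0.0])
```

With these coefficients, quality 0.5 maps to a variance of exactly var_max / 2, and the variance decreases monotonically as the quality increases.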
[0016] In the following, S_εx denotes the covariance matrix of the observation noise of the vector x, obtained from the quality vector qx, and S_εy the covariance matrix of the observation noise of the vector y, obtained from the quality vector qy. We also denote H_sim the hypothesis that two feature vectors belong to the same class, i.e. that the corresponding data are considered to be similar, and H_dis the converse hypothesis that the feature vectors belong to distinct classes, i.e. that the corresponding data are considered dissimilar. The joint probability of generating x and y, knowing their respective observation-noise covariance matrices and under the hypothesis H_sim, is denoted P(x, y | H_sim, S_εx, S_εy). This probability follows a Gaussian law with zero mean and covariance matrix S_sim: P(x, y | H_sim, S_εx, S_εy) = N([x^T y^T]^T | 0, S_sim). The joint probability of generating x and y, knowing their respective observation-noise covariance matrices and under the hypothesis H_dis, is denoted P(x, y | H_dis, S_εx, S_εy). This probability follows a Gaussian law with zero mean and covariance matrix S_dis: P(x, y | H_dis, S_εx, S_εy) = N([x^T y^T]^T | 0, S_dis). The matrices S_sim and S_dis are defined as follows:

S_sim = [ S_μ + S_ω + S_εx      S_μ
          S_μ                   S_μ + S_ω + S_εy ]

S_dis = [ S_μ + S_ω + S_εx      0
          0                     S_μ + S_ω + S_εy ]

The probability density of P(x, y | H_sim, S_εx, S_εy) is, in a known way, exp(-(1/2) [x^T y^T] S_sim^{-1} [x^T y^T]^T) / ((2π)^N |S_sim|^{1/2}), where |S_sim| is the determinant of S_sim, and N is the dimension of a characteristic vector.
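The block construction of S_sim and S_dis, and the evaluation of the joint log-density, can be sketched as follows (the scalar covariance values in the usage are arbitrary, chosen only to exercise the functions):

```python
import numpy as np

def joint_covariances(S_mu, S_w, S_ex, S_ey):
    """Covariances of the stacked vector [x; y] under H_sim (same
    class, off-diagonal blocks S_mu) and H_dis (distinct classes,
    zero off-diagonal blocks)."""
    Sx = S_mu + S_w + S_ex
    Sy = S_mu + S_w + S_ey
    Z = np.zeros_like(S_mu)
    S_sim = np.block([[Sx, S_mu], [S_mu, Sy]])
    S_dis = np.block([[Sx, Z], [Z, Sy]])
    return S_sim, S_dis

def log_density(z, S):
    """log N(z | 0, S) for the stacked 2N-dimensional vector z."""
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (z @ np.linalg.solve(S, z)
                   + logdet + len(z) * np.log(2 * np.pi))

# One-dimensional feature vectors, so the blocks are scalars:
S_sim, S_dis = joint_covariances(np.array([[2.0]]), np.array([[0.5]]),
                                 np.array([[0.1]]), np.array([[0.3]]))
```

Note that [x; y] has dimension 2N, so the normalization constant of the density is (2π)^N |S|^{1/2}, consistent with the expression above.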
[0017] The same expression applies mutatis mutandis for the probability density of P(x, y | H_dis, S_εx, S_εy). The similarity function computed for comparing the two data corresponding to the vectors x and y is the logarithm of the ratio between the probability density of the characteristic vectors, with the vectors belonging to the same class, and the probability density of the characteristic vectors, with the vectors belonging to two distinct classes. The similarity function is thus expressed as follows:

LR(x, y | S_εx, S_εy) = log( P(x, y | H_sim, S_εx, S_εy) / P(x, y | H_dis, S_εx, S_εy) )

Using the expression of the probability density, and, when developing the function, using the block inversion formula to invert the S_sim and S_dis matrices, the similarity function obtained is expressed as follows:

LR(x, y | S_εx, S_εy) = (1/2) [ x^T ((S_μ + S_ω + S_εx)^{-1} - A) x + y^T ((S_μ + S_ω + S_εy)^{-1} - C) y + 2 x^T B y + log|S_μ + S_ω + S_εx| + log|A| ] + constant

In this expression, A, B and C are terms resulting from the block inversion of S_sim and are respectively expressed as follows:

A = (S_μ + S_ω + S_εx - S_μ (S_μ + S_ω + S_εy)^{-1} S_μ)^{-1}
B = A S_μ (S_μ + S_ω + S_εy)^{-1}
C = (S_μ + S_ω + S_εy)^{-1} + (S_μ + S_ω + S_εy)^{-1} S_μ A S_μ (S_μ + S_ω + S_εy)^{-1}

The constant does not depend on x, y, S_εx or S_εy and can be ignored. It is therefore found that the similarity function LR takes into account the covariance matrices S_εx and S_εy of the observation noises of x and y, and therefore the quality vector associated with each feature vector. Thus, the result of the comparison is impacted by the quality, or confidence, associated with a characteristic vector, which makes it possible to give a lower weight to a characteristic considered to be of poor quality or uncertain, and a greater weight to a characteristic of good quality or with greater confidence. As will be seen in the following, this similarity function is further parameterized by automatic learning.
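The closed form of the log-ratio can be checked numerically against the direct difference of the two joint Gaussian log-densities, using the block-inversion terms A, B and C of S_sim (A is the Schur-complement inverse, B the off-diagonal block up to sign, C the lower-right block). The matrices below are random, well-conditioned stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)  # symmetric positive definite

N = 3
S_mu, S_w = random_spd(N), random_spd(N)
S_ex = np.diag(rng.uniform(0.1, 1.0, N))  # per-vector noise covariances
S_ey = np.diag(rng.uniform(0.1, 1.0, N))

Sx = S_mu + S_w + S_ex
Sy = S_mu + S_w + S_ey
Sy_inv = np.linalg.inv(Sy)

# Blocks from the inversion of S_sim (Schur complement with respect to Sy).
A = np.linalg.inv(Sx - S_mu @ Sy_inv @ S_mu)
B = A @ S_mu @ Sy_inv
C = Sy_inv + Sy_inv @ S_mu @ A @ S_mu @ Sy_inv

def lr_closed_form(x, y):
    """Similarity LR(x, y | S_ex, S_ey) in closed form (additive
    constant omitted; here it is exactly zero)."""
    _, logdet_Sx = np.linalg.slogdet(Sx)
    _, logdet_A = np.linalg.slogdet(A)
    return 0.5 * (x @ (np.linalg.inv(Sx) - A) @ x
                  + y @ (Sy_inv - C) @ y
                  + 2 * x @ B @ y
                  + logdet_Sx + logdet_A)

def lr_direct(x, y):
    """Reference value: log P(x,y|H_sim,...) - log P(x,y|H_dis,...)."""
    z = np.concatenate([x, y])
    Z0 = np.zeros((N, N))
    S_sim = np.block([[Sx, S_mu], [S_mu, Sy]])
    S_dis = np.block([[Sx, Z0], [Z0, Sy]])
    def logN(S):
        _, ld = np.linalg.slogdet(S)
        return -0.5 * (z @ np.linalg.solve(S, z) + ld
                       + 2 * N * np.log(2 * np.pi))
    return logN(S_sim) - logN(S_dis)

x, y = rng.normal(size=N), rng.normal(size=N)
```

Both routes give the same value, which is a convenient unit test when implementing the method.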
However, taking into account the quality associated with a feature vector makes it possible to minimize the impact of poor-quality data on the parameterization of the function. The comparison method is therefore more reliable. Returning to FIG. 2, the result of the comparison is then compared, during a step 110, with a determined threshold. This threshold is advantageously established empirically by implementing a large number of comparisons on known feature vectors of a database (for which it is known whether or not they belong to the same class). If the result of the similarity function applied to x and y is greater than the determined threshold, the corresponding data are considered to be similar. Otherwise, the data are considered dissimilar. The expression of the similarity function LR indicated previously shows that this function is parameterized by the covariance matrices S_μ and S_ω, which are unknown. Therefore, the method comprises a step 200 of determining said matrices by automatic learning. This step is advantageously implemented by an algorithm of the expectation-maximization type (known in English under the acronym EM, for "Expectation-Maximization algorithm") and is carried out on a set of data stored in the database 20, these data being said to be "labeled", that is to say that the respective classes to which they belong are known.
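The empirical determination of the decision threshold from labeled comparisons can be sketched as follows. The accuracy-maximizing selection rule is one illustrative choice among others (a target false-acceptance rate would work the same way):

```python
import numpy as np

def choose_threshold(scores, same_class):
    """Pick, among the observed scores, the threshold that maximizes
    accuracy on labeled pairs.

    scores     : similarity values LR(x, y) for known pairs
    same_class : booleans, True when the pair shares a class
    """
    scores = np.asarray(scores, dtype=float)
    same_class = np.asarray(same_class, dtype=bool)
    best_t, best_acc = scores.min(), -1.0
    for t in np.sort(scores):
        # Predict "similar" when the score reaches the threshold.
        acc = np.mean((scores >= t) == same_class)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy labeled comparisons: two similar pairs, two dissimilar pairs.
t = choose_threshold([3.1, 2.5, -0.4, -1.2], [True, True, False, False])
```

On this toy set the selected threshold separates the two populations perfectly.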
We denote by c a class to which a number m_c of characteristic vectors belong, X_c = [x_{c,1}^T, ..., x_{c,m_c}^T]^T the concatenation of the characteristic vectors of the class, and S_{εc,1}, ..., S_{εc,m_c} their respective observation-noise covariance matrices. We define the latent variables Z_c = [μ_c^T, ω_{c,1}^T, ..., ω_{c,m_c}^T]^T for each class c, in which μ_c is the average of the class (so there exists only one per class), and each ω_{c,i} is the deviation of a characteristic vector of the class with respect to this average (there is therefore one per characteristic vector). The parameter to be estimated by the EM algorithm is θ = {S_μ, S_ω}. The expectation-maximization algorithm is an iterative algorithm comprising a first step 210 of estimating the parameters of the latent variable distribution P(Z_c | X_c, θ̄), where θ̄ is the previous estimate of the parameters. At the initialization of the method, θ̄ is a first empirical estimate of the parameters.
[0018] The initialization of the parameter S_μ is advantageously implemented by calculating, for each class c, the empirical average of the class, then by determining the covariance matrix of these means.
[0019] The initialization of the parameter S_ω can be implemented by calculating, for each class, the covariance matrix of the characteristic vectors from which the class mean is subtracted (that is to say, of the deviations of the characteristic vectors with respect to the mean), then calculating the average of these covariance matrices over all the classes. Then, the algorithm comprises a step 220 of maximization, with respect to θ, of the expectation of the logarithmic probability over the latent variables Z_c:

Q(θ, θ̄) = Σ_c ∫ P(Z_c | X_c, θ̄) log P(X_c, Z_c | θ) dZ_c

To carry out this step while minimizing the computation time, we take into account the fact that the latent variables ω_{c,i} are conditionally independent for fixed μ_c, by factoring P(Z_c | X_c, S_μ, S_ω) as follows:

P(Z_c | X_c, S_μ, S_ω) = P(μ_c | X_c, S_μ, S_ω) Π_{i=1}^{m_c} P(ω_{c,i} | x_{c,i}, μ_c, S_μ, S_ω)

The optimization of Q(θ, θ̄) in step 220 requires the calculation of the parameters of the probability distributions P(μ_c | X_c, S_μ, S_ω) and P(ω_{c,i} | x_{c,i}, μ_c, S_μ, S_ω). These calculations are detailed below:

P(μ_c | X_c, S_μ, S_ω) ∝ P(X_c | μ_c) P(μ_c)
= Π_{i=1}^{m_c} N(x_{c,i} | μ_c, S_ω + S_εc,i) · N(μ_c | 0, S_μ)
∝ exp( -(1/2) ( μ_c^T S_μ^{-1} μ_c + Σ_{i=1}^{m_c} (μ_c - x_{c,i})^T (S_ω + S_εc,i)^{-1} (μ_c - x_{c,i}) ) )
∝ exp( -(1/2) (μ_c - b_μc)^T T_μc^{-1} (μ_c - b_μc) )     (1)

where:

T_μc = ( S_μ^{-1} + Σ_{i=1}^{m_c} (S_ω + S_εc,i)^{-1} )^{-1}
b_μc = T_μc Σ_{i=1}^{m_c} (S_ω + S_εc,i)^{-1} x_{c,i}

The combination of equation (1) and the fact that P(μ_c | X_c, S_μ, S_ω) is a probability distribution implies that P(μ_c | X_c, S_μ, S_ω) = N(μ_c | b_μc, T_μc). Moreover,

P(ω_{c,i} | X_c, S_μ, S_ω) = ∫ P(ω_{c,i} | x_{c,i}, μ_c, S_μ, S_ω) P(μ_c | X_c, S_μ, S_ω) dμ_c = N(ω_{c,i} | b_ωc,i, T_ωc,i)

where, with R_{c,i} = (S_εc,i^{-1} + S_ω^{-1})^{-1}:

b_ωc,i = R_{c,i} S_εc,i^{-1} (x_{c,i} - b_μc)
T_ωc,i = R_{c,i} + R_{c,i} S_εc,i^{-1} T_μc S_εc,i^{-1} R_{c,i}

Thus, step 220 consists in maximizing, with respect to S_μ and S_ω:

Q(θ, θ̄) = Σ_c ∫ P(Z_c | X_c, θ̄) ( Σ_{i=1}^{m_c} log P(x_{c,i} | μ_c, ω_{c,i}) + log P(μ_c | S_μ) + Σ_{i=1}^{m_c} log P(ω_{c,i} | S_ω) ) dZ_c

This is done by calculating the gradients and solving ∂Q(θ, θ̄)/∂S_μ = 0 and ∂Q(θ, θ̄)/∂S_ω = 0.
The resolution of these two equations yields the update formulas:

S_μ = (1 / N_classes) Σ_c ( T_μc + b_μc b_μc^T )
S_ω = (1 / Σ_c m_c) Σ_c Σ_{i=1}^{m_c} ( T_ωc,i + b_ωc,i b_ωc,i^T )

where N_classes is the number of classes. The expectation-maximization algorithm is implemented iteratively, by successively calculating in step 210 the variables T_μc, b_μc, T_ωc,i and b_ωc,i, then adapting the values of S_μ and S_ω, and thus of θ, during step 220, until convergence. At each new iteration of step 210, the new value of θ obtained in the preceding step 220 is reused.
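One EM iteration (E-step posteriors for μ_c and ω_{c,i}, then the M-step updates of S_μ and S_ω) can be sketched as follows. This is a numerically naive illustration with explicit matrix inverses, run on synthetic labeled data with known ground-truth covariances; all numeric values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def em_step(classes, S_mu, S_w):
    """One expectation-maximization iteration for x = mu + w + e.

    classes : list of (X, S_eps_list) pairs, one per labeled class,
              where X is an (m_c, n) array of feature vectors and
              S_eps_list holds the known per-vector noise covariances.
    Returns the updated (S_mu, S_w)."""
    n = S_mu.shape[0]
    S_mu_inv, S_w_inv = np.linalg.inv(S_mu), np.linalg.inv(S_w)
    acc_mu, acc_w = np.zeros((n, n)), np.zeros((n, n))
    n_vectors = 0
    for X, S_eps_list in classes:
        # E-step: posterior N(b_mu, T_mu) of the class mean mu_c ...
        invs = [np.linalg.inv(S_w + Se) for Se in S_eps_list]
        T_mu = np.linalg.inv(S_mu_inv + sum(invs))
        b_mu = T_mu @ sum(inv @ x for inv, x in zip(invs, X))
        acc_mu += T_mu + np.outer(b_mu, b_mu)
        # ... and N(b_w, T_w) of each deviation w_{c,i}.
        for x, Se in zip(X, S_eps_list):
            Se_inv = np.linalg.inv(Se)
            R = np.linalg.inv(Se_inv + S_w_inv)
            b_w = R @ Se_inv @ (x - b_mu)
            T_w = R + R @ Se_inv @ T_mu @ Se_inv @ R
            acc_w += T_w + np.outer(b_w, b_w)
            n_vectors += 1
    # M-step: re-estimate the two shared covariance matrices.
    return acc_mu / len(classes), acc_w / n_vectors

# Synthetic labeled database drawn from known covariances.
N = 2
S_mu_true, S_w_true = np.diag([2.0, 1.0]), 0.5 * np.eye(N)
S_eps = 0.1 * np.eye(N)  # same known noise covariance for every vector
classes = []
for _ in range(300):
    mu = rng.multivariate_normal(np.zeros(N), S_mu_true)
    X = np.array([mu + rng.multivariate_normal(np.zeros(N), S_w_true)
                     + rng.multivariate_normal(np.zeros(N), S_eps)
                  for _ in range(6)])
    classes.append((X, [S_eps] * 6))

S_mu_est, S_w_est = np.eye(N), np.eye(N)  # crude initialization
for _ in range(30):
    S_mu_est, S_w_est = em_step(classes, S_mu_est, S_w_est)
```

With enough labeled classes the iteration recovers covariances close to the generating ones, which is a useful sanity check for an implementation.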
Claims:
Claims (12)
[0001]
CLAIMS 1. Method for comparing two data obtained from a sensor or an interface (30), implemented by processing means (11) of a processing unit (10), the method comprising the calculation (100) of a similarity function between two characteristic vectors (x, y) of the data to be compared, characterized in that each characteristic vector (x, y) of a datum is modeled as a sum of Gaussian variables, said variables comprising: an average (μ) of a class of membership of the vector, an intrinsic deviation (ω) and an observation noise (ε) of the vector, each feature vector (x, y) being associated with a vector of qualities (qx, qy) including information on the observation noise of the feature vector, and in that the similarity function is computed from the feature vectors and the associated quality vectors.
[0002]
2. A comparison method according to claim 1, wherein the similarity function (LR) is the logarithm of the ratio between the probability density P(x, y | H_sim, S_εx, S_εy) of the feature vectors, with the vectors belonging to the same class, and the probability density P(x, y | H_dis, S_εx, S_εy) of the feature vectors, with the vectors belonging to two distinct classes.
[0003]
3. Comparison method according to one of claims 1 or 2, wherein the similarity function (LR) is further calculated as a function of the covariance matrices (S_μ, S_ω, S_εx, S_εy) of the components of the feature vectors, and the covariance matrix of the observation noise (S_εx, S_εy) of each feature vector is obtained as a function of the associated quality vector.
[0004]
4. Comparison method according to claim 3, further comprising the implementation (200) of a learning algorithm for the determination of the covariance matrices (S_μ, S_ω) of the means of the membership classes of the vectors and of the deviations of the vectors relative to the class means.
[0005]
5. Comparison method according to claim 4, wherein the learning algorithm is an expectation-maximization algorithm.
[0006]
6. Comparison method according to one of the preceding claims, wherein the similarity function is given by the formula:

LR(x, y | S_εx, S_εy) = (1/2) [ x^T ((S_μ + S_ω + S_εx)^{-1} - A) x + y^T ((S_μ + S_ω + S_εy)^{-1} - C) y + 2 x^T B y + log|S_μ + S_ω + S_εx| + log|A| ] + constant

where

A = (S_μ + S_ω + S_εx - S_μ (S_μ + S_ω + S_εy)^{-1} S_μ)^{-1}
B = A S_μ (S_μ + S_ω + S_εy)^{-1}
C = (S_μ + S_ω + S_εy)^{-1} + (S_μ + S_ω + S_εy)^{-1} S_μ A S_μ (S_μ + S_ω + S_εy)^{-1}

and where S_μ is the covariance matrix of the class means, S_ω is the covariance matrix of the deviations from an average, and S_εx and S_εy are the covariance matrices of the observation noises of the vectors x and y respectively.
[0007]
7. A method of comparison according to one of the preceding claims, wherein the computer data from sensors or interfaces are data representative of physical objects or physical quantities.
[0008]
8. A method of comparison according to claim 7, wherein the computer data from sensors or interfaces are images, and the feature vectors are obtained by applying at least one filter to the images.
[0009]
9. A method of comparison according to one of the preceding claims, wherein the components of the quality vector are generated according to the type of data and the nature of the characteristics composing the feature vector.
[0010]
10. A method of comparison according to one of the preceding claims, further comprising comparing (110) the result of the calculation of the similarity function (LR) with a threshold to determine whether the data belong to a common class.
[0011]
11. Computer program product, comprising code instructions for implementing the method according to one of the preceding claims, when it is executed by processing means (11) of a processing unit (10).
[0012]
12. A system (1) comprising: - a database (20) comprising a plurality of so-called labeled data, - a data acquisition unit (30), and - a processing unit (10), comprising processing means (11) adapted to construct, from two data, two feature vectors (x, y) and two associated quality vectors (qx, qy), said processing unit being further adapted to compare the data by carrying out the comparison method according to one of claims 1 to 10.
Similar technologies:
Publication number | Publication date | Patent title
EP3018615B1|2019-02-27|Improved data-comparison method
US7269560B2|2007-09-11|Speech detection and enhancement using audio/video fusion
EP3608834A1|2020-02-12|Method for analysing a fingerprint
EP2140401B1|2011-05-25|Method of comparing images, notably for iris recognition, implementing at least one quality measurement determined by applying a statistical learning model
EP1864242A1|2007-12-12|Method of identifying faces from face images and corresponding device and computer program
EP2751739B1|2015-03-04|Detection of fraud for access control system of biometric type
FR2998402A1|2014-05-23|METHOD FOR GENERATING A FACE MODEL IN THREE DIMENSIONS
EP2754088B1|2014-12-17|Identification through iris recognition
FR2965377A1|2012-03-30|METHOD FOR CLASSIFYING BIOMETRIC DATA
Memon et al.2017|Audio-visual biometric authentication for secured access into personal devices
FR3005777A1|2014-11-21|METHOD OF VISUAL VOICE RECOGNITION WITH SELECTION OF GROUPS OF POINTS OF INTEREST THE MOST RELEVANT
FR3082645A1|2019-12-20|METHOD FOR LEARNING PARAMETERS OF A CONVOLUTION NEURON NETWORK
EP3842969A1|2021-06-30|Method and system for biometric identification and authentication with audiovisual template
Usoltsev et al.2016|Full Video Processing for Mobile Audio-Visual Identity Verification.
Çakmak et al.2019|Audio Captcha Recognition Using RastaPLP Features by SVM
Debnath et al.2020|User Authentication System Based on Speech and Cascade Hybrid Facial Feature
Nainan et al.2019|Synergy in voice and lip movement for automatic person recognition
EP3866064A1|2021-08-18|Method for authentication or identification of an individual
Carrasco et al.2007|Bimodal biometric person identification system under perturbations
Elmir et al.2013|A hierarchical fusion strategy based multimodal biometric system
Debnath et al.2019|Multi-modal authentication system based on audio-visual data
FR3106678A1|2021-07-30|Biometric processing including a penalty for a match score
Hiremath et al.2014|RADON transform and PCA based 3D face recognition using KNN and SVM
EP2856391B1|2019-03-27|Method for detecting drop shadows on an initial image
Dandu et al.2017|2D Spectral Subtraction for Noise Suppression in Fingerprint Images
Family patents:
Publication number | Publication date
CN105590020B|2021-04-20|
EP3018615A1|2016-05-11|
KR20160053833A|2016-05-13|
FR3028064B1|2016-11-04|
EP3018615B1|2019-02-27|
JP6603548B2|2019-11-06|
US10332016B2|2019-06-25|
US20160125308A1|2016-05-05|
JP2016091566A|2016-05-23|
CN105590020A|2016-05-18|
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title

US8463718B2|2000-08-07|2013-06-11|Health Discovery Corporation|Support vector machine-based method for analysis of spectral data|
US6782362B1|2000-04-27|2004-08-24|Microsoft Corporation|Speech recognition method and apparatus utilizing segment models|
US7171356B2|2002-06-28|2007-01-30|Intel Corporation|Low-power noise characterization over a distributed speech recognition channel|
US8868555B2|2006-07-31|2014-10-21|Ricoh Co., Ltd.|Computation of a recongnizability score for image retrieval|
US7822605B2|2006-10-19|2010-10-26|Nice Systems Ltd.|Method and apparatus for large population speaker identification in telephone interactions|
US8126197B2|2007-11-29|2012-02-28|Certifi-Media Inc.|Method for image quality assessment using quality vectors|
US7885794B2|2007-11-30|2011-02-08|Xerox Corporation|Object comparison, retrieval, and categorization methods and apparatuses|
CN102236788B|2010-04-20|2015-09-02|荣科科技股份有限公司|Power meter automatic distinguishing method for image|
CN102354389B|2011-09-23|2013-07-31|河海大学|Visual-saliency-based image non-watermark algorithm and image copyright authentication method|
US8548497B2|2011-12-16|2013-10-01|Microsoft Corporation|Indoor localization using commercial frequency-modulated signals|
CN102693321A|2012-06-04|2012-09-26|常州南京大学高新技术研究院|Cross-media information analysis and retrieval method|
CN102880861B|2012-09-05|2015-05-27|西安电子科技大学|High-spectrum image classification method based on linear prediction cepstrum coefficient|
US20140107521A1|2012-10-12|2014-04-17|Case Western Reserve University|Functional brain connectivity and background noise as biomarkers for cognitive impairment and epilepsy|
FR2998402B1|2012-11-20|2014-11-14|Morpho|METHOD FOR GENERATING A FACE MODEL IN THREE DIMENSIONS|
EP2924543B1|2014-03-24|2019-12-04|Tata Consultancy Services Limited|Action based activity determination system and method|
CN103886106B|2014-04-14|2017-02-22|Beijing University of Technology|Remote sensing image safe-retrieval method based on spectral feature protection|
US9405965B2|2014-11-07|2016-08-02|Noblis, Inc.|Vector-based face recognition algorithm and image search system|
CN106803054B|2015-11-26|2019-04-23|腾讯科技(深圳)有限公司|Faceform's matrix training method and device|
US10255703B2|2015-12-18|2019-04-09|Ebay Inc.|Original image generation system|
US11120069B2|2016-07-21|2021-09-14|International Business Machines Corporation|Graph-based online image queries|
US10482336B2|2016-10-07|2019-11-19|Noblis, Inc.|Face recognition and image search system using sparse feature vectors, compact binary vectors, and sub-linear search|
JP6477943B1|2018-02-27|2019-03-06|オムロン株式会社|Metadata generation apparatus, metadata generation method and program|
WO2021125616A1|2019-12-19|2021-06-24|이향룡|Method for detecting object data for training for and application of ai, and system for same|
Legal status:
2015-10-23| PLFP| Fee payment|Year of fee payment: 2 |
2016-05-06| PLSC| Publication of the preliminary search report|Effective date: 20160506 |
2016-10-24| PLFP| Fee payment|Year of fee payment: 3 |
2017-10-20| PLFP| Fee payment|Year of fee payment: 4 |
2018-10-24| PLFP| Fee payment|Year of fee payment: 5 |
2019-10-22| PLFP| Fee payment|Year of fee payment: 6 |
2020-10-21| PLFP| Fee payment|Year of fee payment: 7 |
2021-10-20| PLFP| Fee payment|Year of fee payment: 8 |
Priority:
Application number | Application date | Patent title
FR1460690A|FR3028064B1|2014-11-05|2014-11-05|IMPROVED DATA COMPARISON METHOD|
US14/931,606| US10332016B2|2014-11-05|2015-11-03|Comparison of feature vectors of data using similarity function|
EP15192756.3A| EP3018615B1|2014-11-05|2015-11-03|Improved data-comparison method|
JP2015216665A| JP6603548B2|2014-11-05|2015-11-04|Improved data comparison method|
CN201510747116.6A| CN105590020B|2014-11-05|2015-11-05|Improved data comparison method|
KR1020150155232A| KR20160053833A|2014-11-05|2015-11-05|Improved data comparison method|