Patent abstract:
The description relates in particular to a method of temporal denoising of an image sequence using at least two different types of sensors. The description also relates to an optronic device, a computer program and a storage medium for implementing such a method.
Publication number: FR3018147A1
Application number: FR1451719
Filing date: 2014-03-03
Publication date: 2015-09-04
Inventors: Gregoire Bezot; Joel Budin
Applicant: Sagem Defense Securite SA
IPC main class:
Patent description:

[0001] The invention relates to optimized video denoising techniques. A heterogeneous multisensor imaging system is a video capture system having at least two different types of sensors (the system is a bi-sensor system if it has two sensors). A video sequence is considered to be a sequence of images taken at given times. Under difficult imaging conditions, for example in low light for a day sensor or, for example, in wet weather for a band 3 thermal sensor, it is useful to perform video denoising in order to improve the signal-to-noise ratio of the images, and in particular to perform a temporal denoising. A temporal denoising technique takes advantage of a temporal a priori on the video signal. A temporal denoising is very easily applicable to a video captured by a still sensor filming a stationary scene. Indeed, the signal can simply be regularized along the time axis of the video (for example using a sliding average over a number of consecutive images). However, in real-life situations, the sensor and/or the scene are often in motion (whether displacements or vibrations, for example), and a simple temporal regularization generates blur in the denoised video (because the various averaged elements do not necessarily all correspond to the same element of the scene). For example, a sensor in a moving vehicle can film a scene in which other vehicles are moving. It is possible to use more complex temporal denoising techniques relying, for example, on motion estimation techniques, which aim to allow an alignment of the pixels of the images of the video on a common reference frame, and thus to allow a temporal regularization to be performed without generating blur. In the state of the art of temporal denoising techniques, the motion analysis phase and the temporal regularization phase (which corresponds to an average of several images) are distinguished, the former being used to compensate for the fact that it is not possible to average images if the sensor obtaining these images (and/or all or part of the images themselves) is in motion.
[0002] However, state-of-the-art techniques show their limitations in the context of low signal-to-noise ratios. Indeed, under these conditions, the motion analysis is highly compromised, and it is very difficult to denoise while preserving the useful signal. In such situations, the parameterization of these techniques is difficult: depending on the chosen parameters, either the technique is completely ineffective or it degrades the useful signal very strongly. The invention therefore aims to improve the situation.
[0003] One aspect of the invention relates to a method of temporal denoising of an image sequence, said method comprising: /a/ a capture, using a first sensor, of a sequence of first images corresponding to a given scene, each first image being divided into elements each associated with a corresponding sector of said first image, /b/ a capture, with the aid of a second sensor of a type different from the type of the first sensor, of a sequence of second images corresponding to said given scene, each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding sector of said second image, each pair of element and associated sector of the second image corresponding to a pair of element and associated sector of the corresponding first image, /c/ obtaining, by a computing circuit, a first sequence of images resulting from the sequence of first images and a second sequence of images resulting from the sequence of second images, /d/ obtaining, by a calculation circuit, for each sector of each of the images of the first and second image sequences, an associated weighting, /e/ obtaining, by a calculation circuit, a first weighted sequence of images, each element of each image being equal to the corresponding element of the first image sequence weighted by the weighting associated with the sector associated with said corresponding element, and a second weighted sequence of images, each element of each image being equal to the corresponding element of the second image sequence weighted by the weighting associated with the sector associated with said corresponding element, /f/ obtaining, by a computing circuit, an improved image sequence resulting from a combination of image sequences comprising the first weighted sequence of images and the second weighted sequence of images, /g/ obtaining, by a computing circuit, a motion estimation on the basis of the obtained improved image sequence, /h/ obtaining, by a calculation circuit, according to the obtained motion estimation, a spatial alignment of the images of a sequence of images to be displayed resulting from image sequences corresponding to the given scene and comprising the sequence of first images and the sequence of second images, /i/ a temporal denoising, by a calculation circuit, according to the determined spatial alignment, of the sequence of images to be displayed. This method is advantageous in particular in that it combines the information from at least two sensors, and in that it thus makes it possible to optimize the video denoising by performing a motion estimation on the basis of the best available information. Since the two sensors are of different types, it can be expected that, depending on the case, one or the other is the more appropriate, and the method thus makes it possible to take the best of the two sensors according to the context. The method thus performs a temporal denoising on the basis of a dynamic processing of the images based on their actual content (and not only on a priori assumptions concerning the theoretical content of the images). For example, in the case of sensors operating in different spectral bands, the "fusion" of heterogeneous information coming from the two spectral bands makes it possible to optimize the video denoising with respect to a denoising performed on each of the two spectral bands taken separately, using conventional techniques.
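By way of illustration, here is a minimal, runnable sketch of steps /c/ to /i/ above, assuming two already registered sequences of identical resolution and frame rate stored as NumPy arrays of shape (T, H, W). The sector size, the sensor noise standard deviations sigma_a and sigma_b, the pure global-translation motion model (estimated here by phase correlation) and the sliding temporal mean are simplifying assumptions chosen for readability, not the implementations prescribed by the description; all function names are illustrative.

```python
import numpy as np

def sliding_variance(img, n=5, p=5):
    """Spatial variance of the (2n+1) x (2p+1) sector centred on each pixel
    (sectors truncated at the image edges)."""
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(0, y - p):y + p + 1,
                            max(0, x - n):x + n + 1].var()
    return out

def global_shift(ref, img):
    """Integer translation (dy, dx) such that img is ref shifted by (dy, dx),
    estimated here by phase correlation (one possible global motion estimator)."""
    r = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(r / (np.abs(r) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def denoise_bisensor(A, B, sigma_a, sigma_b, display, win=5):
    """A, B, display: (T, H, W) arrays; sigma_a, sigma_b: sensor noise standard
    deviations; returns the temporally denoised display sequence."""
    T = A.shape[0]
    improved = np.empty(A.shape)
    for t in range(T):                                    # steps /c/ to /f/
        va = 1.0 + sliding_variance(A[t]) / sigma_a ** 2
        vb = 1.0 + sliding_variance(B[t]) / sigma_b ** 2
        improved[t] = (va * A[t] + vb * B[t]) / (va + vb)
    shifts = [global_shift(improved[0], improved[t]) for t in range(T)]   # /g/
    # /h/: align the display sequence on frame 0 (wrap-around at borders ignored for brevity)
    aligned = np.stack([np.roll(display[t].astype(float), (-dy, -dx), axis=(0, 1))
                        for t, (dy, dx) in enumerate(shifts)])
    out = np.empty(aligned.shape)
    for t in range(T):                                    # step /i/: sliding temporal mean
        out[t] = aligned[max(0, t - win + 1):t + 1].mean(axis=0)
    return out
```

The per-pixel loops are kept for readability; a real implementation would vectorize them or run on dedicated hardware (FPGA, ASIC, etc.), as discussed further below.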
Another aspect of the invention relates to a computer program comprising a sequence of instructions which, when executed by a processor, lead the processor to implement a method according to the aforementioned aspect of the invention. Another aspect of the invention relates to a computer-readable, non-transitory storage medium, said medium storing a computer program according to the aforementioned aspect of the invention. Another aspect of the invention relates to an optronic device for temporal denoising of a sequence of images, said optronic device comprising: a first sensor arranged to capture a sequence of first images corresponding to a given scene, each first image being divided into elements each associated with a corresponding sector of said first image, a second sensor, of a type different from the type of the first sensor, arranged to capture a sequence of second images corresponding to said given scene, each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding sector of said second image, each pair of element and associated sector of the second image corresponding to a pair of element and associated sector of the corresponding first image, a computing circuit arranged to obtain a first sequence of images resulting from the sequence of first images and a second sequence of images resulting from the sequence of second images, a calculation circuit arranged to obtain, for each sector of each of the images of the first and second image sequences, an associated weighting, a calculation circuit arranged to obtain a first weighted sequence of images, each element of each image being equal to the corresponding element of the first image sequence weighted by the weighting associated with the sector associated with said corresponding element, and a second weighted sequence of images, each element of each image being equal to the corresponding element of the second image sequence weighted by the weighting associated with the sector associated with said corresponding element, a computing circuit arranged to obtain an improved image sequence resulting from a combination of image sequences including the first weighted sequence of images and the second weighted sequence of images, a calculation circuit arranged to obtain a motion estimation on the basis of the improved image sequence obtained, a calculation circuit arranged to obtain, according to the obtained motion estimation, a spatial alignment of the images of a sequence of images to be displayed resulting from sequences of images corresponding to the given scene and comprising the sequence of first images and the sequence of second images, and a calculation circuit arranged to perform a temporal denoising, according to the obtained spatial alignment, of the sequence of images to be displayed. This optronic device is particularly advantageous in that it allows the implementation of a method according to one aspect of the invention.
[0004] Other aspects, objects and advantages of the invention will appear on reading the description of some of its embodiments. The invention will also be better understood with the aid of the drawings, in which: FIG. 1 illustrates an example of an optronic device according to a possible embodiment of the invention; FIG. 2 illustrates various steps implemented by the optronic device of FIG. 1.
[0005] According to a first embodiment, a temporal denoising method of an image sequence comprises a capture, using a first sensor, of a sequence of first images A(x, y, t) corresponding to a given scene. The scene corresponds to the field of vision of the first sensor. The parameters x, y and t respectively denote the spatial index of the columns, the spatial index of the lines, and the temporal index of the image number in the sequence. According to one possible implementation, the first sensor is an optronic sensor, operating in the visible range of the electromagnetic spectrum, or in the infrared or the ultraviolet. The sequence of first images may include, for example, 1000 images taken over ten seconds at fixed intervals of 10 ms. Other values are obviously possible. According to the first embodiment, each first image (of the sequence of first images A(x, y, t)) is divided into elements each associated with a corresponding sector of said first image. According to one possible implementation, each element is a pixel of the image, that is to say the smallest part of the image that the sensor is able to detect. The term "pixel" results from the contraction of the words "picture" and "element". According to another possible implementation, each element is a group of pixels (for example a group of three pixels respectively corresponding to the colors red, green and blue and together representing a color). According to one implementation, the pairwise intersection of the elements is empty (for all the elements of the image) and the union of the elements corresponds to the complete image. Each image of the sequence can thus be divided into a million pixels (corresponding to a square of one thousand by one thousand pixels) each representing an element, it being understood that other image resolutions are of course possible. Thus, in the expression A(x, y, t) of this example, x denotes the abscissa of a pixel, which can vary between 0 and 999, y denotes the ordinate of a pixel, which can vary between 0 and 999, and t denotes the temporal index of the image in the sequence, which can vary between 0 and 999.
[0006] According to one possible implementation, each element is a pixel of the image, and each associated sector is a rectangle of the image centered on the pixel in question. For example the sector can be a rectangle of 2*n+1 by 2*p+1 pixels (n and p being positive integers), 2*n+1 denoting the width of the sector and 2*p+1 its height, the sector being centered on the pixel considered. For example, n = p = 5 means that the sector is a square of 11 by 11 pixels. Thus, for a pixel of coordinates (N, P), the associated sector comprises all the pixels whose abscissa is between N-n and N+n and whose ordinate is between P-p and P+p. For pixels located less than n pixels from a side edge (left or right) of the image, or less than p pixels from an upper or lower edge of the image (i.e. within a peripheral zone of the image), the sector can be truncated accordingly. The sector can thus, in the indicated peripheral zone of the image, go down to a dimension of n+1 by p+1 pixels (case of the four corners of the image). Alternatively, the calculations that follow can be done by ignoring this (usually negligible) peripheral zone of the image (to avoid the sectors there carrying somewhat biased information that one may prefer not to take into account), that is to say by working on images whose edges are cropped (images deprived of this peripheral zone of p pixels at the top, p pixels at the bottom, n pixels on the left and n pixels on the right). Other types of sectors are possible, for example the sector corresponding to an element can be a disk centered on this element. This disk is possibly truncated (when the element is distant from an edge of the image by a distance less than the diameter of the disk), unless, alternatively, the peripheral zone of the image (for which the disk would otherwise have to be truncated) is excluded as was envisioned in the previous example. Other shapes (than the disc or the rectangle) are also possible. According to a simplified implementation (which may require fewer resources, including memory and processor, but which is a priori less efficient), the image is divided into sectors that are pairwise disjoint (their intersection being the empty set) and whose union represents the entire image, each element belonging to one (and only one) of the sectors thus defined. For example, an image of 1000*1000 pixels can be divided into 10000 sectors of 10*10 pixels. Each element (for example a pixel) is associated with the sector to which it belongs. This amounts to replacing a "sliding sector" with a fixed sector, which is therefore less representative for the elements located near its periphery than for those located near its center. This simplified implementation is likely to introduce artifacts but can nevertheless improve the situation. According to the first embodiment, the method comprises capturing, with the aid of a second sensor of a type different from the type of the first sensor, a sequence of second images B(x, y, t) corresponding to said given scene. According to one possible implementation, the field of vision of the second sensor is strictly identical to that of the first sensor. The two sensors can for example operate via a common optics, and a prism (or similar mechanism) can separate the image towards each sensor (other implementations are possible). According to one possible implementation, the two aforementioned sensors (which can be rigidly secured to each other) are each associated with a distinct optics.
Especially in this case, the field of vision of the two sensors may be slightly different. The field of view of the second sensor may in particular be slightly translated relative to that of the first sensor, and be affected by a parallax. However, known techniques can correct these phenomena and define the common part of the respective fields of view of the two sensors (this common part can then define the scene). For example, with regard to parallax, it is possible to apply digital processing to the captured data, or alternatively (or possibly in addition) to "squint" the two sensors differently according to the distance of an observed object (a mechanical solution with at least one movable sensor whose movement is controlled by a servomotor). According to one possible implementation, any non-common parts of the images of the sequences of first and second images (that is to say parts of the scene that would be seen by one sensor but not the other) are ignored when determining the improved image I(x, y, t) which will be described below. According to another implementation, the improved image I(x, y, t) determined below having ignored the non-common parts is subsequently completed by the non-common parts, to provide additional information potentially relevant to the motion estimation. According to one possible implementation, the second sensor is, like the first sensor, an optronic sensor, operating in the visible range of the electromagnetic spectrum, or in the infrared or the ultraviolet. However, the two sensors are of different types. The sensors may be of different types in that they operate in different spectral bands. For example one sensor can operate in the visible spectrum and another in the 3-5 µm infrared band or the 8-12 µm band. In general, the different spectral bands can be in particular: the ultraviolet band, the visible band, the band very close to the infrared (at the edge of the visible), "band 1" (near infrared, from 1 to 3 µm, this band being used for the detection of bodies brought to high temperatures, for example from 1000 to 2000 K), "band 2" (medium infrared, making it possible to detect bodies brought to an average temperature of 600 K, for example airplanes), or "band 3" (far infrared, to detect bodies at around 300 K), which (as also band 2) can be used for night vision. In one possible implementation, two sensors are not considered to be of different types simply because they operate in different spectral sub-bands within one of the spectral bands cited in the two preceding sentences. For example, sensors each operating in a distinct sub-band of a given band (for example the aforementioned band 2) are considered to be of the same type, to the extent (but only to the extent) that the vision properties in these two sub-bands are relatively homogeneous (that is to say, if one sensor sees well, then the other also sees relatively well, and if one sees badly, then the other also sees relatively badly). It is considered that a sensor "sees well" an object of the scene if it captures this object with a good signal-to-noise ratio (and conversely it sees badly if the signal-to-noise ratio is bad, and therefore if the object is difficult to distinguish from the noise).
By way of another example, according to one possible implementation in which the color is not truly discriminating (taking into account the use envisaged for the method), and all other characteristics of the sensors being equal, a sensor capturing red light and another capturing blue (or green) light both operate in the band of visible light and are considered to be of the same type. It is useful to obtain from at least two sensors information that may be complementary, or at least information likely to be such that even if the information from one sensor is not exploitable, that from the other sensors is likely to be exploitable (of course, it is not excluded that in certain circumstances, two sensors of different types both see the scene perfectly, or that neither of them sees it correctly). It is also advantageous to take advantage of the fact that in some cases, some parts of an image from one sensor are usable and others not, whereas for a corresponding image from another sensor, the exploitable and non-exploitable parts are different. The process can thus make the best of each sensor, for the same scene at the same time. The use of sensors operating in different spectral bands is advantageous because the spectral bands can be unequally affected by different meteorological parameters (humidity, temperature, precipitation of snow or rain, fog, hail, etc.). They can also be affected unequally by other parameters such as the time of day, the brightness (possibly artificial), the optical properties of the objects of the scene such as their reflection properties (specular, diffuse or other), their light or dark nature, etc. All kinds of parameters are therefore likely to affect the image capture differently depending on the spectral band. In addition, the image capture can be based, where appropriate, on different physical phenomena according to the spectral band considered. For example, a sensor in visible frequencies operates essentially in reflective mode (light reflected by the objects of the scene), while an infrared sensor can operate more on the radiation emitted by the objects of the scene themselves. At a low light level (for example during sunset, when there is little luminous flux), a color sensor may have difficulty producing relevant information, whereas a sensor in the near infrared can observe certain details without difficulty. On the other hand, when it rains, infrared sensors can be very negatively affected (by the rain) whereas a sensor in a visible frequency can perfectly distinguish the objects of a scene. Infrared sensors operate at night, while a sensor in the visible spectrum does not usually see a nocturnal scene, especially if it is a low-sensitivity sensor intended rather for illumination conditions of the type usually encountered by day. But according to one possible implementation, the two sensors can also be of different types while operating both in the same spectral band (or even in the same spectral sub-bands of the same spectral band). For example, the two sensors may have very different sensitivities. According to one possible implementation, the first sensor is thus a very sensitive black-and-white sensor (adapted for low-light image captures), presenting, for example, a sensitivity of 12800 ISO (while maintaining a low noise level), and the second sensor is a much less sensitive color sensor (for example 100 ISO).
More generally, two sensors of which the ISO sensitivity of one is at least ten times greater than that of the other (at equivalent noise level) can be considered to be of different types. According to the first embodiment, each second image (of the sequence of second images B(x, y, t)) is divided into elements (for example pixels or groups of pixels) each associated with a corresponding sector of said second image. The implementations concerning the definition of the elements and sectors of the images of the sequence of first images A(x, y, t) are directly transferable to the elements and sectors of the images of the sequence of second images B(x, y, t) and are therefore not redescribed.
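As an illustration of the element and sector divisions described above, the following sketch (illustrative names, and the sector sizes used in the examples) extracts the sliding rectangular sector of (2n+1) by (2p+1) pixels centered on a pixel, truncated at the image edges, as well as the simplified variant dividing the image into fixed, disjoint tiles.

```python
import numpy as np

def sliding_sector(img, x, y, n=5, p=5):
    """Rectangular sector of (2n+1) x (2p+1) pixels centred on pixel (x, y),
    truncated when the pixel lies within n (resp. p) pixels of an image edge."""
    return img[max(0, y - p):y + p + 1, max(0, x - n):x + n + 1]

def fixed_sectors(img, w=10, h=10):
    """Simplified variant: divide the image into disjoint w x h tiles whose
    union is the whole image; each pixel belongs to exactly one tile."""
    H, W = img.shape
    return [img[j:j + h, i:i + w] for j in range(0, H, h) for i in range(0, W, w)]

# e.g. a 1000 x 1000 image divided into 10 * 10 tiles gives 10000 sectors
img = np.zeros((1000, 1000))
assert len(fixed_sectors(img)) == 10000
assert sliding_sector(img, 0, 0).shape == (6, 6)        # truncated corner: (p+1) x (n+1)
assert sliding_sector(img, 500, 500).shape == (11, 11)  # full 11 x 11 sector
```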
[0007] Each second image (of the sequence of second images B(x, y, t)) corresponds to a first image (of the sequence of first images A(x, y, t)) respectively. According to one possible implementation, the two sequences contain images of identical resolution sampled at the same frequency. Thus, for each triplet of parameters {xx, yy, tt} representing an abscissa, an ordinate and a sampling instant, the element A(xx, yy, tt) of the sequence of first images corresponds (subject, where appropriate, to the element A(xx, yy, tt) not belonging to a zone that is not common to the two image sequences, such as a peripheral zone of the images in the case, for example, of distinct optics, it being understood nevertheless that solutions making it possible to obtain sequences lifting this reservation were envisaged above) to the element B(xx, yy, tt) of the sequence of second images (it represents, in a bijective way, the same information, measured however by a different type of sensor). Independently of the respective resolutions and sampling frequencies of the first and second image sequences, each pair of element and associated sector {elt1, sect1} of the second image corresponds to a respective pair of element and associated sector {elt2, sect2} of a corresponding first image. This correspondence is bijective in the case of identical resolution and sampling frequency (by sensor construction or by subsequent processing), at least if the elements and sectors are defined identically for the two sequences. In the opposite case, the correspondence is not bijective and according to one possible implementation the method defines this correspondence accordingly (without necessarily having to modify one of the two sequences to adapt it to the other sequence, nor a fortiori to modify the two sequences).
[0008] For example, without having to carry out processing (degradation of the frequency and/or resolution of one sequence to align it with the other or, on the contrary, interpolation of the other sequence), the method can associate several elements of a (high-resolution) image of one of the two sequences with a single element of a (lower-resolution) image of the other sequence (spatial dimension), and likewise it can associate several images of one of the two sequences with the same image of the other sequence (temporal dimension). In the case where the respective frequencies of the two sequences are different but are not multiples of each other, for example if during the same time interval there are k images of one sequence for p images of the other sequence, with k and p integers, k being less than p and p not being a multiple of k, a method according to one possible implementation associates with the image number pp of the fast sequence (with pp varying between 0 and the number of images of the fast sequence minus 1) the image (pp*k)/p (the sign "/" denoting integer division) of the sequence whose sampling frequency is lower. Similarly, in the case where the respective horizontal (respectively vertical) resolutions of the two sequences are different but are not multiples of each other, for example if there are k horizontal (respectively vertical) pixels in an image of one sequence for p horizontal (respectively vertical) pixels in an image of the other sequence, with k and p integers, k being less than p and p not being a multiple of k, a method according to one possible implementation associates with a horizontal (respectively vertical) pixel pp (with pp between 0 and the number of pixels of a row, respectively of a column, minus 1) of an image of the sequence whose images have the greatest horizontal (respectively vertical) resolution the horizontal (respectively vertical) pixel (pp*k)/p (the sign "/" denoting integer division) of the image of the other sequence (of lower horizontal, respectively vertical, resolution). According to one possible implementation of the first embodiment, the method comprises a capture, with the aid of a third sensor of a type different from the types of the first and second sensors (or even using an arbitrary number of additional sensors of types pairwise different and all different from the types of the first and second sensors), of a sequence of third images C(x, y, t) corresponding to said given scene (or even of an arbitrary number of additional image sequences corresponding to said given scene). The implementations described for the second sensor are transposed to the possible additional sensors.
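The index correspondence described above can be illustrated by the following small sketch, which applies the integer division (pp*k)/p to map an index of the fast (or high-resolution) sequence onto the slow (or low-resolution) sequence; pp, k and p follow the notation of the paragraph above, and the example values are arbitrary.

```python
def slow_index(pp, k, p):
    """Index in the slow (or low-resolution) sequence corresponding to index pp
    in the fast (or high-resolution) sequence, when there are k images (or
    pixels) of one for p of the other, with k < p and p not a multiple of k."""
    return (pp * k) // p   # "//" is integer division, written "/" in the text

# e.g. k = 3 images of the slow sequence for p = 7 images of the fast sequence:
# fast images 0..6 map to slow images 0, 0, 0, 1, 1, 2, 2
print([slow_index(pp, 3, 7) for pp in range(7)])
```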
[0009] According to the first embodiment, the method comprises obtaining, by a computing circuit, a first sequence of images a(x, y, t) resulting from the sequence of first images A(x, y, t). The method also comprises obtaining, by the computing circuit, a second sequence of images b(x, y, t) resulting from the sequence of second images B(x, y, t). The computing circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type. The computing circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory storing a computer program arranged to implement this obtaining of the first and second image sequences when it is executed by the processor.
[0010] According to one possible implementation, the first sequence of images is simply equal to the sequence of first images. Similarly, according to one possible implementation, the second sequence of images is equal to the sequence of second images. In this case, the obtaining of the first and second image sequences can be transparent, that is to say require no operation. It can also be carried out by a simple passing of a pointer, a pointer to the first (respectively second) sequence of images being defined as a copy of the pointer to the sequence of first (respectively second) images. Obtaining the first and second image sequences may finally be performed by copying the sequence of first (respectively second) images to another memory area to accommodate the first (respectively second) sequence of images. According to another possible implementation, the calculation circuit applies one (or more) transformation(s) to the sequence of first (respectively second) images in order to obtain the first (respectively second) sequence of images. For example, according to one possible implementation, the first sensor samples the sequence of first images at a different frequency (for example faster) from that at which the second sensor samples the sequence of second images (for example 100 Hz for the first sensor and 50 Hz for the second sensor, but of course the reasoning is the same in the situation where it is the second sensor that is faster than the first). One possible implementation then consists in extracting, as a first sequence of images, a sub-sampling of the sequence of first images taken at the highest frequency (for example, in the above-mentioned case of sampling at 50 Hz and 100 Hz, by considering only one image out of two of the images taken at 100 Hz). Of course, more elaborate algorithms are possible. For example, according to one possible implementation, an electronic circuit calculates an average of two consecutive images at 100 Hz (of the sequence of first images) and defines it as the corresponding image (at 50 Hz) of the first image sequence, which it associates with a respective image (also at 50 Hz) of the second sequence of images. Conversely, according to a possible implementation of the method, a calculation circuit interpolates the images of the sequence with the lowest sampling frequency (the sequence of second images in the example above) to align it with the other (higher-frequency) image sequence, and thus constructs a second sequence of images at double the frequency with respect to the sequence of second images (this is just one example). Similarly, according to an alternative implementation, the resolution of the first images is different from that of the second images, and a circuit performs an interpolation of the images of lowest resolution, or a sub-sampling (or a local averaging) of the images of highest resolution. An extremely simplified interpolation consists in duplicating the information: for example, if each pixel of an image of one of the two sequences (first and second images) corresponds to four pixels of the corresponding image of the other sequence (due to a quadruple resolution of this other sequence), an interpolation can consist in replacing each pixel of the low-resolution image with a block of four pixels identical to the initial pixel (of the low-resolution image).
Thus, if the sequence of second images contains images of resolution 2000*2000 pixels while the sequence of first images contains images whose resolution is only 1000*1000 pixels, the second sequence of images may be equal to the sequence of second images, and the first sequence of images may be a sequence whose images, of resolution 2000*2000 pixels, have as the value of the pixels of coordinates (2x, 2y), (2x+1, 2y), (2x, 2y+1) and (2x+1, 2y+1) the value of the pixel of coordinates (x, y) in the corresponding image of the sequence of first images. According to one possible implementation, the resolution and the sampling frequency of the sequences of first and second images both differ, and the method combines solutions of the two preceding paragraphs to generate a first or a second sequence of images (the other sequence possibly being passed by pointer, as explained above, if it is not modified). Of course, it is possible that both the first and second image sequences are generated without either of them simply being copied or passed by reference (for example, in the case of a higher sampling rate of one of the two sequences and a higher resolution of the other sequence, if it is desired to align either on the highest performance or on the lowest). It follows from the previous three paragraphs that it is always possible for the first and second image sequences to contain images of identical resolution sampled at the same frequency (either because of the operating mode of the sensors themselves or because of subsequent processing). Each pair of element and associated sector of the second image then corresponds to a respective pair of element and associated sector of the corresponding first image (provided that the elements and sectors are defined in the same manner for the first and the second sequence of images). Other problems of harmonization of images can arise. They are not a priori likely to call into question the assertion that it is possible to obtain first and second sequences of images containing images of identical resolution, sampled at the same frequency. These problems of harmonization are resolved conventionally, for example using (in particular) geometric transformations for registering one image on the other. This can be a global translation, a correction of focal distortion, etc. The obtaining of the first and second image sequences can also (or alternatively) include other types of processing (such as taking into account a local average of a sector, subtracted from the element corresponding to this sector), as will be explained below. Thus, although obtaining the first and second image sequences may aim to obtain two sequences (first and second sequences) of identical frequencies and whose images have the same resolution from two sequences (sequences of first and second images) whose frequencies and/or image resolutions are different, according to some implementations the first and second image sequences have different frequencies and/or image resolutions. Obtaining the first and second image sequences may also (or alternatively) include other types of processing such as contrast and/or brightness adjustments, or a reduction of the residual difference in field of view (parallax, etc.) between the two sensors, in order to align (by digital processing) the first sequence of images and the second sequence of images more precisely than the sequence of first images and the sequence of second images are aligned.
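As a hedged illustration of the harmonization operations just described (the 2000*2000 versus 1000*1000 duplication and the 100 Hz versus 50 Hz averaging), the following sketch uses simple NumPy operations; the function names and toy sizes are ours, and real implementations would of course use the registration and interpolation schemes discussed above.

```python
import numpy as np

def upsample_duplicate(img, fy=2, fx=2):
    """Replace each pixel of the low-resolution image by an fy x fx block of
    identical pixels (the simplified interpolation described above)."""
    return np.kron(img, np.ones((fy, fx), dtype=img.dtype))

def halve_frame_rate(seq):
    """Average consecutive pairs of images of a 100 Hz sequence to obtain the
    corresponding 50 Hz sequence (assumes an even number of frames)."""
    return 0.5 * (seq[0::2] + seq[1::2])

low = np.arange(4).reshape(2, 2)     # toy low-resolution image
high = upsample_duplicate(low)       # each pixel becomes a 2 x 2 block
assert (high[0:2, 0:2] == low[0, 0]).all()

fast = np.random.rand(10, 8, 8)      # toy 100 Hz sequence of 10 frames
slow = halve_frame_rate(fast)        # 5 frames at 50 Hz
assert slow.shape == (5, 8, 8)
```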
In particular, according to one possible implementation, the obtaining of the first sequence of images depends not only on the sequence of first images but also on the sequence of second images, and/or the obtaining of the second sequence of images depends not only on the sequence of second images but also on the sequence of first images. Of course, according to one possible implementation, the obtaining of the first sequence of images may depend only on the sequence of first images and not on the sequence of second images, and/or the obtaining of the second sequence of images may depend only on the sequence of second images and not on the sequence of first images. Where appropriate, the method comprises obtaining, by a calculation circuit, a third sequence of images resulting from the sequence of third images C(x, y, t), and possibly additional sequences of images (fourth, fifth, etc.) resulting from the sequences of additional images, in a manner similar to those described above. According to one possible implementation of the first embodiment, the method comprises obtaining, by a calculation circuit, for each sector of each of the images of the first and second image sequences (and where appropriate, also for each sector of each of the images of the third image sequence and of the additional image sequences), a spatial variance of said sector.
[0011] The computing circuit can pool resources (for example the processor) with the aforementioned calculation circuit, or be a completely separate calculation circuit. The calculation circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type. The calculation circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this obtaining of the spatial variance when it is executed by the processor. The spatial variance is a quantity representative of the magnitude of the variations of the values of the elements (for example pixels) in the sector concerned. The spatial variance of the sector can be calculated from a mathematical variance of the pixels composing the sector. The pixels taking the values a(x, y, t) (x and y varying in intervals defining a given sector of the image of instant t of the first sequence of images), the variance is then VA(x, y, t) = E((a(x, y, t) - E(a(x, y, t)))^2), where E denotes the mathematical expectation, calculated over all the x and y values of the given sector (at time t). Similarly, for the second sequence of images, VB(x, y, t) = E((b(x, y, t) - E(b(x, y, t)))^2), and the formula is the same for other possible image sequences. According to one possible implementation, the spatial variance is then defined as a function of the noise of the sensors in the following manner. We denote by sa the standard deviation of the noise (assumed to be white Gaussian) of the first sensor, and by sb the standard deviation of the noise (assumed to be white Gaussian) of the second sensor (one could similarly define notations for the possible third sensor and for the additional sensors). These noises are characteristics of the sensors. Generally, these characteristics are stable (what varies is the signal-to-noise ratio, more than the noise itself). We define the spatial variance VAr(x, y, t) = 1 + VA(x, y, t)/(sa^2) for the first image sequence and VBr(x, y, t) = 1 + VB(x, y, t)/(sb^2) for the second sequence of images (likewise for the other possible sequences). For example, if a sector n of the second sequence of images is between x = xn and x = xn + dxn and between y = yn and y = yn + dyn, then the spatial variance of this sector n is equal to 1 + E((b(x, y, t) - E(b(x, y, t)))^2)/(sb^2), the expectations being calculated for x varying from xn to xn + dxn and for y varying from yn to yn + dyn. According to other implementations, the spatial variance is calculated differently. Indeed, any indicator likely to reflect the fact that the elements vary a lot or, on the contrary, vary little may be useful. An object of such an indicator (the spatial variance) is to highlight the "good quality" sectors (with strong spatial variations) in order to give them more weight than the corresponding sectors of the other image sequences, if these corresponding sectors (that is, corresponding to the same part of the scene) have a lower quality. A poorer quality sector generally has fewer spatial variations, suggesting that the sensor involved was not able to discern the sector details as sharply as the other sensor (which identifies greater variations in a corresponding sector). According to the first embodiment, the method comprises obtaining, by a calculation circuit, for each sector of each of the images of the first and second image sequences (and possibly of the third or even additional sequences), an associated weighting.
This weighting is associated with the sector in question and is intended to reflect the signal-to-noise level in this sector. According to one possible implementation, this weighting is a function of a spatial variance calculated for this sector (for example according to the implementation envisaged above). The weighting associated with the sector may also be a function of the spatial variance (respectively spatial variances) of the corresponding sector (respectively sectors) of the image of the other sequence (or sequences). In one possible implementation, the sector weighting is an increasing function of the spatial variance of the sector.
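The following sketch computes, for every pixel, the noise-normalized spatial variance VAr = 1 + VA/sa^2 of its sliding sector, as defined above. The vectorization with an integral image (cumulative sums) is an efficiency choice of ours rather than something the description prescribes, and truncated edge sectors are handled as in the earlier paragraphs.

```python
import numpy as np

def box_sum(img, n, p):
    """Sum of img over the (2n+1) x (2p+1) window centred on each pixel,
    computed with an integral image; edge windows are truncated."""
    H, W = img.shape
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    y0 = np.clip(np.arange(H) - p, 0, H); y1 = np.clip(np.arange(H) + p + 1, 0, H)
    x0 = np.clip(np.arange(W) - n, 0, W); x1 = np.clip(np.arange(W) + n + 1, 0, W)
    return ii[y1][:, x1] - ii[y0][:, x1] - ii[y1][:, x0] + ii[y0][:, x0]

def normalized_variance(img, sigma, n=5, p=5):
    """VAr = 1 + VA/sigma^2, VA being the spatial variance of the sliding
    sector associated with each pixel, sigma the sensor noise std deviation."""
    img = img.astype(float)
    count = box_sum(np.ones_like(img), n, p)
    mean = box_sum(img, n, p) / count
    var = box_sum(img ** 2, n, p) / count - mean ** 2
    var = np.maximum(var, 0.0)   # guard against tiny negative values from floating point
    return 1.0 + var / sigma ** 2
```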
[0012] Assuming that there are only two image sequences, a possible weighting is, for a sector of an image of the first image sequence, Pond(a(x, y, t)) = VAr(x, y, t) / (VAr(x, y, t) + VBr(x, y, t)), and for the corresponding sector of the corresponding image of the second image sequence, Pond(b(x, y, t)) = VBr(x, y, t) / (VAr(x, y, t) + VBr(x, y, t)). Assuming that, with n sensors of different types, the method uses n sequences of images from the n sensors (it being understood that it is not mandatory to use the image sequences from all available sensors), the method uses (according to one possible implementation) for the calculation of the weights a similar formula in which the denominator comprises the sum of all the spatial variances for the sector under consideration (i.e. the spatial variances calculated for the sector as captured by each sensor). For example, for three sensors, one can obtain for a sector of the second sequence: Pond(b(x, y, t)) = VBr(x, y, t) / (VAr(x, y, t) + VBr(x, y, t) + VCr(x, y, t)). This method of calculation has the advantage of normalizing the quality measure for each sector. More generally, any formula expressing a spatial variance of the sector considered relative to the spatial variances of all the sensors can be used. An object of this weighting is to highlight the sector whose spatial variance is the greatest, and thus which a priori allows the most effective motion detection. The computing circuit can pool resources (for example the processor) with one or more of the aforementioned calculation circuits, or be a completely separate calculation circuit. The calculation circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type. The calculation circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this obtaining of the weightings when it is executed by the processor. According to the first embodiment, the method comprises obtaining, by a calculation circuit, a first weighted sequence of images. Each element of each image (of the first weighted sequence of images) is equal to the corresponding element (i.e., of the same spatiotemporal location) of the first sequence of images, weighted by the weighting associated with the sector (of the first sequence of images) associated with said corresponding element (of the first sequence of images). For example, each element ap(x, y, t) of each image of the first weighted sequence of images is equal to the product of the corresponding element a(x, y, t) of the first image sequence by the weighting p(x, y, t) associated with the sector corresponding to said element of the first image sequence (ap(x, y, t) = a(x, y, t) * p(x, y, t)). Thus, according to one possible implementation, the first weighted sequence of images is equal to the first sequence of images in which each element (for example each pixel) of each image is replaced by this same element (understood, for example, as the digital value representing this pixel in the image, when this element is a pixel) multiplied by the weighting calculated for the sector associated with this element. The method also includes obtaining, by the computing circuit, a second weighted sequence of images.
Each element of each image (of the second weighted sequence of images) is equal to the corresponding element (i.e., of the same spatiotemporal location) of the second image sequence, weighted by the weighting associated with the sector (of the second sequence of images) associated with said corresponding element (of the second sequence of images), for example according to an implementation of the preceding paragraph transposed to the second sequence of images. Where appropriate, the method also comprises obtaining, by the computing circuit, as many weighted sequences of images as there are different sensors (it being understood that the presence of more than two sensors does not require calculating these weighted sequences for all the sensors; this is only an option). The computing circuit can pool resources (for example the processor) with one or more of the aforementioned calculation circuits, or be a completely separate calculation circuit. The calculation circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type. The calculation circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this obtaining of the weighted sequences when it is executed by the processor. According to the first embodiment, the method comprises obtaining, by a computing circuit, an improved image sequence I(x, y, t) (for example, an image sequence of improved variance) resulting from a combination of image sequences comprising the first weighted sequence of images and the second weighted sequence of images. This combination can be a linear combination, for example a simple sum of the weighted sequences of images, i.e. a sequence of images in which each element of each image is equal to the sum of the corresponding elements of the corresponding images of the weighted sequences to be combined. According to one possible implementation, the method combines more than two weighted sequences (it can combine as many weighted sequences as there are sensors of pairwise different types, provided that a weighted sequence has been calculated for each of these sensors). But according to one possible implementation, the method combines only the first and second weighted sequences (regardless of the number of sensors). The computing circuit can pool resources (for example the processor) with one or more of the aforementioned calculation circuits, or be a completely separate calculation circuit. The calculation circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type. The calculation circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this obtaining of the improved image sequence when it is executed by the processor.
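A sketch of the weighting and of the linear combination into the improved image for an arbitrary number of sensors follows; it takes as input the normalized spatial variances (VAr, VBr, VCr, ...) already computed for each sensor, and the function name is illustrative.

```python
import numpy as np

def improved_image(images, norm_variances):
    """Combine corresponding images from n sensors into one improved image.
    `images` and `norm_variances` are lists of (H, W) arrays; norm_variances[i]
    holds, for each pixel, the normalized spatial variance (e.g. VAr, VBr, VCr)
    of the sector associated with that pixel in images[i]."""
    total = np.sum(norm_variances, axis=0)
    out = np.zeros_like(images[0], dtype=float)
    for img, v in zip(images, norm_variances):
        weight = v / total            # e.g. Pond(b) = VBr / (VAr + VBr + VCr)
        out += weight * img           # weighted element, then summed
    return out

# toy example with three sensors
imgs = [np.random.rand(4, 4) for _ in range(3)]
vars_ = [1.0 + np.random.rand(4, 4) for _ in range(3)]
improved = improved_image(imgs, vars_)
```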
[0013] According to the first embodiment, the method comprises obtaining, by a calculation circuit, a motion estimation on the basis of the improved image sequence I(x, y, t) obtained. This motion estimation can be for example a global motion estimation, a dense motion estimation, or a combination of global motion estimation and dense motion estimation, as described in French patent application FR1050014 by the same applicant. According to one possible implementation, this motion estimation is assisted by information external to that obtained by the sensors, such as GPS measurements and/or gyroscopic measurements and/or accelerometer measurements. The computing circuit can pool resources (for example the processor) with one or more of the aforementioned calculation circuits, or be a completely separate calculation circuit. The calculation circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type. The computing circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this obtaining of the motion estimation when it is executed by the processor. According to the first embodiment, the method comprises obtaining, by a calculation circuit, as a function of the motion estimation obtained, a spatial alignment of the images of a sequence of images to be displayed resulting from sequences of images corresponding to the given scene and comprising the sequence of first images A(x, y, t) and the sequence of second images B(x, y, t). The sequences of images corresponding to the given scene may comprise sequences of images other than the sequence of first images A(x, y, t) and the sequence of second images B(x, y, t), and may include the first sequence of images a(x, y, t), the second sequence of images b(x, y, t), or a sequence of images resulting from a processing of one of these four sequences of images. They may also be sequences of images from sensors other than the first and second sensors (for example a third sensor, or another additional sensor), possibly processed. The characteristics of a sequence of images to be displayed are not necessarily optimized in the same way as another type of image sequence. For example, the sequence of images of improved variance can be determined so as to maximize the spatial variances of the sectors of a scene captured by several sensors and to facilitate motion detection by an appropriate algorithm, but this sequence of images is not necessarily easy to understand for a human user to whom it would be displayed. It should be noted that the sequence of images to be displayed may be subjected, before display but after denoising, to other processing (e.g. the overlay of additional information such as the date, the time, GPS coordinates, the outside temperature, or augmented-reality contextual information, etc.). The spatial alignment of the images of a sequence of images to be displayed may consist in calculating, for each image of the sequence, a vector indicating the overall movement of the image relative to the previous image. A vector (X, Y) can thus indicate the horizontal offset X and the vertical offset Y to be applied to an image so that the scene that this image represents coincides with the same scene as captured in a reference image such as the previous image.
By applying the translation (X, Y) to this image, it is then possible to superimpose the two images (at least their common parts). For example, for two images of 1000*1000 pixels, the second of which should be considered to be shifted 23 pixels to the left and 17 pixels upwards to reflect the movement, the process can then superimpose the pixels (xi, yi) of the first image with the pixels (xi+23, yi+17) of the second image, xi varying from 0 to 976 and yi varying from 0 to 982 (the intersection zone measuring 977*983 pixels). According to one possible implementation, the alignment is calculated in a sliding manner on more than two images (for example on ten images, but of course other values are possible), or more precisely on the intersection zone of these ten (or n, for any suitable integer n) images to be aligned. In this case, according to one possible implementation, each image of the sequence is associated with nine (or in the general case n-1) possible alignments (depending on the position that the image is likely to occupy in the sequence of images to be aligned). According to an alternative implementation, only the alignment of an image with respect to an adjacent image (for example the previous image) is recorded, the method recalculating (where appropriate) the alignment of an image with respect to an image that is not immediately adjacent to it by composition of the relative alignments of all the intermediate images. A simple translation in the plane of the image can be sufficient, for example for sensors on board a vehicle that moves in translation on a substantially horizontal plane (for example a road, a surface of water, or the ground) and which observe a scene relatively far from the sensors. But a simple translation does not necessarily reflect the measured movement in all circumstances and therefore does not necessarily allow a proper alignment (for example in the case of a rotation about an axis perpendicular to the plane of the image). According to other implementations, the calculation circuit therefore determines a rotation in the plane of the image (for example by determining a point constituting a center of rotation and a rotation angle). As before, each image can be associated with a record of the characteristics (center point and angle) of a rotation to be applied to this image to align it with a reference image such as the previous image, or with as many records as there are images with which one intends to align the image considered. It is necessary to distinguish the movement related to the displacement of the optronic device over time from the "movement" of harmonization of one sensor with the other. The spatial alignment may thus include a static (i.e., non-time-varying) harmonization of one sensor with the other. The static harmonization comprises, for example, a rotation that makes it possible to harmonize one sensor with the other (the rotation depends on the sensors and their relative positioning and does not change over time if the relative positioning of the sensors is fixed). Static harmonization can likewise include a (constant) distortion correction or a (fixed) translation to be applied to "squint" correctly at a given distance. According to one possible implementation, the static harmonization is determined once and for all in the factory. The two previous embodiments can be combined, and thus the alignment to be operated between two consecutive images can be represented by a composition of a translation and a rotation, both in the plane of the image.
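The overlap arithmetic of the 1000*1000 example above (second image shifted 23 pixels to the left and 17 pixels upwards) can be illustrated by the following sketch, which returns views of the common zone of two equally sized images; the function name and sign conventions are ours.

```python
import numpy as np

def overlap_views(img1, img2, dx, dy):
    """Views of the common zone of two equally sized images, img2 being
    considered shifted by dx pixels to the left and dy pixels upwards, so that
    pixel (xi, yi) of img1 corresponds to pixel (xi + dx, yi + dy) of img2."""
    H, W = img1.shape
    v1 = img1[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
    v2 = img2[max(0, dy):H - max(0, -dy), max(0, dx):W - max(0, -dx)]
    return v1, v2

a = np.zeros((1000, 1000)); b = np.zeros((1000, 1000))
v1, v2 = overlap_views(a, b, dx=23, dy=17)
assert v1.shape == v2.shape == (983, 977)   # the intersection zone of 977 * 983 pixels
```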
[0014] The aforementioned two-dimensional transformations may be an excellent approximation but are not always sufficient to perform an almost perfect alignment between two or more images. According to other embodiments the circuit refines the alignment and can thus also take into account a movement other than a movement in the plane of the image. With such a movement, the considered image of the sequence of images to be displayed is then aligned not simply by a displacement of the image (the image otherwise remaining unchanged), but by a deformation of the image reflecting the measured motion. By "deformation" is meant that the "deformed" image is no longer superimposable on the original image, regardless of the rotations and/or translations applied in the plane of the image to attempt to operate this superposition. Such a movement may be, for example, a translation corresponding to a movement of the sensors towards or away from the scene along an axis perpendicular to the plane of the image. The alignment can then include the application of a zoom factor (in or out), each image being associated with a zoom factor to be applied with respect to at least one reference image (for example the previous image). Finally, such a movement may also include rotations about axes not perpendicular to the plane of the image (for example about a horizontal axis of the image and about a vertical axis of the image). Such rotations can be simulated by image processing. Such a rotation (about an axis not perpendicular to the plane of the image) may result from a rotation of the sensors with respect to the scene, which rotation may be consecutive to a rotation of their support (for example a vehicle traveling on very rough terrain or a ship on a rough sea); such a rotation can be measured for example by gyroscopes rigidly attached to the sensors in order to facilitate its estimation. The alignment then comprises (in addition, possibly, to the other alignment parameters already mentioned) parameters of possible three-dimensional rotations (involving deformation of the image). According to one possible implementation, the part of the scene that a human user of the method wishes to observe corresponds to the center of the images (or, expressed differently, the human user centers the sensors on the part of the scene which interests him), and this part of the scene is far from the sensors (i.e. the speed of movement of the sensors with respect to the scene is not likely to significantly change the distance of the sensors to the scene on a short time scale, for example of the order of ten consecutive images). Displacements occurring elsewhere than in the plane of the image can then be assimilated to translations in the plane of the image (as regards the two three-dimensional rotations envisaged) or neglected (as regards the approach or moving away of the scene along a perpendicular axis), at least as regards the center of the images (which according to this implementation represents the most relevant part). The computing circuit can pool resources (for example the processor) with one or more of the aforementioned calculation circuits, or be a completely separate calculation circuit. The calculation circuit may be in particular an electronic circuit of the FPGA, ASIC, PAL, etc. type.
The calculation circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this obtaining of the spatial alignment (of the images of a sequence of images to be displayed) when it is executed by the processor.
[0015] According to the first embodiment, the method comprises a temporal denoising, by a calculation circuit, as a function of the determined spatial alignment, of the sequence of images to be displayed. For example, if the alignment consists of a translation (X, Y) to be applied to a first image with respect to another, adjacent image in the sequence of images to be displayed, then it is possible to superimpose the two images on the intersection of these two images by a simple manipulation of indexes (without having to perform image processing) and to perform a denoising (e.g. an average) generating much less blur (ideally, generating no blur at all if the scene is stationary, the movement being for example derived from the movement of the sensor(s)). Thus, in the aforementioned example in which two images of 1000*1000 pixels are such that the second should be considered shifted by 23 pixels to the left and 17 pixels upwards to reflect the movement, the method can compute the average of the pixels (xi, yi) of the first image with the pixels (xi+23, yi+17) of the second image, xi varying from 0 to 976 and yi varying from 0 to 982 (the intersection zone measuring 977*983 pixels). This average makes it possible to reduce the noise on these pixels. This is obviously a simplistic example for purposes of illustration, the average being rather calculated on a larger number of elements (for example pixels) aligned with each other, and the simple average (for example the arithmetic mean) being replaceable by a more elaborate denoising algorithm. Possible temporal denoising techniques include linear low-pass filtering techniques (temporal averaging, linear filters, etc.), robust temporal averaging (for example using M-estimators), and also nonlinear techniques such as the temporal median or parsimonious methods (filtering of wavelet coefficients, etc.). The calculation circuit can pool resources (for example the processor) with one or more of the aforementioned calculation circuits, or be a completely separate calculation circuit. The calculation circuit may in particular be an electronic circuit of the FPGA, ASIC, PAL type, etc. The calculation circuit may also be an assembly comprising a processor and a memory (for example RAM, ROM, EEPROM, Flash, optical memory, magnetic memory, etc.), the memory comprising a computer program arranged to implement this temporal denoising when it is executed by the processor.
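As an illustration only (not the claimed method itself), a minimal sketch of the two simplest techniques named above, assuming the frames have already been aligned and cropped to their common intersection zone (for example with the helper sketched earlier), could be:

```python
import numpy as np

def temporal_denoise(aligned_frames, mode="mean"):
    """Temporal regularization of spatially aligned frames of identical shape
    (e.g. the views restricted to their common intersection zone).
    Sketch of the two simplest techniques named above: the arithmetic mean
    and the temporal median along the time axis."""
    stack = np.stack(aligned_frames, axis=0).astype(np.float64)
    if mode == "median":
        return np.median(stack, axis=0)
    return stack.mean(axis=0)
```

A more elaborate algorithm (M-estimator averaging, wavelet-coefficient filtering, etc.) would replace the mean/median line while keeping the same aligned input.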
[0016] According to a second embodiment, a method of temporal denoising of an image sequence according to the first embodiment comprises a selection, by a selection circuit, as the sequence of images to be displayed which is denoised at step /i/, of one of the sequence of first images A(x, y, t) and the sequence of second images B(x, y, t). Thus, according to one possible implementation, the method detects a press on a button that allows a human user to switch between the sequence of first images (for example images taken by a sensor in the spectrum visible to humans) and the sequence of second images (for example images taken in the infrared spectrum). According to one possible implementation, the method automatically selects the sequence of images to be displayed (for example an infrared image sequence if it is dark and a sequence of color images if it is daytime).
[0017] According to one possible implementation, the method performs the selection on the basis of configuration information or selection information received, for example, via a telecommunications network. According to a third embodiment, a temporal denoising method according to the first or second embodiment comprises, for the purposes of obtaining, by a calculation circuit, a first sequence of images a(x, y, t) from the sequence of first images A(x, y, t) and of obtaining, by the calculation circuit, a second sequence of images b(x, y, t) from the sequence of second images B(x, y, t), the following steps.
[0018] This method comprises obtaining, by the calculation circuit, for each sector of each of the images of the sequence of first images A(x, y, t) and of the sequence of second images B(x, y, t), a local average of said sector. This local average is for example an arithmetic mean, such as the ratio of the sum of the values of the pixels of the sector to the number of pixels of the sector. But other means are possible (geometric mean, etc.). Obtaining the first sequence of images a(x, y, t) comprises subtracting (by the calculation circuit) from each element of each image of the sequence of first images A(x, y, t) the local average of the sector corresponding to said element, and obtaining the second sequence of images b(x, y, t) comprises subtracting (by the calculation circuit) from each element of each image of the sequence of second images B(x, y, t) the local average of the sector corresponding to said element. Thus, the variations of the values of the pixels of the sectors are amplified in relative terms (since they are centered on the origin instead of being centered on a value which can be large compared with these variations). This could completely distort the perception of the image by a human eye, but it allows a calculation circuit (which is an electronic component) to bring out the variations and to identify which of the two sensors provides, for the sector considered, the information allowing the best motion estimation. Of course, other treatments are possible in addition to the subtraction of the local average, as was previously described. After subtracting the local average, it is thus possible, for example, to sub-sample one of the two sequences of images if the resolution of the first sequence a(x, y, t) differs from that of the second sequence b(x, y, t), etc. According to a fourth embodiment, a temporal denoising method according to one of the first to third embodiments is based on a first sensor and a second sensor that are not spatially harmonized. The temporal denoising method comprises, for the purpose of obtaining, by a calculation circuit, a first sequence of images a(x, y, t) resulting from the sequence of first images A(x, y, t) and of obtaining, by the calculation circuit, a second sequence of images b(x, y, t) from the sequence of second images B(x, y, t), the following steps.
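For illustration, a possible NumPy/SciPy sketch of this local-mean subtraction is given below; the box-shaped sliding window and its 15-pixel size are assumptions made for the example (the text only requires the window to be matched to the spatial extent of the motion measurement, as discussed later):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def subtract_local_mean(image, window=15):
    """Sliding local arithmetic average over a window x window neighborhood,
    subtracted from every pixel so that each sector becomes zero-centered.
    The window size is an assumption chosen for illustration."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    return img - local_mean
```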
[0019] The method comprises obtaining, by a calculation circuit, a first sequence of images a(x, y, t) resulting from a pre-smoothing of the sequence of first images A(x, y, t) and a second sequence of images b(x, y, t) resulting from a pre-smoothing of the sequence of second images B(x, y, t). Thus, thanks to this fourth embodiment, the method operates even in the presence of an uncorrected parallax between the two sensors, provided however that the offset between the images of the two sensors is small compared with the blur introduced. More precisely, in the case of a heterogeneous bi-sensor system (two sensors of different types), in particular a heterogeneous bi-sensor system using separate optics for each sensor, the precise harmonization of the two sensors (that is to say, ensuring that for any given triplet {xo, yo, to}, the point A(xo, yo, to) and the point B(xo, yo, to) correspond to the same point in the imaged scene at the same time on both sensors) can be very complex or, in some cases, not feasible. Indeed, if the scene projected on the imagers has several depths, the spatial harmonization vector field between the two sensors is dense and must be estimated by complex techniques: heterogeneous dense motion estimation, correction using a digital terrain model, use of active imaging, etc. If the spatial harmonization is imprecise, the calculation of I(x, y, t) from A(x, y, t) and B(x, y, t) described previously is spatially inaccurate, since the points of A(x, y, t) and B(x, y, t) do not exactly coincide. On the other hand, by applying a spatial pre-smoothing to the images A(x, y, t) and B(x, y, t), and by choosing a smoothing spatial scale such that the expected maximal harmonization error is negligible compared with this scale, the method obtains an image I(x, y, t) containing reliable information at the spatial grain of the pre-smoothing.
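A minimal sketch of such a pre-smoothing is shown below; the Gaussian kernel and the sigma value are assumptions made for the example (the text only requires the smoothing scale to dominate the worst-case harmonization error):

```python
from scipy.ndimage import gaussian_filter

def pre_smooth(image, sigma=3.0):
    """Spatial pre-smoothing applied to A(x, y, t) and B(x, y, t) before fusion.
    sigma is an illustrative assumption: it should be chosen so that the
    expected maximal harmonization error is negligible at this scale."""
    return gaussian_filter(image.astype("float64"), sigma=sigma)
```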
[0020] Under these conditions, the method uses, for example, a differential motion measurement technique (on I(x, y, t)), such a technique being based on a constraint equation of the apparent motion, and precisely requiring (according to one possible implementation) an image pre-smoothing making it possible to render the variations of light intensity slow with respect to the amplitude of the movement to be measured. According to the fourth embodiment, the harmonization inaccuracies thus become completely transparent and do not impair the ability to merge the information from the sensors in order to restore the video.
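For illustration, one classical way of exploiting such an apparent-motion constraint equation is a global least-squares estimate of a translation from the image gradients (a Lucas-Kanade-style sketch; this specific estimator is an assumption made for the example, not necessarily the one used by the described method):

```python
import numpy as np

def global_translation(prev, curr):
    """Least-squares estimate of a single apparent translation (u, v) between
    two pre-smoothed images, using the apparent-motion constraint
    Ix*u + Iy*v + It = 0 at every pixel. Minimal sketch of one possible
    differential technique."""
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    iy, ix = np.gradient(prev)            # spatial gradients (rows = y, columns = x)
    it = curr - prev                      # temporal derivative between the two images
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    (u, v), *_ = np.linalg.lstsq(a, b, rcond=None)
    return u, v
```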
[0021] According to a fifth embodiment, a computer program comprises a sequence of instructions which, when executed by a processor, cause the processor to implement a temporal denoising method according to one of the first to fourth embodiments.
[0022] The computer program can be written in any appropriate programming language, such as a very low level language (of the assembler type) or the C language, for example. According to a sixth embodiment, a computer-readable non-transitory storage medium stores a computer program according to the fifth embodiment. This storage medium is for example a ROM-type memory, an EEPROM, a Flash memory, a battery-backed RAM, an optical memory or a magnetic memory.
[0023] A seventh embodiment relates to an optronic device for temporal denoising of a sequence of images. This optronic device can be, for example, a pair of binoculars, or a vision system mounted on a vehicle, for example on a tank turret.
[0024] The optronic device is arranged to implement the method according to one of the first to fourth embodiments. All the implementations described in connection with the method can be transposed to the device (and vice versa). According to one possible implementation, the optronic device comprises a storage medium according to the sixth embodiment which, according to possible implementations, is integrated in a calculation circuit (either in the form of discrete electronic components, or on a single electronic component, such as a microcontroller or a DSP). According to one possible implementation, the optronic device comprises calculation circuits implemented in a form comprising a single processor (possibly multi-core), a non-volatile memory (for example ROM, EEPROM, Flash, battery-backed RAM, optical memory or magnetic memory) storing a set of computer programs, each adapted to perform a particular task (and each corresponding to a respective calculation circuit), and a working memory (for example of the RAM type). The optronic device comprises a first sensor arranged to capture a sequence of first images A(x, y, t) corresponding to a given scene, each first image being divided into elements each associated with a corresponding sector of said first image.
[0025] The optronic device comprises a second sensor, of a type different from the type of the first sensor, arranged to capture a sequence of second images B (x, y, t) corresponding to said given scene, each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding sector of said second image, each pair of element and associated sector of the second image corresponding to a pair of element and associated sector of the corresponding first image. The optronic device comprises a calculation circuit arranged to obtain a first sequence of images a (x, y, t) resulting from the sequence of first images A (x, y, t) and a second sequence of images b (x , y, t) derived from the sequence of second images B (x, y, t).
[0026] The optronic device comprises, according to one possible implementation, a calculation circuit arranged to obtain, for each sector of each of the images of the first and second image sequences, a spatial variance of said sector.
[0027] The optronic device comprises a calculation circuit arranged to obtain, for each sector of each of the images of the first and second image sequences, an associated weighting (as the case may be, depending on the spatial variance obtained for this sector). The optronic device comprises a calculation circuit arranged to obtain a first weighted sequence of images, each element of each image of which is equal to the corresponding element of the first image sequence weighted by the weighting associated with the sector associated with said corresponding element (of the first sequence of images), and a second weighted sequence of images, each element of each image of which is equal to the corresponding element of the second image sequence weighted by the weighting associated with the sector associated with said corresponding element (of the second sequence of images). The optronic device comprises a calculation circuit arranged to obtain an improved image sequence I(x, y, t) resulting from a combination of image sequences comprising the first weighted sequence of images and the second weighted sequence of images. The optronic device comprises a calculation circuit arranged to obtain a motion estimation on the basis of the improved image sequence I(x, y, t) obtained. The optronic device comprises a calculation circuit arranged to obtain, as a function of the calculated motion estimation, a spatial alignment of the images of a sequence of images to be displayed resulting from image sequences corresponding to the given scene and comprising the sequence of first images A(x, y, t) and the sequence of second images B(x, y, t). The optronic device comprises a calculation circuit arranged to perform a temporal denoising, as a function of the determined spatial alignment, of the sequence of images to be displayed. According to an eighth embodiment, an optronic device according to the seventh embodiment comprises a selection circuit arranged to select, as the sequence of images to be displayed which must be denoised, one of the sequence of first images A(x, y, t) and the sequence of second images B(x, y, t). According to a ninth embodiment, the calculation circuit arranged to obtain the first and second image sequences of an optronic device according to the seventh or eighth embodiment is arranged to calculate, for each sector of each of the images of the sequence of first images A(x, y, t) and of the sequence of second images B(x, y, t), a local average of said sector, then to obtain the first sequence of images a(x, y, t) by subtracting from each element of each image of the sequence of first images A(x, y, t) the local average of the sector corresponding to said element, and to obtain the sequence of second images b(x, y, t) by subtracting from each element of each image of the sequence of second images B(x, y, t) the local average of the sector corresponding to said element. According to a tenth embodiment, the first sensor and the second sensor of an optronic device according to one of the seventh to ninth embodiments are not spatially harmonized. The calculation circuit arranged to obtain the first and second image sequences is arranged to obtain a first sequence of images a(x, y, t) resulting from a pre-smoothing of the sequence of first images A(x, y, t) and a second sequence of images b(x, y, t) resulting from a pre-smoothing of the sequence of second images B(x, y, t).
FIG. 1 illustrates an exemplary optronic device according to a possible embodiment of the invention. This optronic device is a spatially and temporally harmonized bi-sensor system SYS. The bi-sensor system SYS comprises a single optics (not shown) through which a signal SCN representing the observed scene is received. The signal SCN arrives on a beam splitter BS, which may be a semi-reflecting mirror, a splitter plate, or any other appropriate component (for example a prism-based one). The beam splitter BS separates the signal SCN into two copies that are identical apart from their light intensity, directed respectively to a first sensor SENS_A and a second sensor SENS_B, which respectively obtain information A(x, y, t) and B(x, y, t) representing image sequences of the signal SCN. The sensors SENS_A and SENS_B transmit (dotted lines in FIG. 1) the information A(x, y, t) and B(x, y, t) to a calculation circuit C_CIRC.
[0028] The calculation circuit C_CIRC of the bi-sensor system SYS combines the information contained in A(x, y, t) and B(x, y, t) (obtained by the two sensors) in order to carry out a denoising of A(x, y, t) and/or B(x, y, t). The images of the two sensors are spatially and temporally harmonized, i.e. a point of A(x, y, t) and a point of B(x, y, t) correspond to the same point of the scene, imaged at the same time on both sensors. The spatial harmonization results notably from the single optics and from the adjustment of the relative positioning of the beam splitter BS and of the first and second sensors SENS_A and SENS_B. The temporal harmonization results, for example, from the temporal synchronization of the sensors SENS_A and SENS_B. This temporal synchronization (simultaneous sampling by the two sensors) can be obtained, for example, using a single clock common to both sensors, or by regularly synchronizing the respective clocks of the two sensors, or by using two clocks of sufficient quality that their relative drift is negligible.
[0029] The spatial and temporal harmonization of the sensors does not necessarily imply identical spatial and temporal resolutions for the two sensors. But according to one possible implementation these spatial resolutions (for example in DPI, that is to say in number of pixels per inch along the x axis and along the y axis) and temporal resolutions (for example in Hz) are the same. The noise model of the sensors is known. The parameters of the noise model of the two sensors are either known or calculated using estimation techniques mastered by those skilled in the art. More precisely, in the example considered, the noise is assumed to be white and Gaussian, with respective standard deviations sa and sb for the sensors SENS_A and SENS_B.
[0030] FIG. 2 illustrates various steps implemented by the aforementioned bi-sensor system SYS. The bi-sensor system SYS is arranged to compute an image I(x, y, t) called the "best information image", corresponding to an image mixing the information of the two channels (i.e. the information A(x, y, t) and B(x, y, t) respectively from the sensors SENS_A and SENS_B). This image is then used for the motion estimation. The image I(x, y, t) takes the general form:

I(x, y, t) = Pond(A(x, y, t)) * Transfo(A(x, y, t)) + Pond(B(x, y, t)) * Transfo(B(x, y, t))

where Pond() is a weighting function that depends on its argument, and Transfo() is a transform function of the gray levels of its argument. If one of the two images A(x, y, t) or B(x, y, t) (or both) is not "monochrome" but has several channels, such as a red-green-blue color image, then this image is reduced to a monochrome (that is to say, single-channel) image. For example, each RGB element (composed of three red, green and blue pixels) of the color image is associated with a "black and white" element of a monochrome image (constructed for the occasion) by calculating, for each element of the color image, the luminance corresponding to this element of the color image, for example from a weighted sum of the three channels. This luminance then represents the corresponding element of the monochrome image (it is for example encoded on one byte in order to represent 256 possible levels of luminance). The calculation example of I(x, y, t) below consists in calculating an image of best representative local spatial variance from the images A(x, y, t) and B(x, y, t).
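As an illustration of this reduction to a single channel, a possible sketch is given below; the Rec. 601 weights used are an assumption made for the example, since the text only requires some weighted sum of the three channels:

```python
import numpy as np

def to_monochrome(rgb_image):
    """Reduce an H x W x 3 red-green-blue image to a single-channel luminance
    image via a weighted sum of the three channels, encoded on one byte
    (256 levels). The specific weights are an illustrative assumption."""
    weights = np.array([0.299, 0.587, 0.114])
    luminance = rgb_image.astype(np.float64) @ weights
    return np.clip(np.rint(luminance), 0, 255).astype(np.uint8)
```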
[0031] To compute I(x, y, t), the calculation circuit C_CIRC of the bi-sensor system SYS begins by calculating the local spatial averages and variances of each of the two images. These are calculated in a sliding manner over all the pixels. The calculation circuit C_CIRC thus obtains images of local spatial averages and variances, denoted MA(x, y, t) and MB(x, y, t) for the averages, and VA(x, y, t) and VB(x, y, t) for the variances. The spatial extent of the neighborhood over which the local means and variances are calculated (the window size) is indexed to the spatial extent of the motion measurement process that is ultimately used. Local averages and variances can be calculated in a multi-scale manner if the motion measurement technique is itself multi-scale. The mono-scale case is described below. The calculation circuit C_CIRC then calculates normalized images NA(x, y, t) and NB(x, y, t) by:

NA(x, y, t) = A(x, y, t) - MA(x, y, t)
NB(x, y, t) = B(x, y, t) - MB(x, y, t)

These normalized images have a zero local spatial mean. The calculation circuit C_CIRC then calculates the representative local spatial variances VAr(x, y, t) and VBr(x, y, t), which relate the activity in the images to their respective noises:

VAr(x, y, t) = 1 + VA(x, y, t) / sa²
VBr(x, y, t) = 1 + VB(x, y, t) / sb²

The images thus calculated (representing the representative local spatial variances) tend to 1 when the local spatial variance tends to zero. Conversely, these images take large values when the local activity of the images is large compared with their noise.
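A minimal sketch of these per-channel computations follows; the box-shaped sliding window and its size are illustrative assumptions (the text only requires the window to match the scale of the motion measurement):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_and_weight(image, noise_std, window=15):
    """For one channel: sliding local mean M and variance V, zero-mean
    normalized image N = image - M, and representative local spatial
    variance Vr = 1 + V / noise_std**2, following the formulas above."""
    img = image.astype(np.float64)
    m = uniform_filter(img, size=window)                  # MA or MB
    v = uniform_filter(img * img, size=window) - m * m    # VA or VB
    n = img - m                                           # NA or NB
    vr = 1.0 + np.maximum(v, 0.0) / (noise_std ** 2)      # VAr or VBr
    return n, vr
```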
[0032] The calculation circuit C_CIRC finally calculates the image of best representative local spatial variance I(x, y, t) by:

I(x, y, t) = WA(x, y, t) + WB(x, y, t)

with:

WA(x, y, t) = Pond(A(x, y, t)) * Transfo(A(x, y, t))
WB(x, y, t) = Pond(B(x, y, t)) * Transfo(B(x, y, t))
Pond(A(x, y, t)) = VAr(x, y, t) / (VAr(x, y, t) + VBr(x, y, t))
Pond(B(x, y, t)) = VBr(x, y, t) / (VAr(x, y, t) + VBr(x, y, t))
Transfo(A(x, y, t)) = NA(x, y, t)
Transfo(B(x, y, t)) = NB(x, y, t)

This image I(x, y, t) constitutes a merged image of the images A(x, y, t) and B(x, y, t), with, locally, a stronger weighting for the image which has a strong local variance with respect to its own noise. In particular, if for example at the point (x, y, t) the quantity VAr(x, y, t) is preponderant with respect to the quantity VBr(x, y, t), then I(x, y, t) will be roughly equivalent to (i.e. close to) NA(x, y, t). If the quantities VAr(x, y, t) and VBr(x, y, t) are equivalent (i.e. close), then the images NA(x, y, t) and NB(x, y, t) are weighted equally. The calculation circuit C_CIRC then uses this image I(x, y, t) in the analysis phase of the denoising process. This analysis phase comprises a spatial alignment of the images to be denoised (here it is not a question of spatially aligning two images respectively coming from the first sensor and the second sensor, but of spatially aligning consecutive images coming from the same sensor). In the example of FIG. 2, it is the image of the first sensor (represented by A(x, y, t)) that is denoised. In order to achieve this spatial alignment, the calculation circuit C_CIRC performs a measurement of the apparent movement in the image. It performs this measurement on the images I(x, y, t), t varying in a suitable window. Indeed, I(x, y, t) is of better quality than A(x, y, t) (or at least, in the most unfavorable case, of quality at least as good). The calculation circuit C_CIRC thus obtains a global alignment signal AL(t) (according to a more complex implementation not shown in FIG. 2, the alignment signal is dense and depends on the three parameters x, y and t instead of depending only on the parameter t). This obtaining relies on an access to the image I(x, y, t-1), which can be done via a buffer memory (DELAY block in FIG. 2) storing the last image I(x, y, t), which at the next instant t corresponds to the image I(x, y, t-1). The alignment signal AL(t) indicates the displacement to be applied to the image I(x, y, t) so as to align it with the image I(x, y, t-1) (and therefore the displacement to be applied to the image A(x, y, t) to align it with the image A(x, y, t-1), which is the same displacement). Once this spatial alignment has been performed on the signal A(x, y, t), the calculation circuit C_CIRC obtains a signal AL_A(x, y, t), which is the input signal A(x, y, t) aligned with the input signal at the previous instant A(x, y, t-1). Alternatively, the signal that is aligned is the delayed signal A(x, y, t-1) (aligned with A(x, y, t), instead of aligning A(x, y, t) with A(x, y, t-1)). The calculation circuit C_CIRC then calculates a "clean" (denoised) signal CL_A(x, y, t) by regularization of the aligned input AL_A(x, y, t) and of the previous input A(x, y, t-1). The mechanism that makes it possible to obtain the previous input (DELAY block in FIG. 2) is for example a buffer memory.
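For illustration, the fusion itself reduces to a few lines once the normalized images and representative variances are available, for example from the normalize_and_weight() helper sketched earlier (a hypothetical name introduced for this example):

```python
def best_information_image(na, var_a, nb, var_b):
    """Best-information image I(x, y, t) = Pond(A)*NA + Pond(B)*NB with
    Pond(A) = VAr / (VAr + VBr) and Pond(B) = VBr / (VAr + VBr), as in the
    formulas above. na, var_a (resp. nb, var_b) are the normalized image and
    representative variance of A(x, y, t) (resp. B(x, y, t))."""
    total = var_a + var_b          # >= 2 by construction, so never zero
    return (var_a / total) * na + (var_b / total) * nb
```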
The unmodified input that is used as an input parameter in the calculation of CL_A(x, y, t) (in conjunction with the aligned input) is indeed a delayed input. In general, the temporal regularization step is performed on A(x, y, t) or B(x, y, t) (not necessarily on A(x, y, t) as illustrated in FIG. 2), and can operate on more than two consecutive images. The regularization is guided by the result of the motion estimation carried out on the image I(x, y, t). With this method, the motion estimation is thus optimized by taking advantage of the best combination of the sensors (for example on the basis of spectral information that differs from one sensor to the other). The method therefore makes it possible to improve the denoising of the image A(x, y, t) even if it has a very bad signal-to-noise ratio, provided that this can be compensated with the image B(x, y, t), and vice versa. The weighting function applied to the images A(x, y, t) and B(x, y, t) to form the image I(x, y, t) can also be any weighting function making it possible to generate a merged image that is relevant in the sense of the local signal-to-noise ratio (and not necessarily the weighting function described above). Of course, the present invention is not limited to the embodiments described above as examples; it extends to other variants.
Claims (10)
[0001]
1. A method of temporal denoising of a sequence of images, said method comprising: /a/ a capture, using a first sensor (SENS_A), of a sequence of first images (A(x, y, t)) corresponding to a given scene (SCN), each first image being divided into elements each associated with a corresponding sector of said first image, /b/ a capture, using a second sensor (SENS_B) of a type different from the type of the first sensor, of a sequence of second images (B(x, y, t)) corresponding to said given scene (SCN), each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding sector of said second image, each pair of element and associated sector of the second image corresponding to a pair of element and associated sector of the corresponding first image, /c/ obtaining, by a calculation circuit (C_CIRC), a first sequence of images from the sequence of first images (A(x, y, t)) and a second sequence of images from the sequence of second images (B(x, y, t)), /d/ obtaining, by a calculation circuit (C_CIRC), for each sector of each of the images of the first and second image sequences, an associated weighting, /e/ obtaining, by a calculation circuit (C_CIRC), a first weighted sequence of images, each element of each image being equal to the corresponding element of the first image sequence weighted by the weighting associated with the sector associated with said corresponding element, and a second weighted sequence of images, each element of each image being equal to the corresponding element of the second image sequence weighted by the weighting associated with the sector associated with said corresponding element, /f/ obtaining, by a calculation circuit (C_CIRC), an improved image sequence (I(x, y, t)) resulting from a combination of image sequences comprising the first weighted sequence of images and the second weighted sequence of images, /g/ obtaining, by a calculation circuit (C_CIRC), a motion estimation on the basis of the improved image sequence (I(x, y, t)) obtained, /h/ obtaining, by a calculation circuit (C_CIRC), as a function of the calculated motion estimation, a spatial alignment of the images of a sequence of images to be displayed resulting from image sequences corresponding to the given scene and comprising the sequence of first images (A(x, y, t)) and the sequence of second images (B(x, y, t)), /i/ a temporal denoising, by a calculation circuit (C_CIRC), as a function of the determined spatial alignment, of the sequence of images to be displayed.
[0002]
2. A method of temporal denoising of an image sequence according to claim 1, comprising a selection, by a selection circuit, as the sequence of images to be displayed which is denoised in step /i/, of one of the sequence of first images (A(x, y, t)) and the sequence of second images (B(x, y, t)).
[0003]
3. A method of temporal denoising of a sequence of images according to one of the preceding claims, comprising, at step /c/: /c1/ obtaining, by a calculation circuit (C_CIRC), for each sector of each of the images of the sequence of first images (A(x, y, t)) and of the sequence of second images (B(x, y, t)), a local average of said sector, and /c2/ obtaining, by a calculation circuit (C_CIRC), the first image sequence, comprising the subtraction, from each element of each image of the sequence of first images (A(x, y, t)), of the local average of the sector corresponding to said element, and obtaining the second image sequence, comprising the subtraction, from each element of each image of the sequence of second images (B(x, y, t)), of the local average of the sector corresponding to said element.
[0004]
4. A method of temporal denoising of an image sequence according to one of the preceding claims, wherein the first sensor and the second sensor are not spatially harmonized, the method comprising, at step /c/, obtaining, by a calculation circuit (C_CIRC), a first sequence of images resulting from a pre-smoothing of the sequence of first images (A(x, y, t)) and a second sequence of images resulting from a pre-smoothing of the sequence of second images (B(x, y, t)).
[0005]
5. A computer program comprising a sequence of instructions which, when executed by a processor, causes the processor to implement a method according to one of claims 1 to 4.
[0006]
6. A computer-readable non-transitory storage medium, said medium storing a computer program according to claim 5.
[0007]
7. An optronic device (SYS) for temporal denoising of a sequence of images, said optronic device comprising: a first sensor (SENS_A) arranged to capture a sequence of first images (A(x, y, t)) corresponding to a given scene (SCN), each first image being divided into elements each associated with a corresponding sector of said first image, a second sensor (SENS_B), of a type different from the type of the first sensor, arranged to capture a sequence of second images (B(x, y, t)) corresponding to said given scene (SCN), each second image corresponding to a first image, each second image being divided into elements each associated with a corresponding sector of said second image, each pair of element and associated sector of the second image corresponding to a pair of element and associated sector of the corresponding first image, a calculation circuit (C_CIRC) arranged to obtain a first sequence of images from the sequence of first images (A(x, y, t)) and a second sequence of images from the sequence of second images (B(x, y, t)), a calculation circuit (C_CIRC) arranged to obtain, for each sector of each of the images of the first and second image sequences, an associated weighting, a calculation circuit (C_CIRC) arranged to obtain a first weighted sequence of images, each element of each image of which is equal to the corresponding element of the first image sequence weighted by the weighting associated with the sector associated with said corresponding element, and a second weighted sequence of images, each element of each image of which is equal to the corresponding element of the second image sequence weighted by the weighting associated with the sector associated with said corresponding element, a calculation circuit (C_CIRC) arranged to obtain an improved image sequence (I(x, y, t)) resulting from a combination of image sequences comprising the first weighted sequence of images and the second weighted sequence of images, a calculation circuit (C_CIRC) arranged to obtain a motion estimation on the basis of the improved image sequence (I(x, y, t)) obtained, a calculation circuit arranged to obtain, as a function of the calculated motion estimation, a spatial alignment of the images of a sequence of images to be displayed resulting from image sequences corresponding to the given scene and comprising the sequence of first images (A(x, y, t)) and the sequence of second images (B(x, y, t)), and a calculation circuit (C_CIRC) arranged to carry out a temporal denoising, as a function of the determined spatial alignment, of the sequence of images to be displayed.
[0008]
8. An optronic device (SYS) according to claim 7, comprising a selection circuit arranged to select, as the sequence of images to be displayed which is to be denoised, one of the sequence of first images (A(x, y, t)) and the sequence of second images (B(x, y, t)).
[0009]
9. An optronic device (SYS) according to one of claims 7 and 8, wherein the calculation circuit (C_CIRC) arranged to obtain the first and second image sequences is arranged to calculate, for each sector of each of the images of the sequence of first images (A(x, y, t)) and of the sequence of second images (B(x, y, t)), a local average of said sector, and then to obtain the first image sequence by subtracting from each element of each image of the sequence of first images (A(x, y, t)) the local average of the sector corresponding to said element, and to obtain the sequence of second images by subtracting from each element of each image of the sequence of second images (B(x, y, t)) the local average of the sector corresponding to said element.
[0010]
10. An optronic device (SYS) according to one of claims 7 to 9, wherein the first sensor (SENS_A) and the second sensor (SENS_B) are not spatially harmonized, and wherein the calculation circuit (C_CIRC) arranged to obtain the first and second image sequences is arranged to obtain a first sequence of images resulting from a pre-smoothing of the sequence of first images (A(x, y, t)) and a second sequence of images resulting from a pre-smoothing of the sequence of second images (B(x, y, t)).
2015-03-19| PLFP| Fee payment|Year of fee payment: 2 |
2016-02-19| PLFP| Fee payment|Year of fee payment: 3 |
2017-02-21| PLFP| Fee payment|Year of fee payment: 4 |
2017-03-03| CD| Change of name or company name|Owner name: SAGEM DEFENSE SECURITE, FR Effective date: 20170126 |
2018-02-20| PLFP| Fee payment|Year of fee payment: 5 |
2019-02-20| PLFP| Fee payment|Year of fee payment: 6 |
2020-02-20| PLFP| Fee payment|Year of fee payment: 7 |
2021-02-18| PLFP| Fee payment|Year of fee payment: 8 |
2022-02-21| PLFP| Fee payment|Year of fee payment: 9 |
优先权:
申请号 | 申请日 | 专利标题
FR1451719A|FR3018147B1|2014-03-03|2014-03-03|OPTIMIZED VIDEO DEBRISING FOR MULTI-SENSOR HETEROGENEOUS SYSTEM|FR1451719A| FR3018147B1|2014-03-03|2014-03-03|OPTIMIZED VIDEO DEBRISING FOR MULTI-SENSOR HETEROGENEOUS SYSTEM|
US15/123,508| US10116851B2|2014-03-03|2015-03-03|Optimized video denoising for heterogeneous multisensor system|
PCT/EP2015/054409| WO2015132255A1|2014-03-03|2015-03-03|Optimised video denoising for heterogeneous multisensor system|
EP15707385.9A| EP3114831B1|2014-03-03|2015-03-03|Optimised video denoising for heterogeneous multisensor system|
[返回顶部]