METHOD FOR IDENTIFYING A MONO- OR TWO-DIMENSIONAL BAR CODE IN INPUT IMAGE DATA, BAR CODE READING DEVICE
Abstract:
METHOD FOR IDENTIFYING A MONO- OR TWO-DIMENSIONAL BAR CODE IN INPUT IMAGE DATA, BAR CODE READING DEVICE, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT. Method for identifying a mono- or two-dimensional bar code in input image data, the method comprising the steps of: obtaining first image data from a first image of the object, said first image being acquired using a first illumination wavelength; obtaining second image data from a second image of the object, said second image being acquired using a second illumination wavelength different from said first illumination wavelength; calculating a weighting factor based on statistical processing of pixel values of the first image data and pixel values of the second image data; and generating third image data by calculating a weighted combination using the pixel values of said first image data, the pixel values of said second image data, and said weighting factor.
Publication number: BR112013012955B1
Application number: R112013012955-7
Filing date: 2013-05-21
Publication date: 2020-12-29
Inventors: Pablo SEMPERE; Amine ZAHAR; Andras SZAPPANOS; Oliver François
Applicant: Sicpa Holding Sa
IPC main classification:
Description:
Technical Field
The present invention relates to the identification of a mono- or two-dimensional bar code in input image data. The present invention relates specifically to corresponding methods, a bar code reading device, computer programs and computer program products. Embodiments of the present invention employ image processing of digital image data from images taken of an object with a bar code attached.
Background
Mono- and two-dimensional bar codes are common today in the marking, tracking and authentication of objects, such as consumer products, food, beverage packaging, cans and bottles, cigarette packaging and other tobacco products, documents, certificates, financial bills and the like. Bar codes appear in several forms, two examples of which are shown in Figures 1A and 1B: the common one-dimensional bar code of Figure 1A usually comprises an array of elements such as, for example, black and white lines 1', 2'. The information is encoded by concatenating pre-defined groups of black and white lines 1' and 2' of varying thicknesses and distances. These groups are usually associated with a specific character or follow some kind of industry standard. Figure 1B shows a common two-dimensional bar code that encodes information through the arrangement, in general terms, of first-type elements 1'' and second-type elements 2'', such as rectangles, dots, triangles and the like, along two dimensions in some sort of ordered grid. The example in Figure 1B follows an implementation according to the GS1 (trademark) standard DataMatrix ECC 200 (GS1 being an international association providing standards for one- and two-dimensional bar codes). This standard, for example, employs a so-called "L locator pattern" 4 (also called L-shaped solid line, L line, solid line etc.) and a "clock track" 3 (also called clock line, L-shaped clock line etc.) around the data region 5 that carries the actual payload data of the bar code. In both cases of one-dimensional and two-dimensional bar codes, at least two distinguishable types of elements are used. For example, a square printed in white as a first-type element can represent the information 0, while a square printed in black as a second-type element represents the information 1. In any case, however, implementations through white and black lines or dots (elements) represent just one type of example. Specifically, bar codes can also well be implemented using colored and/or fluorescent inks or dyes, thermal printing on heat-sensitive paper, mechanical means such as milling, printing or grinding, or physical/chemical means such as laser engraving, acid etching etc. Any type of implementation is possible, insofar as the elements can be distinguished in their respective types in, for example, image data obtained from the two-dimensional bar code, which is normally attached to some type of object (good). For example, a digital camera can obtain digital image data from a bar code that is printed on a paper document or laser engraved in a metal can. Once the bar code is attached to an object, the encoded information can then be retrieved later using bar code reading devices. Such devices usually first obtain said image data, which was acquired using, for example, the already mentioned digital camera. Further acquisition support can be provided by means of lighting devices, such as LEDs, lasers and other light sources.
The bar code reading devices can then employ processing resources, for example in the form of a microprocessor (CPU) and an associated memory, to process the obtained image data. Typically, such processing involves isolating (identifying) the bar code in the image data and decoding the payload data. The decoded data can then be further processed, displayed or transmitted to other entities. Such image processing usually involves separating the bar code image section from the background, which, in turn, may comprise similar characteristics, since it is part of the product label or packaging design. For example, a conventional technique considers acquiring simultaneous images of a bar code from three different lighting directions using multiple lighting colors. There, three monochrome images are acquired simultaneously, and a processor determines which of the images provides the best contrast, which is then assumed to be the best decodable image. In addition, conventional techniques also consider calculating a linear combination of images for the specific case in which fluorophores are employed. In addition to reliability, that is, the figure that characterizes the fraction of successfully and/or correctly decoded bar codes from a given number of input images, other considerations may therefore play a considerable role. Among others, process efficiency and speed can also determine whether or not a way of processing the image is suitable for bar code identification and/or decoding. Therefore, there is a need for improved concepts of bar code identification and/or decoding in image data. These concepts need to provide satisfactory output reliability, while meeting strict processing speed requirements, that is, the time required between obtaining the image and providing a processing result.
Summary of the Invention
The problems mentioned are solved by the subject matter of the independent claims. Further preferred embodiments are defined in the dependent claims. In accordance with an embodiment of the present invention, a method is provided to identify a mono- or two-dimensional bar code in input image data, the method comprising the steps of: obtaining first image data from a first image of the object, said first image being acquired using a first illumination wavelength; obtaining second image data from a second image of the object, said second image being acquired using a second illumination wavelength different from said first illumination wavelength; calculating a weighting factor based on statistical processing of pixel values of the first image data and pixel values of the second image data; and generating third image data by calculating a weighted combination using the pixel values of said first image data, the pixel values of said second image data and said weighting factor.
In accordance with another embodiment of the present invention, a bar code reading device is provided comprising processing facilities configured to identify a mono- or two-dimensional bar code in input image data, including: obtaining first image data from a first image of the object, said first image being acquired using a first illumination wavelength; obtaining second image data from a second image of the object, said second image being acquired using a second illumination wavelength different from said first illumination wavelength; calculating a weighting factor based on statistical processing of pixel values of the first image data and pixel values of the second image data; and generating third image data by calculating a weighted combination using the pixel values of said first image data, the pixel values of said second image data and said weighting factor. In accordance with further embodiments of the present invention, a computer program comprising code is provided which, when executed on a processing facility, implements an embodiment of the method of the present invention, as well as a corresponding computer program product.
Brief description of the figures
Embodiments of the present invention, which are presented for a better understanding of the inventive concepts and which are not to be understood as limiting the invention, will now be described with reference to the figures, which show: Figures 1A and 1B show schematic views of exemplary conventional bar codes; Figures 2A and 2B show schematic views of scenarios of a bar code being attached to an exemplary object; Figures 3A to 3E show schematic visualizations of image data being subjected to processing according to an embodiment of the present invention; Figures 4A to 4C show schematic visualizations of image data being subjected to processing according to a further embodiment of the present invention; Figure 5 shows a schematic view of an architecture according to an embodiment of the present invention; Figures 6A and 6B show schematic views of modules according to further embodiments of the present invention; Figure 7 shows a flow chart of an embodiment of the method of the present invention; Figures 8A to 8C show schematic views of apparatus and device embodiments of the present invention; and Figures 9A to 9D show schematic visualizations of histograms of image data.
Detailed Description
Figures 2A and 2B show schematic views of usual scenarios in which embodiments of the present invention can be applied. As shown, two-dimensional bar codes 20, 20' are attached to an object 21. Object 21 can be any entity to be marked, to which a bar code can be attached using the techniques already mentioned. The exemplary object 21 shown in Figures 2A and 2B is a consumer goods package, which, following common practice, comprises some design features, such as the product logo 22 shown. Bar codes 20, 20' can be attached to object 21 in such a way that they completely or partially overlap the design features, such as the logo 22. From this example, it is clear that there is no guarantee that bar code 20, 20' appears in some dedicated area or position that allows a given minimum contrast in an image and in image data taken from object 21. Instead, the bar code image in the image data may overlap with images of other characteristics (such as the logo 22).
Figure 2A shows an exemplary black and white implementation of bar code 20, whereas Figure 2B depicts the situation of an "invisible" bar code 20', that is, one printed, for example, using fluorescent inks that can be invisible to the human eye. Specifically, such inks can produce light at a wavelength to which the human eye is not sensitive, or such inks may require special lighting, for example ultraviolet (UV) or infrared (IR) lighting, in order to produce light in an invisible wavelength range or a visible wavelength range. In the latter case, bar code 20' remains invisible as long as the special lighting is not employed.
Figures 3A to 3E show schematic visualizations of image data being subjected to processing according to an embodiment of the present invention. In general, at least two images are acquired from the object using different wavelengths. Thus, first image data is obtained from a first image of the object, in which the first image is acquired using a first illumination wavelength, and second image data is obtained from a second image of the object, in which this second image is acquired using a second illumination wavelength. A schematic view of the obtained first image data is given by Figure 3A, and a schematic view of the obtained second image data is given by Figure 3B. It should be noted that the schematic views of the figures represent areas of similar pixel values by means of a corresponding density of thin lines. For example, areas with dense lines may represent areas with "dark" or comparatively low pixel values, whereas areas with less dense lines may represent areas with "bright" or comparatively high pixel values. Whether, in any given case, low or comparatively high pixel values represent dark colors, bright colors, high or low intensities, high or low saturations, or other figures known and common in digital image processing, is arbitrary. A common image processing convention considers that pixel intensities vary within a range of 0 to 255 (that is, the range of 8 bits). In general, pixel values computed as negative values can be set to 0 and, similarly, pixel values that exceed 255 can be set to 255. According to an embodiment of the present invention, the two sets of first and second image data, shown schematically in Figures 3A and 3B, are acquired with two different illuminations, that is, wavelengths that can be selected from a range or set of various available wavelengths, for example those offered by a bar code scanner. An objective is now to find a linear combination that maximizes contrast by providing a virtually background-free image, in the sense that mostly the bar code remains in the image and everything else (the background) is removed. Such processed image data can then advantageously be handed over to further processing for the actual decoding of the payload data. The enhancement of the object of interest, that is, of the bar code, in such image data substantially facilitates further processing, since the bar code can be more reliably identified and decoded. In the case illustrated in Figure 3A, the elements of a bar code 30 appear with a "bright" pixel intensity (or value), while the background 31 appears "dark". In Figure 3B, representing the second image data, both the bar code elements and the background, respectively, appear darker when compared to the first image data.
For the sake of an exemplary explanation, the pixel value of a bar code element can be assumed to be 200 and 120 in the first and second image data, respectively, and the pixel value of the background can be assumed to be 60 and 45 in the first and second image data, respectively. Now, one of the first and second image data is multiplied by a weighting factor W for the further calculation of a linear combination according to the general equation
Image 3 = F * (Image 1 - W * Image 2), (1)
where Image 1, Image 2 and Image 3 represent the totality of pixel values of an image data set, W is the weighting factor and F is an optional scale factor. Both the weighting factor W and the optional scale factor F are described in more detail below, in conjunction with methods designed to find the optimal values for F and W that maximize image contrast. In a way, the best pair of parameters (F, W) is to be obtained in such a way that, when applied to the input image data (Image 1, Image 2), they result in a final image with the least background and the best contrast between the dots/lines of the bar code and the remaining background. The intermediate result (Image 1 - W * Image 2) corresponds to the third image data in embodiments of the present invention, and the result of equation (1) corresponds to the fourth image data in embodiments of the present invention. In equation (1) the image data 1 to 3 are combined by means of a pixel-by-pixel operation, meaning that, for each pixel of the second image data with coordinates (i, j), the respective pixel value is multiplied by W and then subtracted from the pixel value of the pixel (i, j) of the first image data, so as to form the pixel value with coordinates (i, j) of the third image data. The intention is to calculate the weighting factor W in such a way that the background pixel values of both image data become similar. Following the numerical example above, setting W to 1.333 will yield image data (W * Image 2, shown schematically in Figure 3C) in which the bar code elements take on a pixel value of 160, while the background takes on the value of 60. The subsequent subtraction, as defined by equation (1), is intended to provide an image with a vanishing background, that is, equal to 0, and finite bar code pixel values that are clearly distinguishable, for example the value 40, as shown schematically in Figure 3D. The optional multiplication using a scale factor F of, for example, 5 rescales the pixel intensities to 200, as they were initially for Image 1, but keeps the background at 0. As shown schematically in Figure 3E, a clear improvement is obtained when compared to the first and/or second initial image data. In other words, the proposed technique is based on illuminating an object with different colors and collecting the different luminescent responses from the background and the foreground. An additional exemplary embodiment may consider image acquisition of near-IR (NIR) fluorescent bar codes, while sequentially (not simultaneously) illuminating objects with red, blue, green, IR and no light. Then a weighting factor W is computed which, multiplied by the image acquired with green lighting, provides a "new" image G' = W * G with a background comparable to the image acquired with blue lighting. Thus, a subtraction (B - W * G) provides a background-free or virtually background-free image. Finally, the optional multiplicative factor F can rescale the pixel intensities to provide pixel values that are suitable for further processing.
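Purely as an illustration of equation (1), the following NumPy sketch (not part of the patent text; the function name, the toy 2x2 arrays and the clipping to the 8-bit range are assumptions) reproduces the numerical example above:

```python
import numpy as np

def weighted_combination(image1, image2, w, f=1.0):
    """Image 3 = F * (Image 1 - W * Image 2), clipped to the 8-bit range, cf. equation (1)."""
    combined = f * (image1.astype(np.float64) - w * image2.astype(np.float64))
    return np.clip(combined, 0, 255).astype(np.uint8)   # negative values -> 0, values > 255 -> 255

# Numerical example from the description: bar code pixels 200/120, background pixels 60/45.
image1 = np.array([[200, 60], [60, 200]], dtype=np.uint8)
image2 = np.array([[120, 45], [45, 120]], dtype=np.uint8)
print(weighted_combination(image1, image2, w=1.333, f=5.0))
# bar code pixels end up close to 200 (5 * (200 - 1.333 * 120)), the background close to 0
```

Running this prints an array in which the bar code pixels sit at 200 and the background at 0, mirroring the Figure 3E result described above.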
Illuminating a bar code with different colors will usually produce pixel values that are substantially proportional to the excitation spectrum of the ink used to print the bar code (for example, a measured nominal characteristic). Such an excitation spectrum can be specific to the dye, such that variations in the dye response, when excited in red, blue, green or IR, will be different from variations in the background response when exposed to the same colors. In the exemplary case of fluorescent inks, it can be assumed that luminescent, possibly also fluorescent, background pigments will have a distinguishable excitation spectrum, so that they correspond only little to the characteristics of the ink specifically used to print the (visible or invisible) bar code. These considerations should allow, in most cases, finding an optimal linear combination that equalizes the background and, in turn, enhances the bar code in the image data.
Figures 4A to 4C show schematic views of additional image data that are considered by another embodiment of the present invention. This embodiment considers, in a way, a selection of the pair of wavelengths that are used, since it may happen that some parts of the object react quite differently to the employed illumination wavelengths and colors. This can be addressed by a specific selection of a pair of illumination wavelengths (first and second image data) for which all background areas behave similarly. This problem is specifically illustrated in Figures 4A to 4C, when compared to Figures 3A to 3E. Figure 4A again shows the input image data already discussed in conjunction with Figure 3A. However, Figure 4B shows image data acquired with a third wavelength that is different from both the first and second wavelengths mentioned above. As can be seen from the schematic representation of Figure 4C, the subtraction removes most of the background, but a pattern remains, which, for example, surrounds the logo. This is due to the fact that some characteristics of the object are fluorescent when illuminated at one wavelength, while having no fluorescent response when illuminated at another wavelength. Because of this phenomenon, some regions may appear bright in one color, but dark in others. Therefore, it may be preferable to select the appropriate pair of wavelengths beforehand, or, alternatively, to acquire more than two images with more than two illumination wavelengths and compare the respective outputs (for example, Figure 3E vs. Figure 4C) in order to determine the wavelengths for subsequent or current use. Considering the contour shape of the respective linear combination results can be one way of making such a selection.
Figure 5 shows a schematic view of an architecture according to an embodiment of the present invention. In general, this architecture consists of a collection of modules that are implemented, for example, through corresponding sections of instructions or code, for each functionality involved. The modules shown are part of a complete workflow process starting from the acquisition of the images and ending with the final computed image and its delivery to the decoding mechanism (further processing). In 401 ("ALLOW COMBINATION MODE") the mode proposed by an embodiment of the present invention is enabled, that is, the weighted linear combination of the above equation (1). In general, equation (1) is designed to remove any background and improve the contrast of the foreground. In a way, this serves to obtain a "final" image with a minimal background.
In equation (1), the first image data Image 1 can be the one that shows the strongest response of the ink/dye used for the bar code, that is, of the object of interest. At 404 ("IMAGE CAPTURE PROCESS"), a first image is acquired with a first wavelength at 402 ("Image Capture 1 with Light Source 1") and a second image is acquired with a second wavelength at 403 ("Image Capture 2 with Light Source 2"). Through an optional module 406 ("IMAGE QUALITY CONTROL") with algorithm 405 ("Dark image checking algorithm"), the corresponding image data is checked for darkness, and the result is evaluated at 410 ("Image quality Yes No"). If it is not sufficient (407 = "NO"), the image capture process 404 is repeated. This repetition can be limited by a counter 408 ("N = 1, ... 10"). A module 413 ("COMPUTE FACTOR", "ALGORITHM MODULE") with algorithm 411 ("Adjust background factor (-) algorithm W") and algorithm 412 ("Adjust image histogram (*) algorithm F") receives the input image data, either after a "YES" at 409 or directly from process 404. The calculated factors 414 ("W, F") are provided to the image processing module 415 ("IMAGE PROCESSING MODULE"), which employs a library 416 ("DECODING LIBRARY") and provides the output 417 ("DM CODE", "BAR CODE DATA").
The objective of algorithm 411 is to find the best match between the backgrounds of the two captured images. This algorithm will compute the W parameter that will be used to subtract the second image Image 2 from the first image Image 1. Since each image is illuminated uniformly, but with different wavelengths, a real number W can normally be found which matches the backgrounds of the two images. Optionally, one can consider using only a fraction of the image and of the corresponding image data, a so-called region of interest (ROI), to find the W parameter within the images. An ROI can be employed to avoid edge effects of the images (the edge of the image can be quite bright) and to minimize the impact of noise, especially in the corners of the image. One way to calculate the W factor includes retrieving the histogram of each image (number of pixels having a pixel value vs. pixel value). Based on this histogram, different specific values can be computed. A formula to calculate the parameter that matches the images is given by
W = Σi Σj Image 1i,j / Σi Σj Image 2i,j, (2)
where the sum Σi goes from the minimum X coordinate value of the ROI up to the maximum X coordinate value of the ROI, the sum Σj goes from the minimum Y coordinate value of the ROI to the maximum Y coordinate value of the ROI, Image 1i,j is the pixel value of the first image data at coordinate (i, j), and Image 2i,j is the pixel value of the second image data at coordinate (i, j). In a way, W, as computed by the above equation (2), is indicative of an average ratio of pixel intensities between the two images. This approach assumes that the bar code ink may not react in the same way as the background, depending on the wavelength of the illumination. By trying to equalize the background, one is able to remove most of the background and keep the bar code. One may also consider that, if W is too high, a deterioration of the bar code may occur. Therefore, the accuracy of the algorithm is preferably guaranteed.
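A minimal sketch of the scalar W of equation (2) could look as follows; the function name and the ROI slice convention are assumptions of this sketch, not taken from the patent:

```python
import numpy as np

def scalar_w(image1, image2, roi=None):
    """W = sum of Image 1 pixel values in the ROI / sum of Image 2 pixel values in the ROI.

    This equals the ratio of mean pixel values (mean H1 / mean H2) over the same region,
    which is how the Figure 6A workflow described below expresses it.
    """
    if roi is not None:
        y0, y1, x0, x1 = roi            # hypothetical (min_y, max_y, min_x, max_x) convention
        image1 = image1[y0:y1, x0:x1]
        image2 = image2[y0:y1, x0:x1]
    return image1.astype(np.float64).sum() / image2.astype(np.float64).sum()
```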
Figure 6A describes a workflow of algorithm 411 implemented to find the multiplicative factor W. Specifically, the first image data 511 ("Image 1") and the second image data 514 ("Image 2") are subjected to image processing 519 ("IMAGE PROCESSING") to obtain corresponding histograms 512 ("Histogram Image 1") and 515 ("Histogram Image 2"). Histograms 512, 515 are subjected to a calculation 520 ("CALCULATION") and the respective means are computed at 513 ("Compute the mean (mean H1) of histogram 1 within the ROI") and at 516 ("Compute the mean (mean H2) of histogram 2 within the ROI"). The mean values 521 ("mean H1") and 522 ("mean H2") are provided for computing the ratio 517 ("compute the ratio: mean H1 / mean H2"), and the final result 518 ("W") is obtained. After finding the W factor, an optional check can be performed to verify that the W value is coherent/consistent with given requirements or limits.
Alternative means for calculating the weighting factor include using a mean value of the first image data divided by a mean value of the second image data, a median value of the first image data divided by a median value of the second image data, a minimum value of the first image data divided by a minimum value of the second image data, a standard deviation value (StDev) of the first image data divided by a standard deviation value of the second image data, an average pixel-by-pixel ratio between the first image data and the second image data (that is, the mean of Image 1i,j / Image 2i,j), and also calculating a linear regression for a data set X and Y, for example the data set X being the pixel intensities of Image 1 and the data set Y being the pixel intensities of Image 2, and taking the slope of the fitted line as the scalar W. The examples mentioned above can be calculated over the entire image data or only over a selected fraction, such as a user-defined ROI. In addition, the calculations can be made on the histogram of the complete image or complete ROI, or on a respectively truncated histogram. In the latter case, the histogram is truncated using a given percentage of the population corresponding to the maximum intensities. The percentage of the population that will not be taken into account corresponds to: α = bar code area [in pixels] / ROI area [in pixels]. Thus, such an alpha (α) would select the topmost points in terms of intensity and discard them for the computation of the scalar W.
Furthermore, a vector W = (w1, ..., wn) could be used, in the sense of specific values wi for specific regions. This method allows one to select, from a set of images taken with different illumination wavelengths, the pair of the two images that best fit. The sequence for determining W as an array of several values (usually < 5) can be as follows: (1) obtain different images with different excitation/illumination wavelengths (Image 1, Image 2, Image 3 etc.); (2) truncate the percentage of the histogram population as explained above; (3) select the pair of images with the closest background based on the correlation between histograms and their properties; (4) determine the locations of the maxima of the pixel intensity histograms in each selected image by polynomial fitting or peak recognition; (5) identify the regions by isolating the maximum locations of the histograms (a region in a pixel intensity histogram actually corresponds to a range of pixel intensities, that is, a spatial region in the image); (6) the algorithm will now have identified a series of regions (ri, with i typically from 1 to 5); for each given region ri, calculate the ratio between the maximum location in Image 1 and in Image 2; the ratio calculated for region ri will then be wi; apply this wi region by region to Image 2 and then proceed through equation (1): I1 - wi * I2; and (7) determine F with the usual procedure and multiply the resulting image by this value.
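Under stated assumptions, steps (2) to (6) of the sequence above could be sketched as follows: histogram maxima are detected, each maximum of Image 2 defines an intensity region, and the per-region weight is the ratio of the corresponding maximum locations. The use of scipy.signal.find_peaks, the prominence threshold, the midpoint region boundaries and the assumption that the maxima of the two histograms correspond in order are choices of this sketch, not requirements of the described method.

```python
import numpy as np
from scipy.signal import find_peaks

def histogram_peaks(image, truncate_fraction=0.0):
    """Histogram of an 8-bit image, optionally truncated at the bright end, plus its maxima."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    if truncate_fraction > 0.0:
        # discard the brightest fraction of the population (e.g. alpha = bar code area / ROI area)
        cdf = np.cumsum(hist) / hist.sum()
        cutoff = np.searchsorted(cdf, 1.0 - truncate_fraction)
        hist = hist[:cutoff + 1]
    peaks, _ = find_peaks(hist, prominence=max(1, int(hist.max() * 0.05)))
    return peaks, hist

def vector_w_subtraction(image1, image2, truncate_fraction=0.05):
    """Subtract Image 2 from Image 1 with per-region weights derived from histogram maxima."""
    peaks1, _ = histogram_peaks(image1, truncate_fraction)
    peaks2, _ = histogram_peaks(image2, truncate_fraction)
    n = min(len(peaks1), len(peaks2))   # the selected pair should have the same maximum count
    # intensity-region boundaries in Image 2: midpoints between consecutive maxima
    bounds = [0] + [(peaks2[i] + peaks2[i + 1]) // 2 for i in range(n - 1)] + [256]
    result = image1.astype(np.float64)
    for i in range(n):
        w_i = peaks1[i] / peaks2[i]     # w_i = ratio of the corresponding maximum locations
        mask = (image2 >= bounds[i]) & (image2 < bounds[i + 1])
        result[mask] -= w_i * image2[mask]
    return np.clip(result, 0, 255).astype(np.uint8)
```

Whether midpoints between maxima or some other windowing delimits the regions is a design choice that the description leaves open.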
In other words, the vector W above can be further illustrated as follows. The maximum locations will probably be at different positions in the histograms of Images 1 and 2. First, one can select the pair of images 1 and 2 that most closely resemble each other among all the available colors (that is, whichever two are the most similar when more than two colors are in question). For this, the following criteria can be used: (1) the number of maximum locations must be the same for the selected image pair, even if their positions are different; and (2) the correlation coefficient calculated from the histograms of the two images should be the largest. Once an ROI is selected and a histogram is constructed, one may find that a red image is different from the blue and green images, since the red one may present a different number of maximum locations and can therefore exhibit a different histogram profile. Therefore, it is preferable to obtain a correlation coefficient using the green and blue images, since using red and blue or red and green yields comparatively poor coefficients. The correlation can be calculated using the histogram data values (only along the y direction) or using arrays given by position * value (that is, x * y), which can facilitate the discrimination of the best pair of images. The comparison of the histograms can thus allow the determination of the optimal images for calculating the vector W (for example, blue and green, as above). Once they are selected, one can define regions around the maximum locations in Image 2 (green in this case) and then apply a different ratio to each region, given by the maximum location values. Once one multiplies the different regions of Image 2 by this vector W, one obtains an image that can then be subtracted from Image 1 (blue). A comparison of the results achieved with a scalar W and with a vector W can show that a more efficient background subtraction is obtained when using the vector approach. The concepts of similar histograms and their properties, such as the maximum location count, are also illustrated in conjunction with Figures 9A to 9C, which show schematic views of histograms of a green image (Figure 9A, with three maximum locations), a blue image (Figure 9B, with three maximum locations) and a red image (Figure 9C, with only two maximum locations). The respective dashed lines depict the entire histograms, while the solid lines depict the truncated histograms. The regions are explained in conjunction with Figure 9D, which shows three regions 901, 902 and 903, separated by dashed lines around the corresponding maximum locations.
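Returning to the pair-selection criteria (1) and (2) above, the snippet below is a hedged sketch of how such a selection could be automated. It reuses the hypothetical histogram_peaks() helper from the previous sketch, and the dictionary keyed by illumination color is likewise an assumption:

```python
import itertools
import numpy as np

def select_image_pair(images, truncate_fraction=0.05):
    """images: dict mapping an illumination label (e.g. 'blue') to a 2-D uint8 array."""
    best_pair, best_corr = None, -np.inf
    for (name_a, img_a), (name_b, img_b) in itertools.combinations(images.items(), 2):
        peaks_a, hist_a = histogram_peaks(img_a, truncate_fraction)
        peaks_b, hist_b = histogram_peaks(img_b, truncate_fraction)
        if len(peaks_a) != len(peaks_b):          # criterion (1): same number of maxima
            continue
        n = min(len(hist_a), len(hist_b))         # truncation may leave different lengths
        corr = np.corrcoef(hist_a[:n], hist_b[:n])[0, 1]   # criterion (2): histogram correlation
        if corr > best_corr:
            best_pair, best_corr = (name_a, name_b), corr
    return best_pair, best_corr
```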
In addition, a "weight matrix" W = Wi,j could also be used, as follows. One alternative constitutes a so-called correction matrix, which is used as follows: Wi,j = Ws * (1/2) * [α / (Image 1i,j / Image 2i,j)], where Ws is the scalar W defined in equation (2) and α is the maximum of the Image 1i,j / Image 2i,j ratio. This approach aims to find the matrix W that equalizes the background pixel by pixel, but avoids modifying the ink dots, that is, the pixels that represent bar code elements. Another alternative is a so-called contrast enhancement matrix. For this approach, two images are used which are sufficiently similar; one way to determine them can be through one of the described embodiments. Once the two images Image 1 and Image 2 have been acquired, the Image 1i,j / Image 2i,j ratio is calculated pixel by pixel. Image 2 can be selected as the image in which the bar code elements appear with comparatively lower pixel intensity. Then the contrast of the resulting matrix is enhanced by using conversion techniques, such as squaring the pixel intensities or histogram equalization. The resulting matrix will be the contrast enhancement matrix W. Finally, Image 1 (or even a third Image 3) is divided by W. This approach aims to find a matrix W which reduces the intensity of the background while maximizing the intensity of the dots (bar code elements). Thus, in a way, the matrix W acts as a contrast enhancer. This constitutes a weighted non-linear combination. In any case, the examples above are specific implementations of statistical processing to calculate the weighting factor W, in its respective scalar, vector or matrix form, according to embodiments of the invention.
One way to calculate the F factor by implementing an algorithm 412 is shown in Figure 6B. The objective of algorithm 412 is to further enhance the contrast between the foreground (bar code) and the background and to provide a better quality image to the decoding library 416. In a way, the image data is linearly stretched by the F factor, which can be computed as
F = 255 / maxi maxj (Image 1i,j - W * Image 2i,j), (3)
where maxi yields the maximum as i goes from the minimum X coordinate value of the ROI to the maximum X coordinate value of the ROI, maxj yields the maximum as j goes from the minimum Y coordinate value of the ROI to the maximum Y coordinate value of the ROI, Image 1i,j is the pixel value of the first image data at coordinate (i, j), and Image 2i,j is the pixel value of the second image data at coordinate (i, j). The value 255 in equation (3) is an example of a maximum pixel value given by the bit length of the variable type used in the image data. Using this value, one can be sure to stretch the image until it starts to have saturated dots. After finding the F factor, one can continue with the computation 415 of the final image based on the pair (W, F). Among others, one may consider that the F factor may need to saturate the dots/lines of the bar codes as much as possible, that F should not increase the background intensity level too much in order to avoid fluorescence, and that F should therefore be a trade-off between the aforementioned constraints.
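A minimal sketch of the scale factor F of equation (3), assuming 8-bit image data, is given below; the guard against a non-positive maximum and the ROI convention are additions of the sketch:

```python
import numpy as np

def scale_factor(image1, image2, w, roi=None):
    """F = 255 / maximum over the ROI of (Image 1 - W * Image 2), cf. equation (3)."""
    diff = image1.astype(np.float64) - w * image2.astype(np.float64)
    if roi is not None:
        y0, y1, x0, x1 = roi            # same hypothetical (min_y, max_y, min_x, max_x) convention
        diff = diff[y0:y1, x0:x1]
    peak = diff.max()
    return 255.0 / peak if peak > 0 else 1.0   # guard against a non-positive maximum (assumption)
```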
Again with reference to Figure 5, the optional module 406 for image quality control is described. This optional module checks the quality of the acquired image to verify that the image is compatible with the subsequent algorithms. Module 406 or algorithm 405 can, for example, issue a Boolean variable after the calculation: a Yes at 409, if the image is good enough, which then allows the algorithm to perform the next step, or a No, if the image is not compatible. In the latter case, it will return to the image capture step 404 and try to take new images. This step can have a maximum number of attempts, for example 10 attempts, before leaving the algorithm. The maximum value for this loop 408 can be changed by a user. The darkness check algorithm 405 considers cases in which the input image does not show any object of interest. This can happen at the beginning of object tracking using a camera, when there is no bar code on the object, or when there is no object at all. To cover this case before performing any heavier calculations, one can consider checking the average intensity level of the image and comparing it to a certain threshold. If the average level falls below the threshold, it means that the image is too dark to be corrected and therefore too dark to be processed. Thus, the test of algorithm 405 can return a negative Boolean. Algorithm 405 can also be described by (1) computing an average level of pixel intensity "μ", (2) comparing μ with a threshold by defining Δ = μ - threshold; and (3) if Δ > 0, then returning "FALSE"; otherwise, if Δ < 0, then returning "TRUE".
Figure 7 shows a flow chart of an embodiment of the method of the present invention. Specifically, embodiments of the method of the present invention are designed to identify a mono- or two-dimensional bar code in input image data and to provide an improved output that can be further processed to decode the payload data of a bar code. These method embodiments consider a step 701 of obtaining first image data of a first image of the object, said first image being acquired using a first illumination wavelength, a step 702 of obtaining second image data of a second image of the object, said second image being acquired using a second illumination wavelength different from said first illumination wavelength, a step 703 of calculating a weighting factor based on statistical processing of pixel values of the first image data and pixel values of the second image data, and a step 704 of generating third image data by calculating a weighted combination using the pixel values of said first image data, the pixel values of said second image data, and said weighting factor. An optional step 705 is to calculate fourth image data based on the multiplication of said third image data by a scale factor. Either said third or said fourth image data is provided for further processing and/or decoding.
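Purely to illustrate how steps 701 to 705 fit together, the hypothetical sketch below chains the weighted_combination(), scalar_w() and scale_factor() helpers from the earlier sketches; the darkness threshold and the exception-based handling of module 406 are assumptions, not part of the described method:

```python
import numpy as np

def too_dark(image, threshold=10.0):
    """Algorithm 405: reject the image when its mean intensity falls below a threshold."""
    return image.astype(np.float64).mean() < threshold

def identify_barcode(image1, image2, roi=None):
    # steps 701/702: first and second image data acquired with two illumination wavelengths
    if too_dark(image1) or too_dark(image2):
        raise ValueError("input image too dark; re-acquire (module 406)")
    w = scalar_w(image1, image2, roi)           # step 703: weighting factor W
    f = scale_factor(image1, image2, w, roi)    # scale factor F for the optional step 705
    # step 704 (third image data) and step 705 (fourth image data, rescaled by F) combined
    return weighted_combination(image1, image2, w, f)
```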
Figure 8A shows a general apparatus embodiment of the present invention. Specifically, an apparatus 100 comprising a processing unit 110 and a memory unit 120 (processing facilities) is shown. The memory unit 120 can store code which, when executed in the processing unit 110, implements one or more embodiments of the method of the present invention. The apparatus 100 may additionally comprise means for illuminating the object with at least two different wavelengths. Optionally, apparatus 100 may comprise an imaging unit 131 for acquiring a digital image of the bar code as applied, for example, to an object. In addition, apparatus 100 may comprise a communication unit 132 for communicating identification and/or decoding results to other entities, such as servers, controllers and the like. Communication can be carried out through a network, such as a local area network (LAN), a wireless network (WLAN), the internet and the like. In addition, bus systems, such as CAN, can also be used for data exchange.
Figure 8B shows a schematic view of a portable embodiment of an apparatus for taking an image of the bar code and identifying and (optionally) decoding it. The apparatus 100' comprises a window through which a digital image of an object can be acquired. A bar code is applied to the object using any mechanical, physical or chemical printing method. The apparatus 100' may additionally comprise an integrated processing unit (not shown) for carrying out one or more method embodiments of the present invention. An additional operating element can be provided to switch the apparatus 100' on and off and/or to start taking an image, acquiring/obtaining the respective digital image data and/or processing the digital image data in order to identify and/or decode the bar code. The apparatus 100' may additionally comprise means for illuminating the object with at least two different wavelengths. The apparatus can, of course, take other forms and can be connected by wire or wirelessly.
Figure 8C shows a schematic view of a fixed type of apparatus for taking a bar code image and identifying and (optionally) decoding it. In this sense it is, for example, a module operable to be mounted on a production/distribution line to identify bar codes arranged on objects transported on that line. Again, the apparatus 100'' comprises a window through which a digital image of an object with a bar code can be acquired. The apparatus 100'' may further comprise an integrated processing unit (not shown) to carry out one or more method embodiments of the present invention. An additional fixing element may be provided to mount the apparatus 100'' on a production line, for example one in which a plurality of objects pass by the apparatus 100'' for identification. The apparatus 100'' may further comprise means for illuminating the object with at least two different wavelengths. According to another embodiment, said means may be configured separately from the apparatus 100'', in order to contribute to optimal lighting and image acquisition. The apparatus can, of course, take other forms and can be connected by wire or wirelessly.
The general concept is to make use of obtaining/acquiring images of an object while it is sequentially illuminated with various wavelengths, and to use the difference in the response of the sample foreground (bar code) and background to those wavelengths in order to subtract the background from the image and be left only with the foreground. Once the images corresponding to the different excitation wavelengths are acquired, the approach adopted for producing the background subtraction is to use an algorithm that tries to find the linear combination of the relevant pair of images that equalizes the background while keeping the foreground different, so that a subtraction of the images would leave only the foreground in the image (with the background being removed entirely or partially). Once the background is removed, the resulting image with improved contrast can be sent to the decoding mechanism.
The following are examples in which the proposed techniques provide a distinct benefit. Situations in which the present technique can help to decode bar codes on complex backgrounds, which typically fail when decoded with a standard approach, include the case of black ink on a complex fluorescent background. An example of how this can help to decode bar codes on complex caps is given by those objects whose background becomes fluorescent not only when excited in red, but also when excited in blue. The results are substantially improved, since the optimal weighted combination found clearly improves the image contrast, especially when compared to the standard image that would otherwise be used for decoding, corresponding to a specific illumination wavelength. In contrast, the standard red or blue images for the object were non-decodable due to the strong background yielding strong bar code modulation, but also due to the presence of a sell-by date printed at the top of the bar code.
However, the resulting image can clearly improve the situation and can eventually render the image data decodable. An additional scenario may consider black ink on translucent PET caps in sunlight, which can be problematic due to the solar spectrum extending from deep UV to far IR. Since the devices employed are sensitive in a portion of these spectra, the device may be sensitive to that sunlight, which potentially affects the quality of the bar code reproduced in the image. Therefore, these types of caps (objects) suffer from sunlight, in that PET caps can be partially transparent to IR irradiation. Thus, when exposed to strong sunlight, the bar code dots reproduced in the image will merge with the bright background, making decoding impossible. This technique can also help in this case: since the sunlight sensed in a certain time frame is the same with or without a lighting device, two images can be acquired, one with blue light, for example, and another without additional lighting (that is, sunlight only). The difference between them allows one to efficiently remove the background and leave only the fluorescent dots/lines of the bar code in the image. This means that the pair of images to be used for this variant would be blue and "no additional lighting" (an image acquired by the device with the same exposure time and the light intensity set to 0). Although no additional lighting is employed, this is still lighting with another wavelength in the sense of the disclosed embodiments, due to the fact that the sun provides a spectrum with several wavelengths. The use of this technique can be combined with an automatic adjustment of the exposure times, until images are found in which the sensed sunlight does not merge with the lines/dots of the bar code. Once the right exposure time is reached, images can be produced which allow one to use this technique for decoding in strong sunlight. The use of this technique thus allows the resistance to sunlight to be increased.
Another scenario may consider black ink on PET caps with complex black and white backgrounds. Such backgrounds are "complex" due to the modulation of the ink produced by the background change and because the black structures are comparable in size to the dots (bar code elements). In such cases, acquiring a blue and an IR image and applying this technique can help improve decoding performance, as it helps to remove the background, even if the modulation is still present. Another scenario may consider invisible ink on labels with a strongly modulated background, where the technique can also help. In this example, a bar code can be heavily modulated due to the background change. This is due to deliberate printing outside of a quiet zone, and it can be difficult to decode the bar code. However, when acquiring the IR image and applying the proposed technique, the background can be almost completely removed, so that, although the modulation is still present, the ability to decode the bar code is significantly improved. Another scenario may consider invisible ink on can bottoms with complex backgrounds. Bar codes of invisible ink on can bottoms can render images, when scanned, which are sometimes difficult to decode. This is due to the reflections that come from the can bottom, the special texture of that bottom, with grooves that maximize the reflections in certain orientations, and also due to the presence of characters embossed in the relief of the bottom of the can.
As can be seen, the red image that would normally be used for decoding has a complex background with some bright spots that yield bar code modulation. However, the result of applying this technique to remove the background, using the appropriate linear combination, generates a final image with excellent contrast that is easy to decode. In this case, the red image can be acquired using different parameters than for the colors blue and green. This is done to produce images that have a comparable background from the beginning of the calculation. Although detailed embodiments have been described, they serve only to provide a better understanding of the invention defined by the independent claims and should not be seen as limiting.
Claims (17)
[0001] 1. Method for identifying a mono- or two-dimensional bar code in input image data, the method characterized by the fact that it comprises the steps of: - obtaining (701) first image data of a first image of an object (21), said first image being acquired from said object (21) using a first illumination wavelength; - obtaining (702) second image data of a second image of said object (21), said second image being acquired from said object (21) using a second illumination wavelength different from said first illumination wavelength; - calculating (703) a weighting factor based on statistical processing of pixel values of the first image data and pixel values of the second image data; and - generating (704) third image data by calculating a weighted combination using the pixel values of said first image data, the pixel values of said second image data and said weighting factor.
[0002] 2. Method according to claim 1, characterized by the fact that said weighting factor is calculated to approximate the pixel values of said second image data to the pixel values of said first image data in a background area of the image excluding said bar code.
[0003] 3. Method according to claim 1 or 2, characterized by the fact that said statistical processing includes calculating a sum of pixel values of the first image data and a sum of pixel values of the second image data.
[0004] 4. Method according to claim 3, characterized by the fact that said weighting factor is calculated as a ratio of said sum of pixel values of the first image data and said sum of pixel values of the second image data.
[0005] 5. Method according to claim 1 or 2, characterized by the fact that said statistical processing includes calculating any of a mean value, average value, extreme value, minimum value, maximum value, standard deviation value and a linear regression value.
[0006] 6. Method according to any one of claims 1 to 5, characterized in that different weighting factors are calculated for different regions of the image data, and said generating (704) uses said different weighting factors to calculate said weighted combination.
[0007] 7. Method according to any one of claims 1 to 6, characterized in that it additionally includes generating a weighting matrix, each element of the weighting matrix being calculated based on a pixel value of said first image data and a corresponding pixel value of the second image data, and in which calculating the weighted combination also uses said weighting matrix.
[0008] 8. Method according to any one of claims 1 to 7, characterized by the fact that the pixel values subject to said statistical processing are pixel values within a selected region of the first and second image data.
[0009] 9. Method according to any one of claims 1 to 8, characterized by the fact that the pixel values subject to said statistical processing are pixel values selected with respect to a fraction of a histogram.
[0010] 10. Method according to any one of claims 1 to 9, characterized in that said first image data and said second image data are selected from a plurality of image data, each image data of said plurality being acquired using a different illumination wavelength.
[0011] 11. Method according to claim 10, characterized in that said first image data and said second image data are selected based on their histogram properties.
[0012] 12. Method according to any one of claims 1 to 11, characterized in that it additionally comprises calculating (705) fourth image data based on the multiplication of said third image data by a scale factor.
[0013] 13. Method according to any one of claims 1 to 12, characterized in that said weighted combination is a linear weighted combination.
[0014] 14. Bar code reading device (100, 100', 100''), characterized by the fact that it comprises processing resources (110) configured to identify a mono- or two-dimensional bar code in input image data, including: - obtaining first image data of a first image of an object (21), said first image being acquired from said object (21) using a first illumination wavelength; - obtaining second image data of a second image of said object (21), said second image being acquired from said object (21) using a second illumination wavelength different from said first illumination wavelength; - calculating a weighting factor based on statistical processing of pixel values of the first image data and pixel values of the second image data; and - generating third image data by calculating a weighted combination using the pixel values of said first image data, the pixel values of said second image data and said weighting factor.
[0015] 15. Bar code reading device (100, 100', 100'') according to claim 14, characterized by the fact that the processing resources (110) are additionally configured to implement a method as defined in any one of claims 2 to 13.
[0016] 16. Bar code reading device (100, 100', 100'') according to claim 14 or 15, characterized by the fact that it additionally comprises imaging means (131) and illumination means for acquiring the first and second images.
[0017] 17. Non-transitory computer-readable medium, characterized by the fact that it comprises code, said code, when executed on a processing resource, implementing a method as defined in any one of claims 1 to 13.