Patent Abstract:
IMAGE PROCESSING DEVICE AND METHOD, AND PROGRAM An image processing method and apparatus are disclosed which make it possible to prevent the effect of a filtering process from being reduced by local control of the filtering process in encoding and decoding. Based on system specification information managed by a system specification management unit (141), a border control flag generation unit (132) of a control information generation unit (112) generates a border control flag. According to the value of the border control flag, a control unit (171) of an adaptive filtering process unit (113) determines the way in which the filtering process is applied to pixels located in the vicinity of a border of a slice of the image. For example, the control unit (171) selects whether to perform the filtering process closed within the current slice. The apparatus and method disclosed herein are applicable to, for example, an image processing apparatus.
Publication Number: BR112012001639B1
Application Number: R112012001639-3
Filing Date: 2010-07-23
Publication Date: 2021-01-26
Inventor: Kenji Kondo
Applicant: Velos Media International Limited
IPC Primary Class:
Patent Description:

[0001] The present invention relates to an image processing device and method, and more specifically relates to an image processing device and method which make it possible to suppress deterioration of the effects of filter processing due to local control of filter processing when encoding or when decoding.
Background Art
[0002] In recent years, devices compliant with formats such as MPEG (Moving Picture Experts Group) or the like, which handle image information as digital signals and compress the image by orthogonal transform such as discrete cosine transform or the like and by motion compensation, taking advantage of redundancy peculiar to image information in order to perform highly efficient transmission and storage of information, have come into widespread use in both information distribution such as broadcasting and information reception in ordinary households.
[0003] In particular, MPEG2 (ISO (International Organization for Standardization) / IEC (International Electrotechnical Commission) 13818-2) is defined as a general purpose image encoding format, and is a standard encompassing both interlaced scanning images and sequential scanning images, as well as standard resolution images and high definition images. MPEG2 is now widely employed in a broad range of applications for professional and consumer use. Using the MPEG2 compression format, a code amount (bit rate) of 4 to 8 Mbps is allocated in the case of a standard resolution interlaced scanning image having 720 x 480 pixels, for example. Also, using the MPEG2 compression format, a code amount (bit rate) of 18 to 22 Mbps is allocated in the case of a high resolution interlaced scanning image having 1920 x 1088 pixels, for example, whereby a high compression rate and excellent image quality can be realized.
[0004] With MPEG2, high image quality encoding adapted for broadcast use is primarily taken as the object, but code amounts (bit rates) lower than that of MPEG1, i.e., encoding formats with higher compression rates, are not handled. With the spread of personal digital assistants, it has been expected that needs for such encoding formats will increase from now on, and in response to this, standardization of the MPEG4 encoding format has been performed. With regard to the image encoding format, its specification was confirmed as an international standard as ISO/IEC 14496-2 in December 1998.
[0005] Additionally, in recent years, standardization of a standard called H.26L (ITU-T (ITU Telecommunication Standardization Sector) Q6/16 VCEG (Video Coding Experts Group)) has progressed, originally intended for image encoding for videoconferencing use. With H.26L, it is known that, as compared to conventional encoding formats such as MPEG2 or MPEG4, although a greater amount of computation is required for encoding and decoding, higher encoding efficiency is realized. Also, currently, as part of the MPEG4 activities, standardization taking this H.26L as a base and also taking in functions not supported by H.26L, to realize higher encoding efficiency, has been performed as the Joint Model of Enhanced-Compression Video Coding. On the standardization schedule, this became an international standard in March 2003 under the names of H.264 and MPEG-4 Part 10 (AVC (Advanced Video Coding)).
[0006] In addition, there is an adaptive loop filter (ALF (Adaptive Loop Filter)) as a next generation video encoding technology which is being considered of recent (see NPL 1, for example). According to this adaptive loop filter, optimal filter processing is performed on each frame, and block noise which was not completely removed at the deblocking filter, and noise due to quantization, can be reduced.
[0007] However, images generally have various features locally, so the optimal filter coefficients differ locally. With the method in NPL 1, the same filter coefficient is applied to all pixels within a frame, so while the image quality of the frame as a whole improves, there has been concern that it may deteriorate locally.
[0008] Accordingly, not performing filter processing in regions which deteriorate locally has been conceived (see NPL 2 and NPL 3, for example). In this case, the image encoding device places multiple control blocks in a matrix without gaps, as if paving the regions of the image, and controls whether or not to perform filter processing on the image for each control block. The image encoding device sets flag information for each block, and performs adaptive filter processing according to the flag information. In the same way, the image decoding device also performs adaptive filter processing according to the flag information.
Citation List
Non-Patent Literature
[0009] NPL 1: Yi-Jen Chiu and L. Xu, "Adaptive (Wiener) Filter for Video Compression," ITU-T SG16 Contribution, C437, Geneva, April 2008.
NPL 2: Takeshi Chujoh, et al., "Block-based Adaptive Loop Filter," ITU-T SG16 Q6 VCEG Contribution, AI18, Germany, July 2008.
NPL 3: T. Chujoh, N. Wada and G. Yasuda, "Quadtree-based Adaptive Loop Filter," ITU-T SG16 Q6 VCEG Contribution, VCEG-AK22 (r1), Japan, April 2009.
Summary of the Invention
Technical Problem
[0010] However, there is a method in which one frame is divided into multiple slices, and encoding processing and decoding processing of the image are performed for each such slice (multi-slice). NPL 2 and NPL 3 make no mention of the processing of pixels near the borders of slices in such a multi-slice case, and how this should be processed has been unclear.
[0011] The present invention has been proposed in light of this situation, and it is an object thereof to suppress deterioration of the effects of filter processing due to local control of filter processing when encoding or when decoding.
Solution to Problem
[0012] An aspect of the present invention is an image processing device including: determination means configured to determine whether or not pixels of a slice neighboring the slice in which a pixel to be processed is included are included in the pixels surrounding the pixel to be processed by filter processing performed locally on an image; selection means configured to select, from a plurality of methods, a method of the filter processing to be performed on the pixel to be processed, based on a border control flag, in the event that determination has been made by the determination means that a pixel of the neighboring slice is included in the surrounding pixels; and filter processing means configured to perform the filter processing on the pixel to be processed with the method selected by the selection means.
[0013] The selection means can select one of a method of performing the filter processing on the pixel to be processed after the surrounding pixels located in the neighboring slice have been obtained, and a method of performing the filter processing on the pixel to be processed by generating padding data for the surrounding pixels located in the neighboring slice by duplicating surrounding pixels located in the slice including the pixel to be processed.
[0014] The selection means can select one of a method of performing the filter processing on the pixel to be processed after the surrounding pixels located in the neighboring slice have been obtained, and a method of omitting the filter processing on the pixel to be processed.
[0015] The image processing device can further include: generation means configured to generate the border control flag based on system specifications; with the selection means selecting the method of the filter processing for the pixel to be processed based on the border control flag generated by the generation means.
[0016] System specifications may include hardware features of the image processing device.
[0017] System specifications may include the purpose of using the image processing device.
[0018] The image processing device can further include: encoding means configured to encode the image and generate encoded data; with the encoding means further encoding the border control flag generated by the generation means, and adding it to the encoded data.
[0019] The image processing device can further include: decoding means configured to decode the encoded data of the image which has been encoded, and generate the image; with the decoding means further decoding the encoded border control flag which has been added to the encoded data; and the selection means selecting the method of the filter processing for the pixel to be processed, based on the border control flag decoded by the decoding means.
[0020] An aspect of the present invention is also an image processing method wherein determination means of an image processing device determine whether or not pixels of a slice neighboring the slice in which a pixel to be processed is included are included in the pixels surrounding the pixel to be processed by filter processing performed locally on an image, selection means of the image processing device select, from a plurality of methods, a method of the filter processing to be performed on the pixel to be processed, based on a border control flag, in the event that determination has been made that a pixel of the neighboring slice is included in the surrounding pixels, and filter processing means of the image processing device perform the filter processing on the pixel to be processed with the method that has been selected.
[0021] With an aspect of the present invention, determination is made regarding whether or not pixels of a slice neighboring the slice in which a pixel to be processed is included are included in the pixels surrounding the pixel to be processed by filter processing performed locally on an image, a method of the filter processing to be performed on the pixel to be processed is selected from a plurality of methods, based on a border control flag, in the event that determination has been made that a pixel of the neighboring slice is included in the surrounding pixels, and the filter processing is performed on the pixel to be processed with the method that has been selected.
Advantageous Effects of the Invention
[0022] According to the present invention, an image can be encoded or decoded. In particular, deterioration of the effects of filter processing due to local control of filter processing when encoding or when decoding can be suppressed. For example, deterioration of the effects of filter processing can be suppressed even in the case of encoding or decoding with each frame of the image divided into a plurality of slices.
Brief Description of the Drawings
[0023] [Fig. 1] Fig. 1 is a block diagram illustrating the configuration of an embodiment of an image encoding device to which the present invention has been applied.
[Fig. 2] Fig. 2 is a diagram describing motion prediction/compensation processing with variable block sizes.
[Fig. 3] Fig. 3 is a block diagram illustrating a primary example of the configuration of a control information generation unit.
[Fig. 4] Fig. 4 is a diagram describing ALF blocks and filter block flags.
[Fig. 5] Fig. 5 is a diagram describing an example of multi-slice.
[Fig. 6] Fig. 6 is a diagram describing surrounding pixels used for filter processing.
[Fig. 7] Fig. 7 is a diagram describing the way in which filter processing is performed near a border.
[Fig. 8] Fig. 8 is a block diagram illustrating a primary example of the configuration of an adaptive filter processing unit.
[Fig. 9] Fig. 9 is a flowchart describing an example of the flow of encoding processing.
[Fig. 10] Fig. 10 is a flowchart describing an example of the flow of control information generation processing.
[Fig. 11] Fig. 11 is a flowchart describing an example of the flow of border control flag setting processing.
[Fig. 12] Fig. 12 is a flowchart describing an example of the flow of adaptive filter control processing.
[Fig. 13] Fig. 13 is a flowchart describing an example of the flow of filter processing.
[Fig. 14] Fig. 14 is a flowchart describing an example of the flow of filter processing.
[Fig. 15] Fig. 15 is a block diagram illustrating a primary example of the configuration of an image decoding device to which the present invention has been applied.
[Fig. 16] Fig. 16 is a flowchart describing an example of the flow of decoding processing.
[Fig. 17] Fig. 17 is a block diagram illustrating another example of the configuration of an image encoding device to which the present invention has been applied.
[Fig. 18] Fig. 18 is a block diagram illustrating another example of the configuration of an image decoding device to which the present invention has been applied.
[Fig. 19] Fig. 19 is a flowchart describing an example of the flow of processing for exchanging specification information.
[Fig. 20] Fig. 20 is a diagram describing another example of ALF blocks and filter block flags.
[Fig. 21] Fig. 21 is a diagram describing another example of ALF blocks and filter block flags.
[Fig. 22] Fig. 22 is a diagram describing the way in which processing is performed in the case of multi-slice.
[Fig. 23] Fig. 23 is a block diagram illustrating a primary example of the configuration of a personal computer to which the present invention has been applied.
[Fig. 24] Fig. 24 is a block diagram illustrating a primary example of the configuration of a television receiver to which the present invention has been applied.
[Fig. 25] Fig. 25 is a block diagram illustrating a primary example of the configuration of a cell phone to which the present invention has been applied.
[Fig. 26] Fig. 26 is a block diagram illustrating a primary example of the configuration of a hard disk recorder to which the present invention has been applied.
[Fig. 27] Fig. 27 is a block diagram illustrating a primary example of the configuration of a camera to which the present invention has been applied.
[Fig. 28] Fig. 28 is a diagram illustrating an example of macroblocks.
Description of Embodiments
[0024] Hereinafter, embodiments of the present invention will be described. Note that description will proceed in the following order.
1. First Embodiment (image encoding device)
2. Second Embodiment (image decoding device)
3. Third Embodiment (image encoding/decoding system)
4. Fourth Embodiment (QALF)
5. Fifth Embodiment (personal computer)
6. Sixth Embodiment (television receiver)
7. Seventh Embodiment (cell phone)
8. Eighth Embodiment (hard disk recorder)
9. Ninth Embodiment (camera)
[0025] Fig. 1 represents a configuration of an embodiment of an image encoding device serving as an image processing device to which the present invention has been applied.
[0026] An image encoding device 100 shown in Fig. 1 is an image encoding device which subjects an image to compression encoding using, for example, the H.264 and MPEG-4 Part 10 (Advanced Video Coding) format (hereinafter written as H.264/AVC), and further employs an adaptive loop filter.
[0027] With the example in Fig. 1, the image encoding device 100 has an A/D (Analog/Digital) conversion unit 101, a screen rearrangement buffer 102, a computing unit 103, an orthogonal transform unit 104, a quantization unit 105, a lossless encoding unit 106, and a temporary storage area 107. The image encoding device 100 also has an inverse quantization unit 108, an inverse orthogonal transform unit 109, a computing unit 110, and a deblocking filter 111. Further, the image encoding device 100 has a control information generation unit 112, an adaptive filter processing unit 113, and frame memory 114. Also, the image encoding device 100 has an intra prediction unit 115, a motion compensation unit 116, a motion prediction unit 117, and a prediction image selection unit 118. Further, the image encoding device 100 has a rate control unit 119.
[0028] The A/D conversion unit 101 performs A/D conversion of an input image, and outputs it to the screen rearrangement buffer 102 to be stored. The screen rearrangement buffer 102 rearranges the images of frames in the stored display order into the order of frames for encoding, according to the GOP (Group of Pictures). The computing unit 103 subtracts, from the image read out from the screen rearrangement buffer 102, the prediction image from the intra prediction unit 115 or the prediction image from the motion compensation unit 116, selected by the prediction image selection unit 118, and outputs the difference information thereof to the orthogonal transform unit 104. The orthogonal transform unit 104 subjects the difference information from the computing unit 103 to orthogonal transform such as discrete cosine transform, Karhunen-Loève transform, or the like, and outputs the transform coefficient thereof. The quantization unit 105 quantizes the transform coefficient which the orthogonal transform unit 104 outputs.
[0029] The quantized transform coefficient, which is the output of the quantization unit 105, is input to the lossless encoding unit 106, where it is subjected to lossless encoding such as variable length coding, arithmetic coding, or the like, and compressed.
[0030] The lossless encoding unit 106 obtains information indicating intra prediction and so forth from the intra prediction unit 115, and obtains information indicating an inter prediction mode and so forth from the motion prediction unit 117. Note that the information indicating intra prediction will also be referred to as intra prediction mode information hereinafter. Also, the information indicating inter prediction will also be referred to as inter prediction mode information hereinafter.
[0031] The lossless encoding unit 106 obtains the control information of the adaptive filter processing performed at the adaptive filter processing unit 113 from the control information generation unit 112.
[0032] The lossless encoding unit 106 encodes the quantized transform coefficient, and also encodes the control information of the adaptive filter processing, the information indicating intra prediction, the information indicating an inter prediction mode, the quantization parameters, and so forth, and takes these as part of the header information in the compressed image (multiplexes these). The lossless encoding unit 106 supplies the encoded data to the temporary storage area 107 for storage.
[0033] For example, with the lossless encoding unit 106, lossless encoding processing such as variable length coding, arithmetic coding, or the like is performed. Examples of the variable length coding include CAVLC (Context-Adaptive Variable Length Coding) determined by the H.264/AVC format. Examples of the arithmetic coding include CABAC (Context-Adaptive Binary Arithmetic Coding).
[0034] The temporary storage area 107 temporarily holds the data supplied from the lossless encoding unit 106, and at a predetermined timing outputs it, as a compressed image encoded by the H.264/AVC format, to a downstream storage device or transmission path or the like not shown in the drawing, for example.
[0035] Also, the transform coefficient quantized at the quantization unit 105 is also input to the inverse quantization unit 108. The inverse quantization unit 108 performs inverse quantization of the quantized transform coefficient with a method corresponding to the quantization at the quantization unit 105, and supplies the obtained transform coefficient to the inverse orthogonal transform unit 109.
[0036] The inverse orthogonal transform unit 109 performs inverse orthogonal transform of the supplied transform coefficient with a method corresponding to the orthogonal transform processing by the orthogonal transform unit 104. The output subjected to the inverse orthogonal transform is supplied to the computing unit 110. The computing unit 110 adds the prediction image supplied from the prediction image selection unit 118 to the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 109, i.e., the restored difference information, and obtains a locally decoded image (decoded image). The addition results are supplied to the deblocking filter 111.
[0037] The deblocking filter 111 removes block noise from the decoded image. The deblocking filter 111 then supplies the noise removal results to the control information generation unit 112 and the adaptive filter processing unit 113.
[0038] The control information generation unit 112 obtains the decoded image supplied from the deblocking filter 111 and the current input image read out from the screen rearrangement buffer 102, and generates from these the control information for the adaptive filter to be performed at the adaptive filter processing unit 113. While details will be described later, the control information includes filter coefficients, block size, filter block flags, border control flags, and the like.
[0039] The control information generation unit 112 supplies the generated control information to the adaptive filter processing unit 113. The control information generation unit 112 also supplies the generated control information to the lossless encoding unit 106. As described above, the control information is subjected to lossless compression processing by the lossless encoding unit 106, and included in the image compression information (multiplexed). That is to say, the control information is sent to the image decoding device along with the image compression information.
[0040] The adaptive filter processing unit 113 performs filter processing on the decoded image supplied from the deblocking filter 111, using the filter coefficients, block size specification, filter block flags, and the like of the control information supplied from the control information generation unit 112. A Wiener filter, for example, is used as this filter. Of course, a filter other than a Wiener filter may be used. The adaptive filter processing unit 113 supplies the filter processing results to the frame memory 114, which stores them as a reference image.
[0041] The frame memory 114 outputs the stored reference image to the motion compensation unit 116 and the motion prediction unit 117 at a predetermined timing.
[0042] With this image encoding device 100, the I picture, B picture, and P picture from the screen rearrangement buffer 102 are supplied to the intra prediction unit 115 as images to be subjected to intra prediction (also called intra processing), for example. Also, the B picture and P picture read out from the screen rearrangement buffer 102 are supplied to the motion prediction unit 117 as images to be subjected to inter prediction (also called inter processing).
[0043] The intra prediction unit 115 performs intra prediction processing of all candidate intra prediction modes based on the image to be subjected to intra prediction read out from the screen rearrangement buffer 102 and the reference image supplied from the frame memory 114, to generate a prediction image.
[0044] With the intra prediction unit 115, information relating to the intra prediction mode applied to the current block/macroblock is transmitted to the lossless encoding unit 106, and is encoded as part of the header information in the image compression information. With the H.264 image encoding format, the intra 4 x 4 prediction mode, intra 8 x 8 prediction mode, and intra 16 x 16 prediction mode are defined for luminance signals, and also, with regard to color difference signals, a prediction mode can be defined for each macroblock, independently of the luminance signals. For the intra 4 x 4 prediction mode, one intra prediction mode is defined for each 4 x 4 luminance block. For the intra 8 x 8 prediction mode, one intra prediction mode is defined for each 8 x 8 luminance block. For the intra 16 x 16 prediction mode and the color difference signals, one prediction mode is defined for each macroblock.
[0045] The intra prediction unit 115 calculates a cost function value for each intra prediction mode for which a prediction image has been generated, and selects the intra prediction mode for which the calculated cost function value gives the minimum value as the optimal intra prediction mode. The intra prediction unit 115 supplies the prediction image generated in the optimal intra prediction mode to the prediction image selection unit 118.
[0046] With regard to the image to be subjected to inter coding, the motion prediction unit 117 obtains the image information supplied from the screen rearrangement buffer 102 (input image) and the image information serving as the reference frame supplied from the frame memory 114 (decoded image), and calculates a motion vector. The motion prediction unit 117 supplies motion vector information indicating the calculated motion vector to the lossless encoding unit 106. This motion vector information is subjected to lossless compression processing by the lossless encoding unit 106, and included in the image compression information. That is to say, the motion vector information is sent to the image decoding device along with the image compression information.
[0047] Also, the motion prediction unit 117 supplies the motion vector information to the motion compensation unit 116 as well.
[0048] The motion compensation unit 116 performs motion compensation processing according to the motion vector information supplied from the motion prediction unit 117, and generates inter prediction image information. The motion compensation unit 116 supplies the generated prediction image information to the prediction image selection unit 118.
[0049] In the case of an image to be subjected to intra coding, the prediction image selection unit 118 supplies the output of the intra prediction unit 115 to the computing unit 103, and in the case of an image to be subjected to inter coding, supplies the output of the motion compensation unit 116 to the computing unit 103.
[0050] The rate control unit 119 controls the rate of the quantization operations of the quantization unit 105 based on the compressed images stored in the temporary storage area 107, such that overflow or underflow does not occur.
[0051] With MPEG (Moving Picture Experts Group) 2, the increments of motion prediction/compensation processing are motion compensation blocks, and independent motion vector information can be held for each motion compensation block. The size of a motion compensation block is 16 x 16 pixels in the case of the frame motion compensation mode, and 16 x 8 pixels for each of the first field and the second field in the case of the field motion compensation mode.
[0052] On the other hand, with AVC (Advanced Video Coding), one macroblock configured of 16 x 16 pixels, as shown at the top in Fig. 2, can be divided into partitions of any of 16 x 16, 16 x 8, 8 x 16, or 8 x 8, with each holding independent motion vector information. Further, as shown at the bottom in Fig. 2, an 8 x 8 partition can be divided into sub-partitions of any of 8 x 8, 8 x 4, 4 x 8, or 4 x 4, with each holding independent motion vector information. Motion prediction/compensation processing is performed with such a motion compensation block as an increment, as illustrated in the sketch below.
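The following is a minimal sketch, not part of the device described here, illustrating the partition sizes just listed and how many independent motion vectors one macroblock can hold in each case (the helper name is a hypothetical illustration):

```python
# Sketch: AVC motion compensation partitions (illustrative only).
# A 16x16 macroblock may be split into 16x16, 16x8, 8x16, or 8x8 partitions;
# each 8x8 partition may further be split into 8x8, 8x4, 4x8, or 4x4.
MB_PARTITIONS = [(16, 16), (16, 8), (8, 16), (8, 8)]
SUB_PARTITIONS = [(8, 8), (8, 4), (4, 8), (4, 4)]

def motion_vectors_per_macroblock(partition, sub_partition=None):
    """Count the independent motion vectors one macroblock holds."""
    n = (16 // partition[0]) * (16 // partition[1])
    if partition == (8, 8) and sub_partition is not None:
        n *= (8 // sub_partition[0]) * (8 // sub_partition[1])
    return n

# Worst case: four 8x8 partitions, each split into 4x4 sub-partitions.
assert motion_vectors_per_macroblock((8, 8), (4, 4)) == 16
```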
[0053] Fig. 3 is a block diagram illustrating a primary example of the configuration of the control information generation unit 112.
[0054] The control information generation unit 112 generates control information used at the adaptive filter (ALF (Adaptive Loop Filter)), which is a loop filter, performed at the adaptive filter processing unit 113. The control information generation unit 112 generates, as the control information, filter coefficients, ALF block size, filter block flags, and border control flags, for example.
[0055] The control information generation unit 112 has a filter coefficient calculation unit 131, a border control flag generation unit 132, and a block information generation unit 133.
[0056] The filter coefficient calculation unit 131 obtains the decoded image supplied from the deblocking filter 111 and the current input image read out from the screen rearrangement buffer 102, and calculates an ALF filter coefficient for each frame.
[0057] The border control flag generation unit 132 generates a border control flag (alf_enable_in_slice_boundary) which controls how filter processing is performed for pixels near the borders of the slices, a plurality of which are formed in the frame (i.e., it specifies the filter processing method). Details will be described later.
[0058] The block information generation unit 133 determines the ALF block size based on the decoded image supplied from the deblocking filter 111 and the filter coefficients calculated by the filter coefficient calculation unit 131, and generates a filter block flag for each ALF block within the slice to be processed.
[0059] Now, description will be made with respect to the ALF block and filter block flag. Fig. 4 is a diagram for describing ALF blocks and filter block flags.
[0060] As described above, the adaptive filter has filter coefficients set for each frame. That is to say, optimal filter processing is performed in increments of frames. However, frame images are generally not uniform, and have various features locally. Therefore, the optimal filter coefficients differ locally. Accordingly, while filter processing using filter coefficients determined for each frame as described above improves the image quality of the frame as a whole, there has been concern that it may in fact deteriorate locally.
[0061] Accordingly, BALF (Block-based Adaptive Loop Filter), in which filter processing is not performed in regions where image quality deteriorates locally, has been conceived.
[0062] A frame 151 in A in Fig. 4 shows the decoded image following the deblocking filter processing. As shown in B in Fig. 4, the block information generation unit 133 places multiple ALF blocks 152, which are control blocks serving as the control increments for the locally performed adaptive filter processing, in a matrix without gaps, as if paving the entire region of the frame 151. The region where the ALF blocks 152 are placed does not have to be the same as the region of the frame 151, but includes at least the entire region of the frame. The region of the frame 151 is, as a result, divided into the regions of the ALF blocks 152 (multiple regions).
[0063] The block information generation unit 133 determines the size in the horizontal direction (double-headed arrow 153) and the size in the vertical direction (double-headed arrow 154) of the ALF blocks 152. For the size of the ALF blocks, one of 8 x 8, 16 x 16, 24 x 24, 32 x 32, 48 x 48, 64 x 64, 96 x 96, or 128 x 128 can be specified for each slice. The information specifying the ALF block size will be called the block size index.
[0064] Once the block size is decided, the number of ALF blocks per frame is also decided, since the frame size is fixed.
[0065] As shown in C in Fig. 4, the block information generation unit 133 sets a filter block flag 155 which controls whether or not to perform filter processing in each ALF block 152. For example, a filter block flag 155 with a value of "1" is generated for a region where image quality is improved by the adaptive filter, and a filter block flag 155 with a value of "0" is generated for a region where image quality is deteriorated by the adaptive filter. With the filter block flag 155, the value "1" indicates that filter processing is to be performed, and the value "0" indicates that filter processing is not to be performed.
[0066] The adaptive filter processing unit 113 controls the adaptive filter processing based on the values of the filter block flags 155. For example, the adaptive filter processing unit 113 performs filter processing only in the regions of the ALF blocks 152 where the filter block flag 155 has a value of "1", and does not perform filter processing in the regions of the ALF blocks 152 where the filter block flag 155 has a value of "0".
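As a rough illustration of this per-block control, here is a minimal sketch assuming grayscale images held as NumPy arrays and flags listed in raster scan order; the function name is a hypothetical illustration, not the device's actual implementation:

```python
import numpy as np

def apply_alf_with_block_flags(decoded, filtered, block_size, flags):
    """Keep the adaptively filtered pixels where the filter block flag is 1,
    and the unfiltered decoded pixels where it is 0."""
    out = decoded.copy()
    h, w = decoded.shape
    n_blocks_x = (w + block_size - 1) // block_size  # blocks per row
    for idx, flag in enumerate(flags):
        by, bx = divmod(idx, n_blocks_x)
        y0, x0 = by * block_size, bx * block_size
        if flag == 1:
            out[y0:y0 + block_size, x0:x0 + block_size] = \
                filtered[y0:y0 + block_size, x0:x0 + block_size]
    return out
```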
[0067] Further, the block size index and filter block flags described above are included in the slice header of the image compression information, and sent from the image encoding device 100 to the image decoding device. The one or more filter block flags corresponding to the number of ALF blocks are included in the slice header in raster scan order, for example.
[0068] Accordingly, the smaller the size of the ALF blocks, the finer the filter control that can be performed, and the more appropriate the ALF filtering that can be performed. However, a smaller ALF block size increases the bit amount of the filter block flags. That is to say, the smaller the ALF block size, the more the encoding efficiency of the image compression information decreases. Thus, the capabilities of the adaptive filter and the encoding efficiency of the image compression information are in a trade-off relationship.
[0069] The number of ALF blocks is calculated with the following expression (1).
[Mathematical Expression 1]
NALFBLOCK = floor((16 x NMBw + NSIZE - 1) / NSIZE) x floor((16 x NMBh + NSIZE - 1) / NSIZE) ... (1)
[0070] In Expression (1), NALFBLOCK represents the number of ALF blocks. Also, NMBw represents the number of macroblocks in the horizontal direction of the picture, and NMBh represents the number of macroblocks in the vertical direction of the picture. Further, NSIZE represents the size of one side of an ALF block. Also, floor[x] is a function which rounds x off at the decimal so as to make an integer.
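As a check of Expression (1), here is a minimal sketch assuming 16 x 16 pixel macroblocks, as in the format described above (the function name is a hypothetical illustration):

```python
def num_alf_blocks(n_mb_w, n_mb_h, n_size):
    """Expression (1): ALF blocks needed to pave a frame of n_mb_w x n_mb_h
    macroblocks (16 pixels on a side) with n_size-sided ALF blocks."""
    blocks_w = (16 * n_mb_w + n_size - 1) // n_size  # floor-based ceiling
    blocks_h = (16 * n_mb_h + n_size - 1) // n_size
    return blocks_w * blocks_h

# A 1920 x 1088 frame is 120 x 68 macroblocks; with 32 x 32 ALF blocks,
# this gives 60 x 34 = 2040 ALF blocks.
assert num_alf_blocks(120, 68, 32) == 2040
```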
[0071] Now, with H.264/AVC, one frame can be divided into multiple slices, and image compression information can be output for each slice. Fig. 5 is a diagram for describing an example of multi-slice. In the case of the example in Fig. 5, the frame 151 is divided into three slices: slice 0, slice 1, and slice 2.
[0072] By outputting image compression information in increments of slices, which are finer than frames, the image encoding device can generate and output image compression information at shorter intervals. That is to say, the image decoding device which decodes the image compression information can start decoding of the image compression information at an earlier stage. That is to say, the delay time from an image being input until encoding processing and decoding processing are performed and the image is output can be shortened.
[0073] NPL 2, which describes BALF, does not disclose this multi-slice. That is to say, only setting ALF blocks for the entire frame is described. In the case of multi-slice, however, there are cases where normal filter processing cannot be performed for pixels near the slice border.
[0074] Fig. 6 is a diagram illustrating the way in which filter processing is performed for pixels near a slice border. In the case of performing filter processing for a pixel to be processed, the adaptive filter processing unit 113 performs this using pixels within a predetermined range around that pixel (surrounding pixels). For example, in the case of Fig. 6, the adaptive filter processing unit 113 performs filter processing of a pixel to be processed 161 using 9 x 9 surrounding pixels 162 shown with hatching.
[0075] However, as shown in Fig. 6, the pixel to be processed 161 is a pixel near a slice border 163. Here, the slice border 163 indicates the border between the slice currently being processed (current slice) and the slice neighboring it (neighboring slice). That is to say, the slice border 163 indicates the outer frame of the current slice.
[0076] As shown in Fig. 6, the surrounding pixels 162 used for the filter processing of the pixel to be processed 161 near the slice border 163 partially cross the slice border 163 and extend into the region of the neighboring slice. That is to say, in order to perform filter processing of the pixel to be processed 161 in the same way as in a normal case, the pixel values of the neighboring slice are needed, as shown in A in Fig. 7, for example.
[0077] In the case of the example in A in Fig. 7, the adaptive filter processing unit 113 performs filter processing for the pixel EE, which is the pixel to be processed, using pixel AA through pixel JJ of both the current slice and the neighboring slice.
[0078] However, to do so, generation of the decoded image of the neighboring slice has to be waited for. Accordingly, in this case, there has been concern that the delay time of the encoding processing would increase.
[0079] On the other hand, there is a method of generating and using padding data, as shown in B in Fig. 7, for example. In the case of the example in B in Fig. 7, the adaptive filter processing unit 113 duplicates pixel EA through pixel EJ adjacent to the slice border 163, and thereby generates the pixels of the surrounding pixels 162 within the neighboring slice (padding data). The adaptive filter processing unit 113 performs filter processing for the pixel EE using the generated padding data.
[0080] Accordingly, the adaptive filter processing unit 113 does not need to wait for the pixels of the neighboring slice to be generated, and filter processing of the pixel EE can be performed at an earlier stage than in the case of A in Fig. 7.
[0081] That is to say, in the case of the method of A in Fig. 7 using the pixels of the neighboring slice, the adaptive filter processing unit 113 uses actual data, so filter processing better suited to the contents of the actual image can be performed. That is to say, great improvement of the image quality due to the filter processing can be expected.
[0082] On the other hand, in the case of the method of B in Fig. 7, the adaptive filter processing unit 113 does not need the data of the neighboring slice for the filter processing, and the processing can be performed with the data of the current slice alone, so the filter processing can be performed at an earlier stage.
[0083] Which method is desirable differs depending on the system specifications, user requests, and so forth. For example, if the system emphasizes image quality, the method shown in A in Fig. 7 is desirable; however, the method in A in Fig. 7 consumes a greater amount of memory than the method in B in Fig. 7, and there is concern that the delay time will increase. Accordingly, depending on the memory capacity usable with the system and the tolerable length of delay time, there may be cases where the method of B in Fig. 7 is more desirable.
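To make the two options concrete, here is a minimal sketch of both methods, assuming a square coefficient array and NumPy grayscale arrays; the function names are hypothetical illustrations, not the device's actual implementation:

```python
import numpy as np

def filter_pixel_cross_slice(frame, y, x, coeff):
    """Method of A in Fig. 7: use the true surrounding pixels, including
    those in the neighboring slice (its decoded pixels must be available).
    Assumes (y, x) lies at least coeff.shape[0] // 2 from the frame edges."""
    r = coeff.shape[0] // 2
    window = frame[y - r:y + r + 1, x - r:x + r + 1]
    return float(np.sum(window * coeff))

def filter_pixel_with_padding(current_slice, y, x, coeff):
    """Method of B in Fig. 7: duplicate the edge pixels of the current slice
    as padding data in place of the neighboring-slice pixels."""
    r = coeff.shape[0] // 2
    h, w = current_slice.shape
    ys = np.clip(np.arange(y - r, y + r + 1), 0, h - 1)  # clamp = duplicate
    xs = np.clip(np.arange(x - r, x + r + 1), 0, w - 1)
    window = current_slice[np.ix_(ys, xs)]
    return float(np.sum(window * coeff))
```

The first function trades delay and memory for fidelity; the second can run as soon as the current slice is decoded, which matches the trade-off described above.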
[0084] The border control flag controls the filter processing method for such pixels near a border.
[0085] Returning to Fig. 3, the border control flag generation unit 132 generates such border control flags. The border control flag generation unit 132 has a system specification management unit 141, a determination unit 142, and a generation unit 143.
[0086] The system specification management unit 141 manages the specifications (hardware resources, purpose of use, etc.) of the system performing the image processing, including the image encoding device 100. For example, the system specification management unit 141 may be arranged so as to manage, at the image encoding device 100, the specifications (hardware resources, purpose of use, etc.) of the image decoding device which will decode the encoded data.
[0087] The determination unit 142 determines whether or not the pixel to be processed is a pixel near a border. The generation unit 143 generates border control flags for pixels to be processed which have been determined to be pixels near a border.
[0088] Fig. 8 is a block diagram illustrating a primary example of configuring the adaptive filter processing unit 113 in Fig. 1.
[0089] The adaptive filter processing unit 113 performs filter processing on the decoded image supplied from the deblocking filter 111, using the control information supplied from the control information generation unit 112.
[0090] As shown in Fig. 8, the adaptive filter processing unit 113 has a control unit 171, an adaptive filter 172, and a selection unit 173.
[0091] The control unit 171 controls the adaptive filter 172 and the selection unit 173. For example, the control unit 171 obtains the control information from the control information generation unit 112, and controls the filter processing based on this control information.
[0092] The adaptive filter 172 performs filter processing on the region of the decoded image supplied from the deblocking filter 111 which is specified by the control unit 171 as the ALF block to be processed, using the filter coefficient set by the control unit 171.
[0093] The adaptive filter 172 has a temporary storage area 181, an adaptive slice filter 182, a first adaptive filter for border 183, and a second adaptive filter for border 184.
[0094] The temporary storage area 181 temporarily holds the decoded image supplied from the deblocking filter 111. The temporary storage area 181 can hold not only the slice to be processed, but also the slice neighboring the slice to be processed (neighboring slice).
[0095] The adaptive slice filter 182 performs filter processing, under the control of the control unit 171, on pixels to be processed which are not near the slice border and regarding which the surrounding pixels do not include pixels of the neighboring slice. That is to say, the adaptive slice filter 182 performs filter processing using only pixels of the current slice.
[0096] The first adaptive filter for border 183 performs filter processing crossing slices, under the control of the control unit 171, on pixels to be processed which are near the slice border and regarding which the surrounding pixels include pixels of the neighboring slice. That is to say, the first adaptive filter for border 183 performs filter processing using pixels of the current slice and of the neighboring slice, with a method such as shown in A in Fig. 7. Accordingly, the first adaptive filter for border 183 starts filter processing after pixels of the neighboring slice have been accumulated in the temporary storage area 181.
[0097] The second adaptive filter for border 184 performs filter processing closed at the current slice, under the control of the control unit 171, on pixels to be processed which are near the slice border and regarding which the surrounding pixels include pixels of the neighboring slice. That is to say, the second adaptive filter for border 184 performs filter processing using pixels of the current slice alone, generating padding data as necessary, with a method such as shown in B in Fig. 7. Accordingly, the second adaptive filter for border 184 starts filter processing once pixels of the current slice have been accumulated in the temporary storage area 181.
[0098] The control unit 171 selects one of the adaptive slice filter 182, the first adaptive filter for border 183, and the second adaptive filter for border 184, according to the position of the pixel to be processed and the value of the border control flag included in the control information, and causes the selected processing unit to perform the filter processing with its own method.
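A minimal sketch of this selection logic follows, with hypothetical argument names standing in for what the control unit derives from the pixel position and the control information:

```python
def select_filter(near_border, neighbor_pixels_in_window, border_control_flag):
    """Mirror the control unit's choice among the three filters for one
    pixel to be processed."""
    if not (near_border and neighbor_pixels_in_window):
        return "adaptive slice filter 182"               # current slice only
    if border_control_flag == 1:
        return "first adaptive filter for border 183"    # cross slices
    return "second adaptive filter for border 184"       # closed, use padding
```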
[0099] Further, the control unit 171 controls the filter processing timing of the selected processing unit (the adaptive slice filter 182, first adaptive filter for border 183, or second adaptive filter for border 184), according to the accumulation state of the image in the temporary storage area 181.
[0100] The adaptive filter 172 (the adaptive slice filter 182, first adaptive filter for border 183, or second adaptive filter for border 184) supplies the filter processing results to the selection unit 173.
[0101] Under the control of the control unit 171, the selection unit 173 selects one of the decoded image supplied from the deblocking filter 111 (decoded image not subjected to adaptive filter processing) and the decoded image supplied from the adaptive filter 172 (decoded image subjected to adaptive filter processing), supplies it to the frame memory 114, and stores it as a reference image.
[0102] The control unit 171 controls the selection unit 173 according to the value of the filter block flag included in the control information, so as to select one of the decoded image not subjected to adaptive filter processing and the decoded image subjected to adaptive filter processing.
[0103] That is to say, the adaptive filter processing unit 113 performs filter processing only on the regions of the decoded image supplied from the deblocking filter 111 regarding which instruction has been made by the filter block flag to perform filter processing (regions regarding which it has been determined that image quality will be improved by the filter processing).
[Processing Flow]
[0104] Next, the flow of processing using the portions configured as described above will be described. First, an example of the flow of the encoding processing performed by the image encoding device 100 will be described with reference to the flowchart in Fig. 9.
[0105] In step S101, the A/D conversion unit 101 converts an input image from analog to digital. In step S102, the screen rearrangement buffer 102 stores the A/D converted image, and rearranges it from the display order of the pictures into the order for encoding.
[0106] In step S103, the computing unit 103 computes the difference between the image rearranged by the processing in step S102 and the prediction image. The prediction image is supplied to the computing unit 103, via the prediction image selection unit 118, from the motion compensation unit 116 in the case of performing inter prediction, and from the intra prediction unit 115 in the case of performing intra prediction.
[0107] The difference data is smaller in data amount as compared to the original image data. Accordingly, the data amount can be compressed as compared to the case of encoding the original image without change.
[0108] In step S104, the orthogonal transform unit 104 subjects the difference information generated by the processing in step S103 to orthogonal transform. Specifically, orthogonal transform such as discrete cosine transform, Karhunen-Loève transform, or the like is performed, and a transform coefficient is output. In step S105, the quantization unit 105 quantizes the transform coefficient. At the time of this quantization, the rate is controlled, as described in the processing of step S119 described later.
[0109] The difference information quantized in this way is locally decoded as follows. Specifically, in step S106, the inverse quantization unit 108 subjects the transform coefficient quantized by the quantization unit 105 to inverse quantization with a property corresponding to the property of the quantization unit 105. In step S107, the inverse orthogonal transform unit 109 subjects the transform coefficient subjected to inverse quantization by the inverse quantization unit 108 to inverse orthogonal transform with a property corresponding to the property of the orthogonal transform unit 104.
[0110] In step S108, the computing unit 110 adds the prediction image input via the prediction image selection unit 118 to the locally decoded difference information, and generates a locally decoded image (the image corresponding to the input to the computing unit 103). In step S109, the deblocking filter 111 subjects the image output from the computing unit 110 to filtering. Block noise is thereby removed.
[0111] Upon the above processing being performed for one slice, in step S110 the control information generation unit 112 generates control information to be used for the adaptive filter processing. The details of the control information generation processing will be described later.
[0112] Upon control information such as the filter coefficients, ALF block size, filter block flags, and the like being generated by the processing in step S110, in step S111 the adaptive filter processing unit 113 performs adaptive filter processing on the decoded image subjected to the deblocking filter processing in the processing of step S109. Details of this adaptive filter processing will be described later.
[0113] In step S112, frame memory 114 stores the image subjected to adaptive filter processing in step S111.
[0114] In step S113, the intra prediction unit 115 performs intra prediction processing in the intra prediction mode. In step S114, the motion prediction unit 117 and the motion compensation unit 116 perform inter motion prediction/compensation processing in the inter prediction mode.
[0115] In step S115, the prediction image selection unit 118 selects one of the prediction image generated by the intra prediction processing and the prediction image generated by the inter motion prediction/compensation processing, according to the prediction mode of the frame to be processed. The prediction image selection unit 118 supplies the selected prediction image to the computing units 103 and 110. This prediction image is, as described above, used for the calculations in steps S103 and S108.
[0116] In step S116, the lossless encoding unit 106 encodes the quantized transform coefficient output from the quantization unit 105. Specifically, the difference image is subjected to lossless encoding such as variable length coding, arithmetic coding, or the like, and compressed. At this time, the lossless encoding unit 106 also encodes the control information generated in step S110, the intra prediction mode information of the intra prediction processing in step S113, the inter prediction mode of the inter motion prediction/compensation processing in step S114, and so forth.
[0117] In step S117, the lossless encoding unit 106 embeds (describes) the metadata, such as the encoded control information and so forth, in the slice header. This metadata is read out and used when performing image decoding. By including (multiplexing) the metadata necessary for the decoding processing in this way, decoding processing can be performed in increments finer than frame increments, and increase in delay time can be suppressed.
[0118] In step S118, the temporary storage area 107 stores the difference image as a compressed image. The compressed image stored in the temporary storage area 107 is read out as appropriate and transmitted to the decoding side via the transmission path.
[0119] In step S119, the rate control unit 119 controls the rate of the quantization operation of the quantization unit 105 based on the compressed images stored in the temporary storage area 107, such that overflow or underflow does not occur.
[0120] Next, an example of the flow of the control information generation processing performed by the control information generation unit 112 in step S110 in Fig. 9 will be described with reference to the flowchart in Fig. 10.
[0121] Upon the control information generation processing being started, in step S131 the filter coefficient calculation unit 131 of the control information generation unit 112 calculates a filter coefficient using the input image supplied from the screen rearrangement buffer 102 and the decoded image subjected to the deblocking filter processing supplied from the deblocking filter 111. For example, the filter coefficient calculation unit 131 determines the value of the filter coefficient such that the residual between the input image and the decoded image is smallest.
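As one way such a minimum-residual coefficient could be obtained, here is a minimal least-squares sketch (a brute-force illustration under assumed names and tap size, not the device's actual computation):

```python
import numpy as np

def calc_filter_coeff(decoded, original, tap=9):
    """Least-squares estimate of a tap x tap filter minimizing the residual
    between the filtered decoded image and the original input image."""
    r = tap // 2
    rows, targets = [], []
    h, w = decoded.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            rows.append(decoded[y - r:y + r + 1, x - r:x + r + 1].ravel())
            targets.append(original[y, x])
    coeff, *_ = np.linalg.lstsq(np.asarray(rows, dtype=float),
                                np.asarray(targets, dtype=float), rcond=None)
    return coeff.reshape(tap, tap)
```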
[0122] Upon the filter coefficient being calculated, in step S132 the border control flag generation unit 132 generates a border control flag for controlling the adaptive filter processing method for pixels near borders. Details will be described later.
[0123] Upon the border control flag being generated, in step S133 the block information generation unit 133 generates block information including the ALF block size and filter block flags. The ALF block size may be determined beforehand, or may be set as appropriate according to the contents of the image. In this case, the block information generation unit 133 calculates a cost value evaluating the filter processing results using a cost function, and determines the ALF block size such that the cost value is smallest, for example.
[0124] Further, the block information generation unit 133 determines the value of the filter block flag according to whether or not image quality is improved in the case of filter processing being applied to the ALF block to be processed. For example, in the case of determining that image quality is improved by applying filter processing, the block information generation unit 133 sets the value of the filter block flag to "1", which indicates that filter processing is to be performed, and in the case of determining that image quality is deteriorated by applying filter processing, sets the value of the filter block flag to "0", which indicates that filter processing is not to be performed.
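A minimal sketch of one plausible per-block decision, assuming the squared error against the original image is used as the quality measure (an assumption; the measure is not fixed here):

```python
import numpy as np

def decide_filter_block_flags(original, decoded, filtered, block_size):
    """Set each filter block flag to 1 only where filtering lowers the
    error against the original image, and to 0 otherwise."""
    flags = []
    h, w = original.shape
    for y0 in range(0, h, block_size):
        for x0 in range(0, w, block_size):
            sl = np.s_[y0:y0 + block_size, x0:x0 + block_size]
            # cast to float so integer image types cannot wrap on subtraction
            err_off = np.sum((original[sl].astype(float)
                              - decoded[sl].astype(float)) ** 2)
            err_on = np.sum((original[sl].astype(float)
                             - filtered[sl].astype(float)) ** 2)
            flags.append(1 if err_on < err_off else 0)
    return flags  # raster scan order, as stored in the slice header
```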
[0125] Upon the block information being generated, the flow returns to step S110 in Fig. 9, and the processing of step S111 and on is performed.
[0126] Note that the calculation of the filter coefficient performed in step S131 may be performed in increments of frames. In this case, the processing in step S131 may be performed only on a predetermined slice within the frame (e.g., a slice whose identification number within the frame is a predetermined value (e.g., "0"), or a slice processed first within the frame, or the like), with this value used for the other slices. Also, an arbitrary image can be used for calculating the filter coefficients. For example, calculation may be performed based on past frame images.
[0127] Next, an example of the flow of the border control flag setting processing performed in step S132 in Fig. 10 will be described with reference to the flowchart in Fig. 11.
[0128] Upon the border control flag setting processing being started, in step S151 the system specification management unit 141 of the border control flag generation unit 132 obtains system specification information.
[0129] This system specification information is information including, for example, the hardware resources and purpose of use and so forth of the system including the image encoding device 100. The hardware resources are the hardware resources of the devices configuring the system (including the image encoding device 100), and include, for example, processing capability, usable memory capacity, bus transmission speed, and so forth. Also, the purpose of use is the operation mode of the system as a whole or of the individual devices, and includes, for example, whether to operate with emphasis on image quality, whether to operate with emphasis on speed, and so forth. Of course, information other than these may be included in the system specification information.
[0130] This system specification information may be stored beforehand in memory or the like built into the system specification management unit 141. In this case, the system specification management unit 141 reads out the system specification information from the memory in the processing of step S151. Also, at the time of the processing in step S151, the system specification management unit 141 may collect specification information such as described above from the parts of the image encoding device 100 and from other devices and so forth.
[0131] When obtaining the system specification information, the system specification management unit 141 supplies it to the determination unit 142.
[0132] In step S152, the determination unit 142 determines whether or not to use the neighboring slice for the filter processing near borders, based on the supplied system specification information (hardware resources, purpose of use, etc.). That is to say, in the case that a pixel near a border, with pixels of the neighboring slice included in the surrounding pixels, is the pixel to be processed, the determination unit 142 determines whether to perform filter processing crossing slices, or to perform filter processing closed at the current slice.
[0133] For example, in the event that increased delay time is tolerable, and there is sufficient memory capacity available on the image encoding device 100 and the image decoding device and the like, the determination unit 142 selects filter processing across slices . In addition, for example, in the event that an increase in the delay time is intolerable or there are not enough hardware resources in the system devices, the determination unit 142 selects closed filter processing on the current slice.
[0134] In the event that determination is made to use the neighboring slice, i.e., to perform the filter processing across slices, the flow proceeds to step S153. In step S153, the generation unit 143 generates a border control flag with a value of "1".
[0135] In addition, in the event that determination is made not to use the neighboring slice, i.e., to perform the filter processing closed in the current slice, the flow proceeds to step S154. In step S154, the generation unit 143 generates a border control flag with a value of "0".
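For illustration, the following is a minimal sketch, in Python, of the determination of steps S152 through S154. The specification fields and the concrete decision rule are assumptions for the sake of the example; the description above only states that hardware resources and the purpose of use are taken into account.

```python
# A minimal sketch of steps S152 through S154, assuming hypothetical
# specification fields ("purpose", "available_memory", "required_memory");
# the concrete rule below is illustrative only.

def set_border_control_flag(system_spec):
    delay_tolerable = system_spec.get("purpose") != "low_delay"
    enough_memory = (system_spec.get("available_memory", 0)
                     >= system_spec.get("required_memory", 0))
    if delay_tolerable and enough_memory:
        return 1  # step S153: perform filter processing across slices
    return 0      # step S154: perform filter processing closed in the slice

# Example: a system emphasizing low delay yields a flag value of "0".
assert set_border_control_flag({"purpose": "low_delay"}) == 0
```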
[0136] Upon generating the border control flag, the generation unit 143 supplies it to the adaptive filter processing unit 113 and the lossless coding unit 106. The lossless coding unit 106 encodes the border control flag supplied from the generation unit 143 as control information, and embeds it in the slice header or the like of the current slice. The adaptive filter processing unit 113 controls the adaptive filter processing using the border control flag supplied from the generation unit 143.
[0137] When the processing of step S153 or step S154 ends, the border control flag setting processing ends, the flow returns to step S132 in Fig. 10, and the processing of step S133 and on is performed.
[0138] Next, an example of the flow of the adaptive filter control processing performed in step S111 in Fig. 9 will be described with reference to the flowchart in Fig. 12.
[0139] When the adaptive filter control processing is started, in step S171 the temporary storage area 181 obtains the decoded image of the slice to be processed from the deblocking filter 111. When the slice to be processed has been obtained, in step S172 the control unit 171 identifies the region of that slice to be processed.
[0140] The region of the current slice to be processed can be found by knowing the macro blocks included in the current slice, and the pixels included in those macro blocks. The control unit 171 obtains the start macro block address of the current slice from the slice header.
[0141] Here, the macro block address is a number assigned to the macro blocks in raster scan order from the top left of the screen. As shown in Fig. 5, the macro block address at the upper left of the image (frame 151) is 0. Slice 0 starts from the upper left of the frame 151, so the macro block address of the start macro block 156-1 of slice 0 is 0. Following this order, the macro block address of the end macro block 156-2 of slice 0 is E0. In the same way as with this slice 0, the macro block address of the start macro block 157-1 of slice 1 is S1, and the macro block address of the end macro block 157-2 is E1. Likewise, the macro block address of the start macro block 158-1 of slice 2 is S2, and the macro block address of the end macro block 158-2 is E2.
[0142] As the current slice is decoded, one macro block address is added each time decoding processing of one macro block is completed, and eventually the end macro block of the current slice is reached. A flag indicating the end macro block of the slice is set at the end macro block. From these, all the macro block addresses that the current slice holds can be known; that is, they run from the start macro block address to the end macro block address.
[0143] Now, in the sequence parameter set (SPS (Sequence Parameter Set)) of the AVC stream (image compression information), the image size of a frame is indicated by numbers of macro blocks: pic_height_in_map_units_minus1 indicates the number of macro blocks in the vertical direction of the image minus one, and pic_width_in_mbs_minus1 indicates the number of macro blocks in the horizontal direction of the image minus one.
[0144] Consequently, from the macro block address, the position of the macro block is expressed by the following Expression (2) and Expression (3):

mbx = macro block address % (pic_width_in_mbs_minus1 + 1) ... (2)
mby = floor [macro block address / (pic_width_in_mbs_minus1 + 1)] ... (3)
[0145] In Expression (2) and Expression (3), mbx indicates what number from the left the macro block is, and mby indicates what number from the top the macro block is. In addition, floor [z] indicates rounding z down to the nearest integer, and A % B indicates the remainder of dividing A by B.
[0146] Given that the macro block size is determined to be 16 x 16 pixels, the horizontal and vertical position of the pixel at the upper left of the macro block is (16 x mbx, 16 x mby), and the pixels included in the macro block are the pixels included in the range of 16 pixels downward and 16 pixels to the right from the upper left pixel position. Thus, all the pixels of the current slice can be known; that is to say, the region of the slice to be processed is identified.
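For illustration, the following is a minimal sketch, in Python, of Expression (2) and Expression (3) and of enumerating the macro blocks of the current slice. The function names are hypothetical, and pic_width_in_mbs stands for pic_width_in_mbs_minus1 + 1, i.e., the actual number of macro blocks per row.

```python
# A minimal sketch of Expressions (2) and (3) and of enumerating the pixels
# of the current slice; all names here are hypothetical.

MB_SIZE = 16  # the 16 x 16 pixel macro block size assumed in the text

def mb_position(mb_address, pic_width_in_mbs):
    """Column and row of a macro block from its raster-scan address."""
    mbx = mb_address % pic_width_in_mbs    # Expression (2)
    mby = mb_address // pic_width_in_mbs   # Expression (3): floor division
    return mbx, mby

def slice_region(start_mb_address, end_mb_address, pic_width_in_mbs):
    """Top-left pixel coordinates of every macro block in the current slice."""
    for addr in range(start_mb_address, end_mb_address + 1):
        mbx, mby = mb_position(addr, pic_width_in_mbs)
        yield (MB_SIZE * mbx, MB_SIZE * mby)
```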
[0147] In step S173, the control unit 171 obtains the filter block flags generated at the control information generation unit 112.
[0148] In step S174, the control unit 171 determines one of the unprocessed ALF blocks to be the ALF block to be processed. The order of ALF block selection is determined in advance, and is in common with the selection order at the control information generation unit 112. Further, the control unit 171 identifies the region of the determined ALF block to be processed.
[0149] Since the image size of the frame is determined in advance, once the ALF block size is determined, the number of ALF blocks needed to tile the frame with ALF blocks, taking the upper left of the frame as the point of origin (i.e., the number of ALF blocks within the frame), can also be calculated. The setting values of the vertical size (number of pixels) and horizontal size (number of pixels) of the ALF blocks are provided in advance, so the control unit 171 determines the size of the ALF blocks and the number of ALF blocks following those setting values, and places the ALF blocks onto the decoded image.
[0150] Note that the number of ALF blocks is calculated by the following Expression (4) and Expression (5):

num_alf_block_x = floor [(16 x (pic_width_in_mbs_minus1 + 1) + (alf_block_size - 1)) / alf_block_size] ... (4)
num_alf_block_y = floor [(16 x (pic_height_in_map_units_minus1 + 1) + (alf_block_size - 1)) / alf_block_size] ... (5)
[0151] In Expression (4) and Expression (5), num_alf_block_x and num_alf_block_y are the numbers of ALF blocks included in the image in the horizontal and vertical directions, respectively. In addition, alf_block_size represents the size of one side of an ALF block. To simplify the description, we will say here that the ALF blocks are square. Of course, an arrangement may be made where the vertical size and the horizontal size of the ALF blocks differ from each other.
[0152] The position of the i-th ALF block is expressed by the following Expression (6) and Expression (7):

alf_block_x = i % num_alf_block_x ... (6)
alf_block_y = floor [i / num_alf_block_x] ... (7)
[0153] In Expression (6) and Expression (7), alf_block_x and alf_block_y indicate what number the i-th ALF block is in the horizontal and vertical directions, respectively. The position of the upper left pixel of the i-th ALF block is obtained by multiplying each of alf_block_x and alf_block_y by alf_block_size; that is, the horizontal direction is alf_block_size x alf_block_x, and the vertical direction is alf_block_size x alf_block_y. Consequently, the region of the i-th ALF block is the range of alf_block_size x alf_block_size pixels from that upper left pixel.
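For illustration, the following is a minimal sketch, in Python, of Expression (4) through Expression (7); the helper names are hypothetical.

```python
# A minimal sketch of Expressions (4) through (7): tiling the frame with
# square ALF blocks and locating the i-th block.

def num_alf_blocks(pic_width_in_mbs_minus1, pic_height_in_map_units_minus1,
                   alf_block_size):
    """Numbers of ALF blocks in the horizontal and vertical directions."""
    width_px = 16 * (pic_width_in_mbs_minus1 + 1)
    height_px = 16 * (pic_height_in_map_units_minus1 + 1)
    num_x = (width_px + alf_block_size - 1) // alf_block_size   # Expression (4)
    num_y = (height_px + alf_block_size - 1) // alf_block_size  # Expression (5)
    return num_x, num_y

def alf_block_region(i, num_alf_block_x, alf_block_size):
    """Top-left pixel position and size of the i-th ALF block."""
    alf_block_x = i % num_alf_block_x    # Expression (6)
    alf_block_y = i // num_alf_block_x   # Expression (7)
    return (alf_block_size * alf_block_x, alf_block_size * alf_block_y,
            alf_block_size, alf_block_size)
```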
[0154] In step S175, the control unit 171 determines whether or not the slice region to be processed is included within the region of the ALF block to be processed that has been identified as described above. In the event that it is determined that the slice region to be processed is included, the flow advances to step S176.
[0155] In step S176, the control unit 171 determines whether or not the value of the filter block flag is "1". In the event that the value of the filter block flag is "1", i.e., an instruction has been given to perform filter processing on the ALF block to be processed, control is performed such that the selection unit 173 selects the output of the adaptive filter 172, and the flow advances to step S177. In step S177, the control unit 171 selects a pixel to be processed from the unprocessed pixels, in a predetermined order such as, for example, raster scan order or the like.
[0156] In step S178, the control unit 171 determines whether or not pixels of the neighboring slice are required for the filter processing of the selected pixel to be processed. In the event that pixels of the neighboring slice are included in the pixels surrounding the pixel to be processed, i.e., determination is made that the pixel to be processed is a pixel near a slice border, the flow advances to step S179.
[0157] In step S179, the control unit 171 determines whether or not the value of the border control flag included in the control information obtained by the control information generation unit 112 is "1". In the event that determination is made that the value of the border control flag is "1", the flow advances to step S180.
[0158] In step S180, the control unit 171 selects the first adaptive filter for border 183 as the adaptive filter, and causes the first adaptive filter for border 183 to perform the filter processing across slices as shown in A in Fig. 7. After the processing of step S180 ends, the flow advances to step S183.
[0159] In addition, in step S179, in the event that it is determined that the value of the border control flag is "0", the flow advances to step S181.
[0160] In step S181, control unit 171 selects the second adaptive filter for border 184 as the adaptive filter, and causes the second adaptive filter for border 184 to perform closed filter processing on the current slice as shown in B in Fig. 7. After the processing of step S181 is finished, the flow advances to step S183.
[0161] In addition, in the event that a determination is made in step S178 that the pixel to be processed is not a pixel near a slice boundary, the flow advances to step S182.
[0162] In step S182, control unit 171 selects the adaptive slice filter 182 as the adaptive filter, and causes the adaptive slice filter 182 to perform normal filter processing using only pixels from the current slice. After processing at step S182 is complete, the flow proceeds to step S183.
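For illustration, the branching of steps S176 through S182 can be summarized by the following minimal sketch, in Python; the callables stand for the adaptive slice filter 182 and the first and second adaptive filters for border 183 and 184, and all names are hypothetical.

```python
# A minimal sketch of the per-pixel branching of steps S176 through S182.

def filter_one_pixel(pixel, filter_block_flag, border_control_flag,
                     needs_neighbor_slice, filter_in_slice,
                     filter_across_slices, filter_closed_in_slice):
    if filter_block_flag != 1:              # step S176: flag "0": no filtering
        return pixel
    if not needs_neighbor_slice(pixel):     # step S178: not near a slice border
        return filter_in_slice(pixel)       # step S182: normal filter processing
    if border_control_flag == 1:            # step S179
        return filter_across_slices(pixel)  # step S180: use the neighboring slice
    return filter_closed_in_slice(pixel)    # step S181: closed in the current slice
```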
[0163] In addition, in the event that determination is made in step S176 that the value of the filter block flag is "0", the flow advances to step S183. Additionally, in the event that determination is made in step S175 that the ALF block to be processed does not include the region of the slice to be processed, the flow also advances to step S183.
[0164] In step S183, control unit 171 determines whether all pixels within the ALF block to be processed have been processed. In the event that a determination is made that an unprocessed pixel exists, the flow returns to step S177 and subsequent processing is repeated.
[0165] In addition, in the event that determination is made in step S183 that all pixels within the ALF block to be processed have been processed, the flow advances to step S184.
[0166] In step S184, the control unit 171 determines whether or not all the ALF blocks within the frame have been processed. In the event that determination is made that an unprocessed ALF block exists, the flow returns to step S173, and the subsequent processing is repeated. In addition, in the event that determination is made in step S184 that all the ALF blocks have been processed, the adaptive filter control processing ends, the flow returns to step S111 in Fig. 9, and the processing of step S112 and on is performed.
[0167] Note that the filter processing for the pixels to be processed, performed in step S180 through step S182, is each executed independently as a task separate from the adaptive filter control processing. That is to say, in step S180 through step S182, only execution of the adaptive filter processing is specified, and that adaptive filter processing is performed as appropriate thereafter. That is, each adaptive filter processing is executable in parallel with the adaptive filter control processing and with the adaptive filter processing for other pixels.
[0168] These filter processings will be described with reference to the flowcharts in Fig. 13 and Fig. 14. First, an example of the flow of the filter processing performed by the first adaptive filter for border 183 will be described with reference to the flowchart in Fig. 13.
[0169] After execution of the filter processing across slices is instructed in step S180 in Fig. 12, in step S201 the first adaptive filter for border 183 monitors the temporary storage area 181, determines whether or not all the pixels of the region surrounding the pixel to be processed have been accumulated, and waits until they are accumulated. The pixels of the surrounding region (i.e., surrounding pixels) here include pixels of the neighboring slice as well. In the event that determination is made that all the pixels have been accumulated in the temporary storage area 181, the flow advances to step S202.
[0170] In step S202, the first adaptive filter for border 183 obtains the pixels of the surrounding region (surrounding pixels) from the temporary storage area 181, and in step S203 it uses those surrounding pixels and the filter coefficients set at the control unit 171 to perform the filter processing of the pixel to be processed. After the filter processing ends, the first adaptive filter for border 183 supplies the filter processing result to the selection unit 173, and the filter processing ends.
[0171] In the event that the value of the filter block flag is "1", the selection unit 173 selects that filter processing result, and supplies it to the frame memory 114 as the decoded image subjected to filter processing, so as to be stored.
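For illustration, the following is a minimal sketch, in Python, of the filter processing of steps S202 and S203 as a weighted sum over the surrounding pixels, which may belong to the neighboring slice. The square tap shape and the names frame and coeff are assumptions; the description above does not fix a particular tap shape.

```python
# A minimal sketch of steps S202 and S203: once all surrounding pixels
# (including those of the neighboring slice) are available, the filter output
# is a weighted sum with the filter coefficients set at the control unit 171.

def filter_across_slices(frame, x, y, coeff, radius):
    """Filter the pixel at (x, y) using all surrounding pixels, regardless of
    which slice they belong to (assumes the window lies inside the frame)."""
    acc = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += coeff[dy + radius][dx + radius] * frame[y + dy][x + dx]
    return acc
```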
[0172] Next, an example of the flow of the filter processing performed by the second adaptive filter for border 184 will be described with reference to the flowchart in Fig. 14.
[0173] After execution of the filter processing closed in the current slice is instructed in step S181 in Fig. 12, in step S221 the second adaptive filter for border 184 duplicates surrounding pixels located in the current slice that are already held in the temporary storage area 181, and generates fill data for the positions of the surrounding pixels located in the neighboring slice.
[0174] After generating the fill data, in step S222 the second adaptive filter for border 184 uses the surrounding pixels, including the fill data, and the filter coefficients set at the control unit 171 to perform the filter processing of the pixel to be processed. After the filter processing ends, the second adaptive filter for border 184 supplies the filter processing result to the selection unit 173, and the filter processing ends.
[0175] In the event that the value of the filter block flag is "1", the selection unit 173 selects that filter processing result, and supplies it to the frame memory 114 as the decoded image subjected to filter processing, so as to be stored.
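For illustration, the following is a minimal sketch, in Python, of the filter processing closed in the current slice (steps S221 and S222), assuming, as in Fig. 7, that the slice border runs horizontally, so that only the row coordinate needs clamping; slice_top, slice_bottom, and the other names are hypothetical.

```python
# A minimal sketch of steps S221 and S222: fill data for surrounding pixels
# that fall in the neighboring slice is generated by duplicating the nearest
# rows of the current slice, and the weighted sum is then taken as usual.

def filter_closed_in_slice(frame, x, y, coeff, radius, slice_top, slice_bottom):
    """Filter the pixel at (x, y) without reading the neighboring slice."""
    acc = 0.0
    for dy in range(-radius, radius + 1):
        py = min(max(y + dy, slice_top), slice_bottom)  # duplicate border rows
        for dx in range(-radius, radius + 1):
            acc += coeff[dy + radius][dx + radius] * frame[py][x + dx]
    return acc
```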
[0176] As described above, based on the value of the border control flag, the method of filter processing for pixels near a border is selected as appropriate from multiple methods, whereby the adaptive filter processing unit 113 can suppress deterioration of the filter processing effects due to local control of the filter processing when encoding. For example, by performing the filter processing across slices, the adaptive filter processing unit 113 can improve the image quality resulting from the filter processing. Also, by performing the filter processing closed in the current slice, the adaptive filter processing unit 113 can perform the filter processing with low delay.
[0177] At this time, the adaptive filter processing unit 113 selects the filter processing method based on the border control flag determined based on the system specification information, so the filter processing can be performed without breakdown of the processing.
[0178] In addition, the border control flag generation unit 132 sets the border control flag based on the system specification information, whereby the adaptive filter processing unit 113 can be made to perform the filter processing so as to suppress deterioration of the effects.
[0179] That is to say, the image encoding device 100 can suppress deterioration in the effects of filter processing due to local control of filter processing when encoding.
[0180] Note that the lossless coding unit 106 encodes the border control flag and adds it to the image compression information (embeds it in the slice header, for example). Consequently, the image encoding device 100 can cause an image decoding device, which decodes the image compression information output by the image encoding device 100, to suppress deterioration of the filter processing effects due to local control of the filter processing performed when decoding.
[0181] Here, "add" means to correlate the border control flag with the image compression information in an optional form. For example, it may be described as syntax of the image compression information, or may be described as user data. Moreover, the border control flag may be in a state of being linked with the image compression information as separate data. That is to say, "add" includes "embed", "describe", "multiplex", "link", and so forth.
[0182] In addition, while description has been made above of performing the filter processing across slices or the filter processing closed in the current slice for pixels near a slice border, the filter processing may be performed by other methods as well. Also, instead of the filter processing closed in the current slice, the filter processing may be omitted, for example.
[0183] Furthermore, it is sufficient for multiple filter processing methods for pixels near a slice border to have been prepared, and three or more methods may be prepared as options. In that case, two or more bits are required for the border control flag. Note that the number of bits of the border control flag is optional. However, the smaller this number of bits is, the more deterioration of the coding efficiency of the image compression information is suppressed, so unnecessarily increasing the number of bits is undesirable. <2. Second Embodiment> [Device configuration]
[0184] Next, an image decoding device corresponding to the image encoding device 100 described with the first embodiment will be described. Fig. 15 is a block diagram illustrating a configuration example of an embodiment of an image decoding device serving as an image processing device to which the present invention has been applied.
[0185] The image decoding device 200 decodes the image compression information output from the image encoding device 100, and generates a decoded image.
[0186] The image decoding device 200 is configured of a temporary storage area 201, a lossless decoding unit 202, an inverse quantization unit 203, an inverse orthogonal transform unit 204, a computing unit 205, and a deblocking filter 206. The image decoding device 200 also has an adaptive filter processing unit 207. The image decoding device 200 further has a screen rearrangement temporary storage area 208 and a D/A (Digital/Analog) conversion unit 209, as well as frame memory 210, an intraprediction unit 211, a motion compensation unit 212, and a selection unit 213.
[0187] The temporary storage area 201 stores the transmitted image compression information. The lossless decoding unit 202 decodes the information, supplied from the temporary storage area 201, that has been encoded by the lossless coding unit 106 in Fig. 1, using a format corresponding to the encoding format of the lossless coding unit 106.
[0188] In the event that the current macro block has been intra coded, the lossless decoding unit 202 decodes the intra prediction mode information stored in the header portion of the image compression information, and transmits this information to the intraprediction unit 211. Also, in the event that the current macro block has been inter coded, the lossless decoding unit 202 decodes the motion vector information stored in the header portion of the image compression information, and transmits this information to the motion compensation unit 212.
[0189] In addition, the lossless decoding unit 202 extracts the control information for the adaptive filter (the control information generated by the control information generation unit 112) from the slice header of the image compression information, decodes it, and supplies it to the adaptive filter processing unit 207.
[0190] The inverse quantization unit 203 subjects the coefficient data decoded by the lossless decoding unit 202 to inverse quantization using a format corresponding to the quantization format of the quantization unit 105 in Fig. 1. The inverse orthogonal transform unit 204 subjects the output of the inverse quantization unit 203 to inverse orthogonal transform using a format corresponding to the orthogonal transform format of the orthogonal transform unit 104 in Fig. 1.
[0191] The computing unit 205 adds the prediction image supplied from the selection unit 213 to the difference information subjected to the inverse orthogonal transform, and generates a decoded image. The deblocking filter 206 removes block noise from the decoded image generated by that addition processing.
[0192] The adaptive filter processing unit 207 performs filter processing on the image supplied from the deblocking filter 206, based on the filter coefficients, ALF block size, filter block flags, border control flag, and the like, supplied from the lossless decoding unit 202. The adaptive filter processing unit 207 performs adaptive filter processing in the same way as with the adaptive filter processing unit 113 in Fig. 1. Consequently, the adaptive filter processing unit 207 can reduce block noise and noise due to quantization which could not be completely removed with the deblocking filter 206.
[0193] The adaptive filter processing unit 207 supplies the image following the filter processing to the frame memory 210 so as to be stored as reference image information, and also outputs it to the screen rearrangement temporary storage area 208.
[0194] The screen rearrangement temporary storage area 208 performs rearrangement of the image. That is to say, the order of the frames rearranged for encoding by the screen rearrangement temporary storage area 102 in Fig. 1 is rearranged to the original display order. The D/A conversion unit 209 performs D/A conversion of the image supplied from the screen rearrangement temporary storage area 208, and outputs it. For example, the D/A conversion unit 209 outputs the output signals obtained by D/A conversion to a monitor (not shown), and an image is displayed.
[0195] The intraprediction unit 211 generates a prediction image based on the information supplied from the lossless decoding unit 202 in the event that the current frame has been intra coded, and outputs the generated prediction image to the selection unit 213.
[0196] In the event that the current frame has been inter coded, the motion compensation unit 212 performs motion compensation processing on the reference image information stored in the frame memory 210, based on the motion vector information supplied from the lossless decoding unit 202.
[0197] In the event that the current macro block has been intra coded, the selection unit 213 connects to the intraprediction unit 211, and supplies the image supplied from the intraprediction unit 211 to the computing unit 205 as the prediction image. Also, in the event that the current macro block has been inter coded, the selection unit 213 connects to the motion compensation unit 212, and supplies the image supplied from the motion compensation unit 212 to the computing unit 205 as the prediction image. [Processing Flow]
[0198] An example of the decoding processing flow that this image decoding device 200 performs will be described with reference to the flowchart in Fig. 16.
[0199] In step S301, the temporary storage area 201 stores the transmitted image. In step S302, the lossless decoding unit 202 extracts the control information for the adaptive filter processing from the slice header of the image compression information, and decodes it. The decoded control information is supplied to the adaptive filter processing unit 207.
[0200] Also, in step S303, the lossless decoding unit 202 decodes the compressed image supplied from the temporary storage area 201. Specifically, the I picture, P picture, and B picture encoded by the lossless coding unit 106 in Fig. 1 are decoded.
[0201] At this time, the motion vector information, reference frame information, prediction mode information (information indicating intra prediction mode or inter prediction mode), and so forth are also decoded.
[0202] Specifically, in the event that the prediction mode information is intra prediction mode information, the prediction mode information is supplied to the intraprediction unit 211. In the event that the prediction mode information is inter prediction mode information, the motion vector information and reference frame information corresponding to the prediction mode information are supplied to the motion compensation unit 212.
[0203] In step S304, the inverse quantization unit 203 inversely quantizes the transform coefficients decoded in step S303, using a property corresponding to the property of the quantization unit 105 in Fig. 1. In step S305, the inverse orthogonal transform unit 204 subjects the transform coefficients inversely quantized in step S304 to inverse orthogonal transform, using a property corresponding to the property of the orthogonal transform unit 104 in Fig. 1. This means that the difference information corresponding to the input of the orthogonal transform unit 104 in Fig. 1 (the output of the computing unit 103) has been decoded.
[0204] In step S306, the computing unit 205 adds the prediction image, selected in the processing of step S312 described later, to the difference information. The original image is thus decoded. In step S307, the deblocking filter 206 subjects the image output from the computing unit 205 to filtering, thereby removing block noise.
[0205] In step S308, the adaptive filter processing unit 207 performs adaptive filter control processing for subjecting the image, which has been subjected to the deblocking filter processing, further to adaptive filter processing. This adaptive filter control processing is the same as the processing which the adaptive filter processing unit 113 in Fig. 1 performs. That is to say, this adaptive filter control processing is the same as the case described with reference to the flowchart in Fig. 12, other than using the control information supplied from the lossless decoding unit 202. Note, however, that the control information supplied from this lossless decoding unit 202 was generated by the control information generation unit 112 in Fig. 1, and is substantially equivalent to the control information, supplied from the control information generation unit 112, which the adaptive filter processing unit 113 in Fig. 1 uses.
[0206] Due to this adaptive filter control processing, block noise and noise due to quantization which could not be completely removed with the deblocking filter processing can be reduced.
[0207] In step S309, frame memory 210 stores the image subjected to filtering.
[0208] In the event that intra prediction mode information has been supplied, in step S310 the intraprediction unit 211 performs intra prediction processing in the intra prediction mode. In the event that inter prediction mode information has been supplied, in step S311 the motion compensation unit 212 performs motion compensation processing in the inter prediction mode.
[0209] In step S312, the selection unit 213 selects the prediction image. That is to say, one of the prediction image generated by the intraprediction unit 211 and the prediction image generated by the motion compensation unit 212 is selected, and the selected prediction image is supplied to the computing unit 205.
[0210] For example, in the event of an image which has been intra coded, the selection unit 213 selects the prediction image generated by the intraprediction unit 211, and supplies it to the computing unit 205. In the event of an image which has been inter coded, the selection unit 213 selects the prediction image generated by the motion compensation unit 212, and supplies it to the computing unit 205.
[0211] In step S313, the screen rearrangement temporary storage area 208 performs rearrangement. Specifically, the order of the frames rearranged for encoding by the screen rearrangement temporary storage area 102 of the image encoding device 100 is rearranged to the original display order.
[0212] In step S314, the D/A conversion unit 209 performs D/A conversion of the image from the screen rearrangement temporary storage area 208. This image is output to a monitor not shown, and the image is displayed.
[0213] Thus, with the image decoding device 200, the lossless decoding unit 202 extracts and decodes the control information supplied from the image encoding device 100, and the adaptive filter processing unit 207 performs adaptive filter control processing (and filter processing) in the same way as with the adaptive filter processing unit 113 of the image encoding device 100, using this control information.
[0214] By performing such adaptive filter control processing, the adaptive filter processing unit 207 can suppress deterioration in the effects of filter processing due to the local control of filter processing carried out when decoding.
[0215] Consequently, the image decoding device 200 can suppress deterioration of the filter processing effects due to local control of the filter processing performed when decoding. <3. Third Embodiment> [Image Processing System]
[0216] Note that while description has been made above of the system specification management unit 141 of the control information generation unit 112 holding or collecting system specification information, the system specification information may be made to include specification information of the image decoding device.
[0217] In this case, in the event that the specification information of the image decoding device is not known in advance, the image encoding device needs to collect the specification information of the image decoding device at a predetermined timing, such as at the time of communicably connecting the image encoding device and the image decoding device, for example. At this time, the image encoding device may communicate with the image decoding device so as to obtain the specification information from the image decoding device, or specification information input by the user, for example, may be obtained.
[0218] Now, an image processing system (not shown) is a system where an image encoding device 300 shown in Fig. 17 and an image decoding device 400 shown in Fig. 18 are communicably connected via a communication medium such as a network. The following is a description of the configuration of these devices.
[0219] Fig. 17 is a block diagram illustrating another example of an image encoding device serving as an image processing device to which the present invention has been applied.
[0220] The image encoding device 300 is basically the same device as the image encoding device 100 in Fig. 1, and has an image encoding unit 301.
[0221] The configuration of the image encoding unit 301 is the same as the configuration of the image encoding device 100, having the A/D conversion unit 101 through the rate control unit 119, and operates in the same way as with the case described with the first embodiment.
[0222] In addition to the image encoding unit 301, the image encoding device 300 further has an input unit 302, a communication unit 303, and an information collection unit 304.
[0223] The input unit 302 accepts user operations and the like. The communication unit 303 communicates with the image decoding device 400 via a network or the like. The information collection unit 304 collects specification information of the image decoding device 400 input via the input unit 302, or specification information supplied from the image decoding device 400 via the communication unit 303. The information collection unit 304 supplies the collected specification information to the system specification management unit 141 of the control information generation unit 112.
[0224] Fig. 18 is a block diagram illustrating another example of an image decoding device serving as an image processing device to which the present invention has been applied.
[0225] The image decoding device 400 is basically the same device as the image decoding device 200 in Fig. 15, and has an image decoding unit 401.
[0226] The configuration of the image decoding unit 401 is the same as the configuration of the image decoding device 200, having the temporary storage area 201 through the selection unit 213, and operates in the same way as with the case described with the second embodiment.
[0227] In addition to the image decoding unit 401, the image decoding device 400 further has an information supply unit 402 and a communication unit 403.
[0228] The information supply unit 402 holds the specification information of the image decoding device 400, and supplies that specification information based on a request from the image encoding device 300. The communication unit 403 communicates with the image encoding device 300 via a network or the like. The communication unit 403 accepts a request from the image encoding device 300, and supplies it to the information supply unit 402. The communication unit 403 also supplies the specification information of the image decoding device 400, supplied from the information supply unit 402 in accordance with that request, to the image encoding device 300. [Processing Flow]
[0229] An example of the flow of the exchange of specification information in such an image processing system will be described with reference to the flowchart in Fig. 19.
[0230] In step S401, the information collection unit 304 of the image encoding device 300 requests the image decoding device 400 for the specification information of the image decoding device 400, via the communication unit 303. Upon receiving that request in step S421, the communication unit 403 of the image decoding device 400 supplies the request to the information supply unit 402.
[0231] In step S422, the information supply unit 402 supplies the specification information from the image decoding device 400 to the requesting image encoding device 300 via the communication unit 403, as a response to the request.
[0232] After obtaining the specification information in step S402 via the communication unit 303, the information collection unit 304 of the image coding device 300 supplies it to the system specification management unit 141 of the control information generation unit 112.
[0233] In step S403, the image encoding unit 301 performs encoding processing based on that specification information, and generates a code stream. In step S404, the image encoding unit 301 supplies the generated code stream to the image decoding device 400.
[0234] In step S423, the image decoding unit 401 of the image decoding device 400 obtains the code stream supplied from the image encoding device 300. In step S424, the image decoding unit 401 performs decoding processing of that code stream.
[0235] Accordingly, the specification information of the image decoding device 400 is exchanged before the image encoding processing and the image decoding processing, so the image encoding device 300 can create border control flags based on system specification information that includes the specification information of the image decoding device 400.
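For illustration, the exchange of Fig. 19 could look like the following minimal sketch, in Python, using JSON over a TCP socket purely as an example; the description above does not specify any wire format, so every detail below is an assumption.

```python
# A minimal sketch of the exchange in Fig. 19 (steps S401, S421, S422, S402);
# the message format and helper names are assumptions.

import json
import socket

def request_spec_info(decoder_address):
    """Encoder side: request and receive the decoder's specification info."""
    with socket.create_connection(decoder_address) as sock:    # step S401
        sock.sendall(b'{"request": "spec_info"}\n')
        reply = sock.makefile("r").readline()                  # step S402
    return json.loads(reply)

def serve_spec_info(listen_address, spec_info):
    """Decoder side: answer one specification information request."""
    with socket.create_server(listen_address) as server:
        conn, _ = server.accept()                              # step S421
        conn.makefile("r").readline()                          # read the request
        conn.sendall((json.dumps(spec_info) + "\n").encode())  # step S422
        conn.close()
```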
[0236] Consequently, the image encoding device 300 and the image decoding device 400 can suppress deterioration of the filter processing effects due to local control of the filter processing performed when encoding or decoding, as described with the first and second embodiments. <4. Fourth Embodiment> [Description of QALF]
[0237] ALF blocks may have a quad tree structure, as described with NPL 3. This technique is called QALF (Quad tree-based Adaptive Loop Filter). The quad tree structure is a hierarchical structure where, at a lower hierarchical level, the region of one ALF block one hierarchical level above is divided into four.
[0238] Fig. 20 illustrates an example where division of an ALF block is expressed by a quad tree structure where the maximum number of layers is three, with a filter block flag being specified for each ALF block.
[0239] A in Fig. 20 indicates layer 0, which is the ALF block serving as the root of the quad tree structure. In the quad tree structure, each ALF block has a block partition flag indicating whether or not it is divided into four at the lower hierarchical level. The value of the block partition flag of the ALF block shown in A in Fig. 20 is "1". That is to say, this ALF block is divided into four at the lower hierarchical level (layer 1). B in Fig. 20 shows layer 1. That is to say, four ALF blocks are formed in layer 1.
[0240] In the event that the block partition flag is "0", the block is not divided into four at any further lower hierarchical level. That is to say, there is no further division, and a filter block flag is generated for that ALF block. In other words, an ALF block whose block partition flag is "0" also has a filter block flag. The "0" to the left of "0-1" shown in B in Fig. 20 indicates the block partition flag of that ALF block, and the "1" to the right indicates the filter block flag of that ALF block.
[0241] The two ALF blocks whose block partition flag in layer 1 is "1" are divided into four at the lower hierarchical level (layer 2). C in Fig. 20 illustrates layer 2. That is to say, ten ALF blocks are formed in layer 2.
[0242] In the same way, ALF blocks whose block partition flag is "0" in layer 2 are also assigned a filter block flag. In C in Fig. 20, the block partition flag of one ALF block is "1". That is to say, that ALF block is divided into four at the further lower hierarchical level (layer 3). D in Fig. 20 shows layer 3. That is to say, 13 ALF blocks are formed in layer 3.
[0243] By forming a quad tree as shown in Fig. 20, the structure of the ALF blocks finally becomes as shown in Fig. 21. Thus, with a quad tree structure, the sizes of the ALF blocks differ with each hierarchical level. That is to say, by using a quad tree structure, the sizes of the ALF blocks can be made to differ from one another within the frame.
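For illustration, the following is a minimal sketch, in Python, of such a quad tree of ALF blocks: each block carries a block partition flag, and undivided blocks (flag "0") carry a filter block flag. The class layout is an assumption, not the syntax of the image compression information.

```python
# A minimal sketch of a QALF quad tree node and of enumerating its leaves,
# i.e. the undivided ALF blocks that hold filter block flags.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QalfBlock:
    block_partition_flag: int                      # 1: divided into four below
    filter_block_flag: Optional[int] = None        # present only on leaves
    children: Optional[List["QalfBlock"]] = None   # the four sub-blocks if divided

def leaf_blocks(block):
    """Yield every undivided ALF block of the quad tree."""
    if block.block_partition_flag == 0:
        yield block
    else:
        for child in block.children:
            yield from leaf_blocks(child)
```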
[0244] Control of the filter block flag in each ALF block is the same as with the other embodiments described above. That is to say, filter processing is not performed in regions where the value of the filter block flag is "0" (the shaded portions in Fig. 21).
[0245] Fig. 22 illustrates an example of encoding the region of slice 1 in Fig. 5 using the QALF technique. Here, the region of the heavy line 521 represents the region of slice 1. Regardless of the ALF structure, there may be cases where the surrounding pixels span multiple slices when performing filter processing on pixels near a slice border. Consequently, the control method of the filter processing for pixels near a slice border can be carried out in the case of QALF in the same way as with the embodiments described above.
[0246] That is to say, even in the case of quad tree structured ALF blocks, the image encoding device and the image decoding device can suppress deterioration of the filter processing effects due to local control of the filter processing performed when encoding or decoding. <5. Fifth Embodiment> [Personal computer]
[0247] The above-described series of processing may be executed by hardware, or may be executed by software. In this case, a configuration as a personal computer such as shown in Fig. 23 may be made, for example.
[0248] In Fig. 23, a CPU 601 of a personal computer 600 executes various types of processing following programs stored in ROM (Read Only Memory) 602 or programs loaded into RAM (Random Access Memory) 603 from a storage unit 613. The RAM 603 also stores data and so forth necessary for the CPU 601 to execute the various types of processing, as appropriate.
[0249] The CPU 601, ROM 602, and RAM 603 are mutually connected via a bus 604. This bus 604 is also connected to an input/output interface 610.
[0250] Connected to the input/output interface 610 are an input unit 611 made up of a keyboard, a mouse, and so forth, an output unit 612 made up of a display such as a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) or the like and a speaker and so forth, a storage unit 613 made up of a hard disk and so forth, and a communication unit 614 made up of a modem and so forth. The communication unit 614 performs communication processing via networks including the Internet.
[0251] Also connected to the input/output interface 610 is a drive 615, as needed, to which a removable medium 621 such as a magnetic disk, an optical disc, a magneto-optical disc, semiconductor memory, or the like is mounted as appropriate, and computer programs read out therefrom are installed in the storage unit 613 as needed.
[0252] In the event of executing the above-described series of processing by software, a program configuring that software is installed from a network or a recording medium.
[0253] This recording medium is not only configured of a removable medium 621 composed of a magnetic disk (including flexible disk), optical disc (including CD-ROM (Compact Disc - Read Only Memory) and DVD (Digital Versatile Disc)), magneto-optical disc (including MD (Mini Disc)), semiconductor memory, or the like, in which the program is recorded and which is distributed so as to distribute the program to users separately from the main unit of the device, but is also configured of ROM 602, a hard disk included in the storage unit 613, and so forth, in which the program is recorded, distributed to users in a state of having been built into the main unit of the device beforehand.
[0254] Note that a program which the computer executes may be a program in which the processing is performed in time sequence in the order described in this Specification, or may be a program in which the processing is performed in parallel, or at a necessary timing, such as when a call-up has been performed.
[0255] Also, in this Specification, the steps describing the program recorded in the recording medium include, as a matter of course, processing performed in time sequence following the described order, and also processing executed in parallel or individually, without necessarily being processed in time sequence.
[0256] Also, in the present Specification, the term system represents the entirety of equipment configured of multiple devices.
[0257] Further, a configuration described above as one device (or processing unit) may be divided and configured as multiple devices (or processing units). Conversely, configurations described above as multiple devices (or processing units) may be integrated and configured as a single device (or processing unit). Also, configurations other than those described above may of course be added to the configuration of the devices (or processing units). Moreover, part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit), as long as the configuration and operations of the system as a whole are substantially the same. That is to say, the embodiments of the present invention are not restricted to the embodiments described above, and various modifications may be made without departing from the essence of the present invention.
[0258] For example, the image encoding device 100, image decoding device 200, image encoding device 300, and image decoding device 400 described above may be applied to various electronic devices. The following is a description of examples thereof. <6. Sixth Embodiment> [Television receiver]
[0259] Fig. 24 is a block diagram illustrating a principal configuration example of a television receiver using the image decoding device 200 or the image decoding device 400 to which the present invention has been applied.
[0260] A television receiver 1000 shown in Fig. 24 includes a terrestrial tuner 1013, a video decoder 1015, a video signal processing circuit 1018, a graphics generation circuit 1019, a panel operating circuit 1020, and a display panel 1021.
[0261] The terrestrial tuner 1013 receives broadcast wave signals of terrestrial analog broadcasting via an antenna, demodulates them, obtains video signals, and supplies these to the video decoder 1015. The video decoder 1015 subjects the video signals supplied from the terrestrial tuner 1013 to decoding processing, and supplies the obtained digital component signals to the video signal processing circuit 1018.
[0262] The video signal processing circuit 1018 subjects the video data supplied from the video decoder 1015 to predetermined processing such as noise removal or the like, and supplies the obtained video data to the graphics generation circuit 1019.
[0263] The graphics generation circuit 1019 generates video data of a program to be displayed on a display panel 1021, image data due to processing based on an application supplied via a network, or the like, and supplies the generated video data or image data to the panel operating circuit 1020. The graphics generation circuit 1019 also performs processing such as supplying, to the panel operating circuit 1020 as appropriate, video data obtained by generating video data (graphics) for the user to display a screen used for selection of an item or the like, and superimposing this on the video data of the program.
[0264] The panel operating circuit 1020 drives the display panel 1021 based on the data supplied from the graphics generation circuit 1019, to display the video of the program or the above-mentioned various screens on the display panel 1021.
[0265] The display panel 1021 is made up of an LCD (Liquid Crystal Display) or the like, and displays the video of the program and so forth in accordance with the control by the panel operating circuit 1020.
[0266] The television receiver 1000 also includes an audio A/D (Analog/Digital) conversion circuit 1014, an audio signal processing circuit 1022, an audio synthesis/echo cancellation circuit 1023, an audio amplifier circuit 1024, and a speaker 1025.
[0267] The terrestrial tuner 1013 demodulates the received broadcast wave signal, thereby obtaining not only a video signal but also an audio signal. The terrestrial tuner 1013 supplies the obtained audio signal to the audio A/D conversion circuit 1014.
[0268] The audio A/D conversion circuit 1014 subjects the audio signal supplied from the terrestrial tuner 1013 to A/D conversion processing, and supplies the obtained digital audio signal to the audio signal processing circuit 1022.
[0269] The audio signal processing circuit 1022 subjects the audio data supplied from the audio A/D conversion circuit 1014 to predetermined processing such as noise removal or the like, and supplies the obtained audio data to the audio synthesis/echo cancellation circuit 1023.
[0270] The audio synthesis/echo cancellation circuit 1023 supplies the audio data supplied from the audio signal processing circuit 1022 to the audio amplifier circuit 1024.
[0271] The audio amplifier circuit 1024 subjects the audio data supplied from the audio synthesis/echo cancellation circuit 1023 to D/A conversion processing and amplifier processing, adjusts it to a predetermined volume, and then outputs the audio from the speaker 1025.
[0272] Further, the television receiver 1000 also includes a digital tuner 1016 and an MPEG decoder 1017.
[0273] The digital tuner 1016 receives broadcast wave signals of digital broadcasting (terrestrial digital broadcasting, BS (Broadcasting Satellite)/CS (Communications Satellite) digital broadcasting) via an antenna, demodulates them to obtain MPEG-TS (Moving Picture Experts Group - Transport Stream), and supplies this to the MPEG decoder 1017.
[0274] The MPEG decoder 1017 descrambles the scrambling applied to the MPEG-TS supplied from the digital tuner 1016, and extracts a stream including the data of the program serving as the playback object (viewing object). The MPEG decoder 1017 decodes the audio packets making up the extracted stream and supplies the obtained audio data to the audio signal processing circuit 1022, and also decodes the video packets making up the stream and supplies the obtained video data to the video signal processing circuit 1018. In addition, the MPEG decoder 1017 supplies EPG (Electronic Program Guide) data extracted from the MPEG-TS to a CPU 1032 via a path not shown.
[0275] The television receiver 1000 uses the above-mentioned image decoding device 200 or image decoding device 400 as the MPEG decoder 1017 which decodes the video packets in this way. Note that the MPEG-TS transmitted from the broadcasting station or the like has been encoded by the image encoding device 100 or the image encoding device 300.
[0276] The MPEG decoder 1017 extracts and decodes the control information supplied from the image encoding device 100 or the image encoding device 300, in the same way as with the image decoding device 200 or the image decoding device 400, and performs adaptive filter control processing (and filter processing) using this control information. Consequently, the MPEG decoder 1017 can suppress deterioration of the effects due to local control of the filter processing.
[0277] The video data supplied from the MPEG decoder 1017 is, in the same way as with the video data supplied from the video decoder 1015, subjected to predetermined processing at the video signal processing circuit 1018, superimposed with generated video data and so forth at the graphics generation circuit 1019 as appropriate, supplied to the display panel 1021 via the panel operating circuit 1020, and the image thereof is displayed.
[0278] The audio data supplied from the MPEG decoder 1017 is, in the same way as with the audio data supplied from the audio A/D conversion circuit 1014, subjected to predetermined processing at the audio signal processing circuit 1022, supplied to the audio amplifier circuit 1024 via the audio synthesis/echo cancellation circuit 1023, and subjected to D/A conversion processing and amplifier processing. As a result, audio adjusted to a predetermined volume is output from the speaker 1025.
[0279] The television receiver 1000 also includes a microphone 1026 and an A/D conversion circuit 1027.
[0280] The A/D conversion circuit 1027 receives the user's audio signals collected by the microphone 1026 provided to the television receiver 1000 for audio conversation, subjects the received audio signal to A/D conversion processing, and supplies the obtained digital audio data to the audio synthesis/echo cancellation circuit 1023.
[0281] In the event that the audio data of the user (user A) of the television receiver 1000 has been supplied from the A/D conversion circuit 1027, the audio synthesis/echo cancellation circuit 1023 performs echo cancellation with the audio data of the user (user A) taken as the object, and outputs audio data obtained by synthesizing the audio data of user A with other audio data, or the like, from the speaker 1025 via the audio amplifier circuit 1024.
[0282] Further, the television receiver 1000 also includes an audio codec 1028, an internal bus 1029, SDRAM (Synchronous Dynamic Random Access Memory) 1030, flash memory 1031, a CPU 1032, a USB (Universal Serial Bus) I/F 1033, and a network I/F 1034.
[0283] The A/D conversion circuit 1027 receives the user's audio signal collected by the microphone 1026 provided to the television receiver 1000 for audio conversation, subjects the received audio signal to A/D conversion processing, and supplies the obtained digital audio data to the audio codec 1028.
[0284] The audio codec 1028 converts the audio data supplied from the A/D conversion circuit 1027 into data of a predetermined format for transmission via a network, and supplies it to the network I/F 1034 via the internal bus 1029.
[0285] The network I/F 1034 is connected to the network via a cable mounted on a network terminal 1035. The network I/F 1034 transmits the audio data supplied from the audio codec 1028 to another device connected to that network, for example. The network I/F 1034 also receives, via the network terminal 1035, audio data transmitted from another device connected thereto via the network, and supplies this to the audio codec 1028 via the internal bus 1029, for example.
[0286] The audio codec 1028 converts the audio data supplied from the network I/F 1034 into data of a predetermined format, and supplies this to the audio synthesis/echo cancellation circuit 1023.
[0287] The audio synthesis/echo cancellation circuit 1023 performs echo cancellation with the audio data supplied from the audio codec 1028 taken as the object, and outputs audio data obtained by synthesizing that audio data with other audio data, or the like, from the speaker 1025 via the audio amplifier circuit 1024.
[0288] The SDRAM 1030 stores various types of data necessary for the CPU 1032 to perform processing.
[0289] The flash memory 1031 stores a program to be executed by the CPU 1032. The program stored in the flash memory 1031 is read out by the CPU 1032 at a predetermined timing, such as when the television receiver 1000 is activated, or the like. EPG data obtained via digital broadcasting, data obtained from a predetermined server via the network, and so forth are also stored in the flash memory 1031.
[0290] For example, MPEG-TS including content data obtained from a predetermined server via the network by the control of the CPU 1032 is stored in the flash memory 1031. The flash memory 1031 supplies this MPEG-TS to the MPEG decoder 1017 via the internal bus 1029, by the control of the CPU 1032, for example.
[0291] The MPEG decoder 1017 processes this MPEG-TS in the same way as with the case of the MPEG-TS supplied from the digital tuner 1016. In this way, the television receiver 1000 receives content data made up of video, audio, and so forth via the network and decodes it using the MPEG decoder 1017, whereby the video thereof can be displayed and the audio thereof can be output.
[0292] The television receiver 1000 also includes a light receiving unit 1037 for receiving infrared signals transmitted from a remote controller 1051.
[0293] The light receiving unit 1037 receives infrared rays from the remote controller 1051, and outputs a control code representing the content of the user operation, obtained by demodulation, to the CPU 1032.
[0294] The CPU 1032 executes the program stored in the flash memory 1031 to control the entire operation of the television receiver 1000 in accordance with the control code supplied from the light receiving unit 1037, and so forth. The CPU 1032 and the units of the television receiver 1000 are connected via a path not shown.
[0295] The USB I/F 1033 performs transmission/reception of data to and from an external device of the television receiver 1000 connected via a USB cable mounted on a USB terminal 1036. The network I/F 1034 connects to the network via the cable mounted on the network terminal 1035, and also performs transmission/reception of data other than audio data with various devices connected to the network.
[0296] The television receiver 1000 uses the image decoding device 200 or the image decoding device 400 as the MPEG decoder 1017, whereby deterioration of the effects due to local control of filter processing can be suppressed for broadcast signals received via the antenna or content data obtained via the network. <7. Seventh Embodiment> [Cell phone]
[0297] Fig. 25 is a block diagram illustrating a principal configuration example of a cell phone using the image encoding device and image decoding device to which the present invention has been applied.
[0298] A cell phone 1100 shown in Fig. 25 includes a main control unit 1150 configured to integrally control the units, a power supply circuit unit 1151, an operation input control unit 1152, an image encoder 1153, a camera I/F unit 1154, an LCD control unit 1155, an image decoder 1156, a multiplexing/separating unit 1157, a recording/playback unit 1162, a modulation/demodulation circuit unit 1158, and an audio codec 1159. These are mutually connected via a bus 1160.
[0299] In addition, the cell phone 1100 includes operating keys 1119, a CCD (Charge Coupled Device) camera 1116, a liquid crystal display 1118, a storage unit 1123, a transmission/reception circuit unit 1163, an antenna 1114, a microphone (mic) 1121, and a speaker 1117.
[0300] When the end-call/power key is turned on by a user operation, the power supply circuit unit 1151 supplies power to the units from a battery pack, thereby activating the cell phone 1100 into an operational state.
[0301] The cell phone 1100 performs various operations, such as transmission/reception of audio signals, transmission/reception of e-mail and image data, image shooting, data recording, and so forth, in various modes such as a voice call mode, a data communication mode, and so forth, based on the control of the main control unit 1150 made up of a CPU, ROM, RAM, and so forth.
[0302] For example, in the voice call mode, the cell phone 1100 converts the audio signal collected by the microphone (mic) 1121 into digital audio data with the audio codec 1159, subjects this to spectrum spread processing at the modulation/demodulation circuit unit 1158, and subjects this to digital/analog conversion processing and frequency conversion processing at the transmission/reception circuit unit 1163. The cell phone 1100 transmits the signal for transmission obtained by this conversion processing to an unshown base station via the antenna 1114. The signal for transmission (audio signal) transmitted to the base station is supplied to the cell phone of the other party via the public telephone network.
[0303] Also, for example, in the voice call mode, the cell phone 1100 amplifies the reception signal received at the antenna 1114 with the transmission/reception circuit unit 1163, further subjects this to frequency conversion processing and analog/digital conversion processing, subjects this to inverse spectrum spread processing at the modulation/demodulation circuit unit 1158, and converts this into an analog audio signal with the audio codec 1159. The cell phone 1100 outputs the analog audio signal obtained by this conversion from the speaker 1117.
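These transmit and receive chains are mutually inverse pipelines. The following rough sketch, in Python, models only the digital portion; the spectrum spread processing is reduced to an XOR with a pseudo-noise sequence, the analog stages (D/A, frequency conversion, amplification) are omitted, and all function names are illustrative assumptions rather than parts of the actual device.

```python
import numpy as np

def spread(data: np.ndarray, pn: np.ndarray) -> np.ndarray:
    # Stand-in for the spectrum spread processing performed at the
    # modulation/demodulation circuit unit 1158; real spreading multiplies
    # the signal by a high-rate pseudo-noise code, which XOR approximates
    # for binary data.
    return np.bitwise_xor(data, pn[: data.size])

# Transmit side: audio codec 1159 (A/D), then spreading, then D/A and
# frequency conversion at the transmission/reception circuit unit 1163
# (omitted here).
def transmit_chain(audio: np.ndarray, pn: np.ndarray) -> np.ndarray:
    digital = audio.astype(np.int16)
    return spread(digital, pn)

# Receive side: amplification, frequency conversion and A/D (omitted),
# then inverse spreading; the audio codec 1159 (D/A) then restores the
# analog audio signal.
def receive_chain(received: np.ndarray, pn: np.ndarray) -> np.ndarray:
    return spread(received, pn)  # XOR is its own inverse

# Round trip: the despread signal equals the original digital audio data.
pn = np.random.default_rng(0).integers(0, 2**15, size=1024, dtype=np.int16)
audio = np.array([100, -200, 300], dtype=np.int16)
assert np.array_equal(receive_chain(transmit_chain(audio, pn), pn), audio)
```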
[0304] Also, for example, in the event of transmitting e-mail in the data communication mode, the cell phone 1100 accepts the text data of the e-mail input by operation of the operating keys 1119 at the operation input control unit 1152. The cell phone 1100 processes this text data at the main control unit 1150, and displays it on the liquid crystal display 1118 via the LCD control unit 1155 as an image.
[0305] Also, the cell phone 1100 generates e-mail data at the main control unit 1150 based on the text data accepted by the operation input control unit 1152, user instructions, and so forth. The cell phone 1100 subjects this e-mail data to spectrum spread processing at the modulation/demodulation circuit unit 1158, and subjects this to digital/analog conversion processing and frequency conversion processing at the transmission/reception circuit unit 1163. The cell phone 1100 transmits the signal for transmission obtained by this conversion processing to an unshown base station via the antenna 1114. The signal for transmission (e-mail) transmitted to the base station is supplied to a predetermined destination via the network, a mail server, and so forth.
[0306] Also, for example, in the event of receiving e-mail in the data communication mode, the cell phone 1100 receives the signal transmitted from the base station via the antenna 1114 with the transmission/reception circuit unit 1163, amplifies this, and further subjects this to frequency conversion processing and analog/digital conversion processing. The cell phone 1100 subjects this reception signal to inverse spectrum spread processing at the modulation/demodulation circuit unit 1158 to restore the original e-mail data. The cell phone 1100 displays the restored e-mail data on the liquid crystal display 1118 via the LCD control unit 1155.
[0307] Note that the cell phone 1100 can also record (store) the received e-mail data in the storage unit 1123 via the recording/playback unit 1162.
[0308] This storage unit 1123 is an optional rewritable recording medium. The storage unit 1123 may be semiconductor memory such as RAM, built-in flash memory, or the like, may be a hard disk, or may be a removable medium such as a magnetic disk, a magneto-optical disk, an optical disc, USB memory, a memory card, or the like. Needless to say, the storage unit 1123 may be other than these.
[0309] Further, for example, in the event of transmitting image data in the data communication mode, the cell phone 1100 generates image data by imaging with the CCD camera 1116. The CCD camera 1116 includes optical devices such as a lens, diaphragm, and so forth, and a CCD serving as a photoelectric conversion device, which images a subject, converts the intensity of received light into an electrical signal, and generates the image data of an image of the subject. This image data is subjected to compression encoding at the image encoder 1153 via the camera I/F unit 1154, and thereby converted into encoded image data.
[0310] The cell phone 1100 employs the above-mentioned image encoding device 100 or image encoding device 300 as the image encoder 1153 that performs such processing. Accordingly, in the same way as with the image encoding device 100 or the image encoding device 300, the image encoder 1153 can suppress deterioration of effects due to local control of filter processing.
[0311] Note that, at this time, the cell phone 1100 simultaneously converts the audio collected at the microphone (mic) 1121 while shooting with the CCD camera 1116 from analog to digital at the audio codec 1159, and further encodes this.
[0312] The cell phone 1100 multiplexes the encoded image data supplied from the image encoder 1153 and the digital audio data supplied from the audio codec 1159 at the multiplexing/separating unit 1157 using a predetermined method. The cell phone 1100 subjects the multiplexed data obtained as a result thereof to spectrum spread processing at the modulation/demodulation circuit unit 1158, and subjects this to digital/analog conversion processing and frequency conversion processing at the transmission/reception circuit unit 1163. The cell phone 1100 transmits the signal for transmission obtained by this conversion processing to an unshown base station via the antenna 1114. The signal for transmission (image data) transmitted to the base station is supplied to the other party of communication via the network or the like.
[0313] Note that in the event that image data is not transmitted, the cell phone 1100 can also display the image data generated at the CCD camera 1116 on the liquid crystal display 1118 via the LCD control unit 1155 instead of via the image encoder 1153.
[0314] Also, for example, in the event of receiving the data of a moving image file linked to a simple website or the like in the data communication mode, the cell phone 1100 receives the signal transmitted from the base station at the transmission/reception circuit unit 1163 via the antenna 1114, amplifies this, and further subjects this to frequency conversion processing and analog/digital conversion processing. The cell phone 1100 subjects the received signal to inverse spectrum spread processing at the modulation/demodulation circuit unit 1158 to restore the original multiplexed data. The cell phone 1100 separates this multiplexed data at the multiplexing/separating unit 1157 into encoded image data and audio data.
[0315] The cell phone 1100 decodes the encoded image data at the image decoder 1156 using a decoding format corresponding to a predetermined encoding format such as MPEG2, MPEG4, or the like, thereby generating playback moving image data, and displays this on the liquid crystal display 1118 via the LCD control unit 1155. Thus, the moving image data included in a moving image file linked to a simple website is displayed on the liquid crystal display 1118, for example.
[0316] The cell phone 1100 employs the above-mentioned image decoding device 200 or image decoding device 400 as the image decoder 1156 that performs such processing. Accordingly, in the same way as with the image decoding device 200 or the image decoding device 400, the image decoder 1156 extracts and decodes the control information supplied from the image encoding device 100 or the image encoding device 300, and performs adaptive filter control processing (and filter processing) using this control information. Thus, the image decoder 1156 can suppress deterioration of effects due to local control of filter processing.
[0317] At this time, the cell phone 1100 simultaneously converts the digital audio data into an analog audio signal at the audio codec 1159, and outputs this from the speaker 1117. Thus, the audio data included in a moving image file linked to a simple website is played, for example.
[0318] Note that, in the same way as with the case of e-mail, the cell phone 1100 can also record (store) the received data linked to a simple website or the like in the storage unit 1123 via the recording/playback unit 1162.
[0319] Also, the cell phone 1100 analyzes, at the main control unit 1150, a two-dimensional code imaged and obtained by the CCD camera 1116, and thereby information recorded in the two-dimensional code can be obtained.
[0320] Further, the cell phone 1100 can communicate with an external device via infrared rays at an infrared communication unit 1181.
[0321] The cell phone 1100 employs the image encoding device 100 or the image encoding device 300 as the image encoder 1153, and thereby deterioration of effects due to local control of filter processing can be suppressed with regard to the encoded data generated by encoding the image data generated at the CCD camera 1116, for example.
[0322] For example, the cell phone 1100 can improve the image quality of filter processing results by performing filter processing across slices, and can supply encoded data with higher image quality to other cell phones. Also, for example, by performing filter processing closed at the current slice, the cell phone 1100 can perform filter processing with low delay, and can supply encoded data to other cell phones with lower delay.
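The choice between these two behaviors is what the border control flag conveys. A minimal sketch of such an encoder-side policy follows; the function name and the mapping of flag values to behaviors are illustrative assumptions, not definitions fixed by this specification.

```python
def choose_border_control_flag(low_delay_required: bool) -> int:
    """Hypothetical encoder-side policy for the border control flag.

    Value 0 (assumed): perform filter processing across slice borders,
    using neighboring-slice pixels once they become available; this
    yields better image quality, but the filter must wait for the
    neighboring slice.
    Value 1 (assumed): close the filter processing at the current slice,
    padding in place of neighboring-slice pixels; this allows filter
    processing with low delay.
    """
    return 1 if low_delay_required else 0

flag = choose_border_control_flag(low_delay_required=True)  # -> 1
```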
[0323] Also, the cell phone 1100 employs the image decoding device 200 or the image decoding device 400 as the image decoder 1156, and thereby deterioration of effects due to local control of filter processing can be suppressed with regard to the data of a moving image file linked to a simple website or the like, for example.
[0324] For example, the cell phone 1100 can improve the image quality of filter processing results by performing filter processing across slices, and can realize high image quality of decoded images. Also, for example, by performing filter processing closed at the current slice, the cell phone 1100 can perform filter processing with low delay, and can decode encoded data with lower delay.
[0325] Note that description has been made so far wherein the cell phone 1100 employs the CCD camera 1116, but the cell phone 1100 may employ an image sensor (CMOS image sensor) using CMOS (Complementary Metal Oxide Semiconductor) instead of the CCD camera 1116. In this case as well, the cell phone 1100 can image a subject and generate the image data of an image of the subject in the same way as with the case of employing the CCD camera 1116.
[0326] Also, while description has been made so far regarding the cell phone 1100, the image encoding device 100 and the image decoding device 200 can be applied to any kind of device in the same way as with the case of the cell phone 1100, as long as it is a device having the same imaging function and communication function as those of the cell phone 1100, such as a PDA (Personal Digital Assistant), smart phone, UMPC (Ultra Mobile Personal Computer), net book, notebook-sized personal computer, or the like, for example. <8. Eighth Embodiment> [Hard disk recorder]
[0327] Fig. 26 is a block diagram illustrating a principal configuration example of a hard disk recorder employing the image encoding device and image decoding device to which the present invention has been applied.
[0328] A hard disk recorder (HDD recorder) 1200 shown in Fig. 26 is a device that stores, on a built-in hard disk, the audio data and video data of a broadcast program included in broadcast wave signals (television signals) received by a tuner and transmitted from a satellite, a terrestrial antenna, or the like, and provides the stored data to the user at timing according to the user's instructions.
[0329] The hard disk recorder 1200 can extract audio data and video data from broadcast wave signals, decode these as appropriate, and store these on the built-in hard disk, for example. Also, the hard disk recorder 1200 can obtain audio data and video data from another device via the network, decode these as appropriate, and store these on the built-in hard disk, for example.
[0330] Further, the hard disk recorder 1200 can decode audio data and video data recorded on the built-in hard disk, supply these to a monitor 1260, display the image thereof on the screen of the monitor 1260, and output the audio thereof from the speaker of the monitor 1260, for example. Also, the hard disk recorder 1200 can decode audio data and video data extracted from broadcast wave signals obtained via the tuner, or audio data and video data obtained from another device via the network, supply these to the monitor 1260, display the image thereof on the screen of the monitor 1260, and output the audio thereof from the speaker of the monitor 1260, for example.
[0331] Of course, operations other than these can be performed.
[0332] As shown in Fig. 26, the hard disk recorder 1200 includes a receiving unit 1221, a demodulation unit 1222, a demultiplexer 1223, an audio decoder 1224, a video decoder 1225, and a recorder control unit 1226. The hard disk recorder 1200 further includes EPG data memory 1227, program memory 1228, work memory 1229, a display converter 1230, an OSD (On-Screen Display) control unit 1231, a display control unit 1232, a recording/playback unit 1233, a D/A converter 1234, and a communication unit 1235.
[0333] Also, the display converter 1230 includes a video encoder 1241. The recording/playback unit 1233 includes an encoder 1251 and a decoder 1252.
[0334] The receiving unit 1221 receives an infrared signal from a remote control (not shown), converts this into an electrical signal, and outputs this to the recorder control unit 1226. The recorder control unit 1226 is configured of, for example, a microprocessor and so forth, and executes various types of processing in accordance with a program stored in the program memory 1228. At this time, the recorder control unit 1226 uses the work memory 1229 as necessary.
[0335] The communication unit 1235, which is connected to the network, performs communication processing with another device via the network. For example, the communication unit 1235 is controlled by the recorder control unit 1226 to communicate with a tuner (not shown), and mainly outputs a channel selection control signal to the tuner.
[0336] The demodulation unit 1222 demodulates the signal supplied from the tuner, and outputs this to the demultiplexer 1223. The demultiplexer 1223 separates the data supplied from the demodulation unit 1222 into audio data, video data, and EPG data, and outputs these to the audio decoder 1224, the video decoder 1225, and the recorder control unit 1226, respectively.
[0337] The audio decoder 1224 decodes the input audio data, and outputs this to the recording/playback unit 1233. The video decoder 1225 decodes the input video data, and outputs this to the display converter 1230. The recorder control unit 1226 supplies the input EPG data to the EPG data memory 1227 for storage.
[0338] The display converter 1230 encodes the video data supplied from the video decoder 1225 or the recorder control unit 1226 into, for example, video data conforming to the NTSC (National Television System Committee) format using the video encoder 1241, and outputs this to the recording/playback unit 1233. Also, the display converter 1230 converts the screen size of the video data supplied from the video decoder 1225 or the recorder control unit 1226 into a size corresponding to the size of the monitor 1260, converts this into video data conforming to the NTSC format using the video encoder 1241, converts this into an analog signal, and outputs this to the display control unit 1232.
[0339] The display control unit 1232 superimposes, under the control of the recorder control unit 1226, the OSD signal output from the OSD (On-Screen Display) control unit 1231 on the video signal input from the display converter 1230, and outputs this to the display of the monitor 1260 for display.
[0340] Also, the audio data output from the audio decoder 1224 is converted into an analog signal using the D/A converter 1234, and supplied to the monitor 1260. The monitor 1260 outputs this audio signal from a built-in speaker.
[0341] The recording/playback unit 1233 includes a hard disk as a recording medium on which video data, audio data, and so forth are recorded.
[0342] The recording/playback unit 1233 encodes the audio data supplied from the audio decoder 1224 with the encoder 1251. Also, the recording/playback unit 1233 encodes the video data supplied from the video encoder 1241 of the display converter 1230 with the encoder 1251. The recording/playback unit 1233 synthesizes the encoded data of the audio data thereof and the encoded data of the video data thereof using a multiplexer. The recording/playback unit 1233 amplifies the synthesized data by channel coding, and writes the data thereof to the hard disk via a recording head.
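This recording path can be summarized as encode, multiplex, channel-code, write. The sketch below compresses it into placeholder functions; every function name and the length-prefix multiplexing scheme are illustrative assumptions only.

```python
def encode_stream(frames: list[bytes]) -> bytes:
    # Placeholder for the MPEG encoding performed by the encoder 1251.
    return b"".join(frames)

def multiplex(coded_audio: bytes, coded_video: bytes) -> bytes:
    # Placeholder for synthesizing the coded audio and video data; a
    # length prefix keeps the two streams separable again on playback.
    return len(coded_audio).to_bytes(4, "big") + coded_audio + coded_video

def channel_code(muxed: bytes) -> bytes:
    # Placeholder for the channel coding applied before writing the data
    # to the hard disk via the recording head.
    return muxed

def record(audio_frames: list[bytes], video_frames: list[bytes]) -> bytes:
    return channel_code(multiplex(encode_stream(audio_frames),
                                  encode_stream(video_frames)))
```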
[0343] The recording/playback unit 1233 plays the data recorded on the hard disk via a playback head, amplifies this, and separates this into audio data and video data using a demultiplexer. The recording/playback unit 1233 decodes the audio data and video data with the decoder 1252 using the MPEG format. The recording/playback unit 1233 converts the decoded audio data from digital to analog, and outputs this to the speaker of the monitor 1260. Also, the recording/playback unit 1233 converts the decoded video data from digital to analog, and outputs this to the display of the monitor 1260.
[0344] The recorder control unit 1226 reads the latest EPG data out of the EPG data memory 1227 based on user instructions indicated by the infrared signal from the remote control received via the receiving unit 1221, and supplies this to the OSD control unit 1231. The OSD control unit 1231 generates image data corresponding to the input EPG data, and outputs this to the display control unit 1232. The display control unit 1232 outputs the video data input from the OSD control unit 1231 to the display of the monitor 1260 for display. Thus, an EPG (Electronic Program Guide) is displayed on the display of the monitor 1260.
[0345] Also, the hard disk recorder 1200 can obtain various types of data such as video data, audio data, EPG data, and so forth supplied from another device via a network such as the Internet or the like.
[0346] The communication unit 1235 is controlled by the recorder control unit 1226 to obtain encoded data such as video data, audio data, EPG data, and so forth transmitted from another device via the network, and to supply these to the recorder control unit 1226. The recorder control unit 1226 supplies the obtained encoded data of video data and audio data to the recording/playback unit 1233, and stores these on the hard disk, for example. At this time, the recorder control unit 1226 and the recording/playback unit 1233 may perform processing such as re-encoding or the like as necessary.
[0347] Also, the recorder control unit 1226 decodes the obtained encoded data of video data and audio data, and supplies the obtained video data to the display converter 1230. The display converter 1230 processes, in the same way as with the video data supplied from the video decoder 1225, the video data supplied from the recorder control unit 1226, and supplies this to the monitor 1260 via the display control unit 1232 to display the image thereof.
[0348] Alternatively, an arrangement may be made wherein, in accordance with this image display, the recorder control unit 1226 supplies the decoded audio data to the monitor 1260 via the D/A converter 1234, and outputs the audio thereof from the speaker.
[0349] Further, the recorder control unit 1226 decodes the obtained encoded data of the EPG data, and supplies the decoded EPG data to the EPG data memory 1227.
[0350] The hard disk recorder 1200 thus configured employs the image decoding device 200 or image decoding device 400 as the video decoder 1225, the decoder 1252, and the decoder housed in the recorder control unit 1226. Accordingly, in the same way as with the image decoding device 200 or the image decoding device 400, the video decoder 1225, the decoder 1252, and the decoder housed in the recorder control unit 1226 extract and decode the control information supplied from the image encoding device 100 or the image encoding device 300, and perform adaptive filter control processing (and filter processing) using this control information. Accordingly, the video decoder 1225, the decoder 1252, and the decoder housed in the recorder control unit 1226 can suppress deterioration of effects due to local control of filter processing.
[0351] Accordingly, the hard disk recorder 1200 can suppress deterioration of effects due to local control of filter processing with regard to the video data received via the tuner or the communication unit 1235, and the video data recorded on the hard disk of the recording/playback unit 1233, for example.
[0352] For example, the hard disk recorder 1200 can improve the image quality of filter processing results by performing filter processing across slices, and can realize high image quality of decoded images. Also, for example, by performing filter processing closed at the current slice, the hard disk recorder 1200 can perform filter processing with low delay, and can decode encoded data with low delay.
[0353] Further, the hard disk recorder 1200 employs the image encoding device 100 or the image encoding device 300 as the encoder 1251. Accordingly, in the same way as with the case of the image encoding device 100 or the image encoding device 300, the encoder 1251 can suppress deterioration of effects due to local control of filter processing.
[0354] Accordingly, the hard disk recorder 1200 can suppress deterioration of effects due to local control of filter processing with regard to the encoded data recorded on the hard disk, for example.
[0355] For example, the hard disk recorder 1200 can improve the image quality of filter processing results by performing filter processing across slices, and can record encoded data with higher image quality on the hard disk. Also, for example, by performing filter processing closed at the current slice, the hard disk recorder 1200 can perform filter processing with low delay, and can generate encoded data and write this to the hard disk with lower delay.
[0356] Note that description has been made so far regarding the hard disk recorder 1200 that records video data and audio data on a hard disk, but it goes without saying that any kind of recording medium may be used. For example, even with a recorder to which a recording medium other than a hard disk, such as flash memory, an optical disc, videotape, or the like, is applied, the image encoding device 100 and the image decoding device 200 can be applied thereto in the same way as with the case of the hard disk recorder 1200 described above. <9. Ninth Embodiment> [Camera]
[0357] Fig. 27 is a block diagram illustrating a principal configuration example of a camera employing the image encoding device and image decoding device to which the present invention has been applied.
[0358] The camera 1300 shown in Fig. 27 images a subject, displays the image of the subject on an LCD 1316, and records this on a recording medium 1333 as image data.
[0359] A lens block 1311 inputs light (i.e., the image of the subject) to a CCD/CMOS 1312. The CCD/CMOS 1312 is an image sensor employing a CCD or CMOS, which converts the intensity of received light into an electrical signal, and supplies this to a camera signal processing unit 1313.
[0360] The camera signal processing unit 1313 converts the electrical signal supplied from the CCD/CMOS 1312 into Y, Cr, and Cb color difference signals, and supplies these to an image signal processing unit 1314. The image signal processing unit 1314 subjects, under the control of a controller 1321, the image signal supplied from the camera signal processing unit 1313 to predetermined image processing, or encodes this image signal with an encoder 1341 using the MPEG format, for example. The image signal processing unit 1314 supplies the encoded data generated by encoding the image signal to a decoder 1315. Further, the image signal processing unit 1314 obtains data for display generated at an on-screen display (OSD) 1320, and supplies this to the decoder 1315.
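As an aside, the conversion into Y, Cr, and Cb color difference signals mentioned above is conventionally of the following form; the sketch assumes the full-range ITU-R BT.601 coefficients for 8-bit RGB input, since the exact matrix used by the camera signal processing unit 1313 is not specified here.

```python
def rgb_to_ycbcr_bt601(r: float, g: float, b: float) -> tuple[float, float, float]:
    # Luma (Y) and color difference signals (Cb, Cr) for 8-bit full-range
    # RGB, per ITU-R BT.601; a conversion of this general kind is what the
    # camera signal processing unit 1313 performs.
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# A pure white pixel maps to maximum luma and neutral color difference.
y, cb, cr = rgb_to_ycbcr_bt601(255.0, 255.0, 255.0)
assert abs(y - 255.0) < 1e-9 and abs(cb - 128.0) < 1e-9 and abs(cr - 128.0) < 1e-9
```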
[0361] In the above processing, the camera signal processing unit 1313 appropriately takes advantage of DRAM (Dynamic Random Access Memory) 1318 connected via a bus 1317, and holds image data, encoded data obtained by encoding that image data, and so forth in the DRAM 1318 as necessary.
[0362] The decoder 1315 decodes the encoded data supplied from the image signal processing unit 1314, and supplies the obtained image data (decoded image data) to the LCD 1316. Also, the decoder 1315 supplies the data for display supplied from the image signal processing unit 1314 to the LCD 1316. The LCD 1316 synthesizes the image of the decoded image data and the image of the data for display supplied from the decoder 1315 as appropriate, and displays the synthesized image thereof.
[0363] The on-screen display 1320 outputs, under the control of the controller 1321, data for display such as a menu screen, icons, or the like made up of symbols, characters, or figures, to the image signal processing unit 1314 via the bus 1317.
[0364] Based on a signal indicating the content commanded by the user using an operating unit 1322, the controller 1321 executes various types of processing, and also controls the image signal processing unit 1314, the DRAM 1318, an external interface 1319, the on-screen display 1320, a media drive 1323, and so forth via the bus 1317. A program, data, and so forth necessary for the controller 1321 to execute various types of processing are stored in FLASH ROM 1324.
[0365] For example, the controller 1321 can encode image data stored in the DRAM 1318, or decode encoded data stored in the DRAM 1318, instead of the image signal processing unit 1314 and the decoder 1315. At this time, the controller 1321 may perform encoding and decoding processing using the same format as the encoding and decoding format of the image signal processing unit 1314 and the decoder 1315, or may perform encoding and decoding processing using a format that neither the image signal processing unit 1314 nor the decoder 1315 can handle.
[0366] Also, for example, in the event that starting of image printing has been instructed from the operating unit 1322, the controller 1321 reads image data out of the DRAM 1318, and supplies this to a printer 1334 connected to the external interface 1319 via the bus 1317, for printing.
[0367] Further, for example, in the event that image recording has been instructed from the operating unit 1322, the controller 1321 reads encoded data out of the DRAM 1318, and supplies this to the recording medium 1333 mounted on the media drive 1323 via the bus 1317, for storage.
[0368] The recording medium 1333 is an optional readable/writable removable medium, for example, such as a magnetic disk, a magneto-optical disk, an optical disc, semiconductor memory, or the like. Needless to say, the type of removable medium of the recording medium 1333 is also optional, and accordingly it may be a tape device, may be a disc, or may be a memory card. Of course, the recording medium 1333 may also be a non-contact IC card or the like.
[0369] Alternatively, the media drive 1323 and the recording medium 1333 may be configured so as to be integrated into a non-portable recording medium, for example, such as a built-in hard disk drive, an SSD (Solid State Drive), or the like.
[0370] The external interface 1319 is configured of, for example, a USB input/output terminal and so forth, and is connected to the printer 1334 in the event of performing printing of an image. Also, a drive 1331 is connected to the external interface 1319 as necessary, on which a removable medium 1332 such as a magnetic disk, an optical disc, or a magneto-optical disk is mounted as appropriate, and a computer program read out therefrom is installed in the FLASH ROM 1324 as necessary.
[0371] Further, the external interface 1319 includes a network interface to be connected to a predetermined network such as a LAN, the Internet, or the like. For example, in accordance with instructions from the operating unit 1322, the controller 1321 can read encoded data out of the DRAM 1318, and supply this from the external interface 1319 to another device connected via the network. Also, the controller 1321 can obtain, via the external interface 1319, encoded data or image data supplied from another device via the network, and hold this in the DRAM 1318, or supply this to the image signal processing unit 1314.
[0372] The camera 1300 thus configured employs the image decoding device 200 or the image decoding device 400 as the decoder 1315. Accordingly, in the same way as with the image decoding device 200 or the image decoding device 400, the decoder 1315 extracts and decodes the control information supplied from the image encoding device 100 or the image encoding device 300, and performs adaptive filter control processing (and filter processing) using this control information. Accordingly, the decoder 1315 can suppress deterioration of effects due to local control of filter processing.
[0373] Accordingly, the camera 1300 can suppress deterioration of effects due to local control of filter processing with regard to, for example, the image data generated at the CCD/CMOS 1312, the encoded data of video data read out of the DRAM 1318 or the recording medium 1333, and the encoded data of video data obtained via the network.
[0374] For example, the camera 1300 can improve the image quality of filter processing results by performing filter processing across slices, and can realize high image quality of decoded images. Also, for example, by performing filter processing closed at the current slice, the camera 1300 can perform filter processing with low delay, and can decode encoded data with low delay.
[0375] Further, the camera 1300 employs the image encoding device 100 or the image encoding device 300 as the encoder 1341. Accordingly, in the same way as with the case of the image encoding device 100 or the image encoding device 300, the encoder 1341 can suppress deterioration of effects due to local control of filter processing.
[0376] Accordingly, the camera 1300 can suppress deterioration of effects due to local control of filter processing with regard to the encoded data recorded in the DRAM 1318 or the recording medium 1333, or the encoded data to be supplied to other devices, for example.
[0377] For example, the camera 1300 can improve the image quality of filter processing results by performing filter processing across slices, and can record encoded data with higher image quality in the DRAM 1318 or the recording medium 1333, or supply this to other devices. Also, for example, by performing filter processing closed at the current slice, the camera 1300 can perform filter processing with low delay, and can generate encoded data and write this to the DRAM 1318 or the recording medium 1333, or supply this to other devices, with lower delay.
[0378] Note that the decoding method of the image decoding device 200 or the image decoding device 400 may be applied to the decoding processing that the controller 1321 performs. In the same way, the encoding method of the image encoding device 100 or the image encoding device 300 may be applied to the encoding processing that the controller 1321 performs.
[0379] Also, the image data that the camera 1300 images may be moving images or may be still images.
[0380] As a matter of course, the image encoding device 100, image decoding device 200, image encoding device 300, and image decoding device 400 can be applied to devices or systems other than the devices described above.
[0381] Also, the size of macroblocks is not restricted to 16 x 16 pixels. Application can be made to macroblocks of various sizes, such as the 32 x 32 pixels shown in Fig. 28, for example.
[0382] While description has been made above with flag information and the like being multiplexed (described) in the bit stream, flags and image data (or bit stream) may be transmitted (recorded) besides being multiplexed, for example. A form may also be made wherein the flag and the image data (or bit stream) are linked (added) with each other.
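The following hypothetical sketch illustrates one such form, wherein the flag and the bit stream travel on separate paths (modeled here as two dictionaries) and are linked only by a shared frame number; the notion of linking is defined in the next paragraph.

```python
# Hypothetical illustration: the border control flag and the bit stream are
# recorded on separate "media" (two dictionaries here), and remain linked
# only through a shared frame number.
flag_store: dict[int, int] = {}
stream_store: dict[int, bytes] = {}

def record_frame(frame_no: int, bit_stream: bytes, border_control_flag: int) -> None:
    stream_store[frame_no] = bit_stream          # e.g., one recording area / path
    flag_store[frame_no] = border_control_flag   # e.g., a separate area / path

def read_frame(frame_no: int) -> tuple[bytes, int]:
    # The physical positional relationship is arbitrary; the correlation
    # via frame_no is what keeps the flag and the bit stream linked.
    return stream_store[frame_no], flag_store[frame_no]

record_frame(0, b"\x00\x01\x02", border_control_flag=1)
assert read_frame(0) == (b"\x00\x01\x02", 1)
```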
[0383] Linking (adding) indicates a state in which image data (or bit streams) and flags are mutually linked (a correlated state), and the physical positional relationship thereof is arbitrary. For example, the image data (or bit stream) and the flags may be transmitted over separate transmission paths. Also, the image data (or bit stream) and the flags may each be recorded on separate recording media (or in separate recording areas within the same recording medium). Note that the increments in which image data (or bit streams) and flags are linked are optional, and may be set in increments of encoding processing (one frame, multiple frames, etc.), for example.
List of Reference Signs
100 image encoding device
112 control information generation unit
113 adaptive filter processing unit
132 border control flag generation unit
141 system specification management unit
142 determination unit
161 pixel to be processed
162 surrounding pixels
163 slice border
171 control unit
172 adaptive filter
173 selection unit
181 temporary storage memory area
182 adaptive slice filter
183 first adaptive border filter
184 second adaptive border filter
200 image decoding device
202 lossless decoding unit
207 adaptive filter processing unit
300 image encoding device
301 image encoding unit
302 input unit
303 communication unit
304 information collection unit
400 image decoding device
401 image decoding unit
402 information supply unit
403 communication unit
Claims (12)
[0001]
An image encoding device, CHARACTERIZED by comprising: circuitry configured to: obtain a position of an image block based on a block size of the image block; generate a filter block flag configured to control whether or not filter processing is performed on the image block; generate a border control flag configured to control whether the filter processing performed on a target pixel, which is adjacent to a slice border separating the slice that includes said target pixel from a neighboring slice and which is provided within the image block, uses pixels of said neighboring slice after the pixels of said neighboring slice become available, or whether the filter processing performed on the target pixel uses pixels of the slice only, the pixels to be used being surrounding pixels that surround said target pixel; generate block size information for the image block; perform the filter processing on the image block at the position in accordance with the filter block flag, the border control flag, and the block size information; and encode the image block on which the filter processing has been performed, and the filter block flag, the border control flag, and the block size information, thus generating encoded data; wherein, in the event that the border control flag indicates to use the pixels of the slice only, said filter processing for said target pixel to be processed is performed assuming a padding value for each of the surrounding pixels that are in said neighboring slice, whereby the assumed padding value is based on pixels of the slice only.
[0002]
The image encoding device according to claim 1, CHARACTERIZED by the fact that the border control flag is included in a slice header of a slice including the image block.
[0003]
The image encoding device according to claim 1 or 2, CHARACTERIZED by the fact that it further comprises: a CCD imaging sensor.
[0004]
The image encoding device according to claim 3, CHARACTERIZED by the fact that it further comprises: a camera interface unit implemented by the circuitry and configured to interface between the CCD imaging sensor and a communication path.
[0005]
The image encoding device according to claim 3 or 4, CHARACTERIZED by the fact that it further comprises: an operation key user interface.
[0006]
The image encoding device according to claim 5, CHARACTERIZED by the fact that it further comprises: an operation input control unit implemented by the circuitry and configured to interface between the operation key user interface and a communication path.
[0007]
The image encoding device according to any one of claims 1 to 6, CHARACTERIZED by the fact that it further comprises: an antenna.
[0008]
The image encoding device according to claim 7, CHARACTERIZED by the fact that it further comprises: a modulation unit implemented by the circuitry and configured to receive an input from a communication path and to subject the input to spectrum spread processing.
[0009]
The image encoding device according to claim 8, CHARACTERIZED by the fact that it further comprises: a transmission unit implemented by the circuitry and configured to receive an output of the modulation unit, to subject the output to digital/analog conversion processing and frequency conversion processing, and to transmit the output to the antenna.
[0010]
The image encoding device according to any one of claims 7 to 9, CHARACTERIZED by the fact that it further comprises: a reception unit implemented by the circuitry and configured to receive an input from the antenna and to subject the input to analog/digital conversion processing and frequency conversion processing.
[0011]
The image encoding device according to claim 10, CHARACTERIZED by the fact that it further comprises: a demodulation unit implemented by the circuitry and configured to receive an output of the reception unit, to subject the output to inverse spectrum spread processing, and to transmit the output to a communication path.
[0012]
An image encoding method, CHARACTERIZED by comprising: obtaining a position of an image block based on a block size of the image block; generating a filter block flag to control whether or not filter processing is performed on the image block; generating a border control flag to control whether the filter processing performed on a target pixel, which is adjacent to a slice border separating the slice that includes said target pixel from a neighboring slice and which is provided within the image block, uses pixels of said neighboring slice after the pixels of said neighboring slice become available, or whether the filter processing performed on the target pixel uses pixels of the slice only, the pixels to be used being surrounding pixels that surround said target pixel; generating block size information for the image block; performing the filter processing on the image block at the position in accordance with the filter block flag, the border control flag, and the block size information; and encoding the image block on which the filter processing has been performed, and the filter block flag, the border control flag, and the block size information, thus generating encoded data; wherein, in the event that the border control flag indicates to use the pixels of the slice only, said filter processing for said target pixel to be processed is performed assuming a padding value for each of the surrounding pixels that are in said neighboring slice, whereby the assumed padding value is based on pixels of the slice only.
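To make the claimed filtering behavior concrete, the following sketch applies a simple 3 x 3 mean filter to a target pixel adjacent to a slice border. When the border control flag selects in-slice-only filtering, each surrounding pixel falling in the neighboring slice is replaced by an assumed padding value derived from the slice itself (here, simply the target pixel's own value). The filter shape, the flag-value semantics, and the padding rule are illustrative assumptions, not the normative definitions of the claims.

```python
import numpy as np

def filter_target_pixel(image: np.ndarray, slice_id: np.ndarray,
                        y: int, x: int, border_control_flag: int) -> float:
    """Illustrative 3x3 mean filter at a target pixel (y, x) near a slice border.

    Assumed semantics: flag == 1 -> use pixels of the slice only, padding
    in place of neighboring-slice pixels; flag == 0 -> use neighboring-slice
    pixels as-is (i.e., once they have become available).
    """
    current = slice_id[y, x]
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny = min(max(y + dy, 0), image.shape[0] - 1)  # clip at picture edges
            nx = min(max(x + dx, 0), image.shape[1] - 1)
            if border_control_flag == 1 and slice_id[ny, nx] != current:
                # Padding value derived from the slice itself; here simply
                # the target pixel's own value.
                total += float(image[y, x])
            else:
                total += float(image[ny, nx])
    return total / 9.0

# Two-slice picture: rows 0-1 belong to slice 0, rows 2-3 to slice 1.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
sid = np.array([[0] * 4, [0] * 4, [1] * 4, [1] * 4])
in_slice_only = filter_target_pixel(img, sid, 1, 1, border_control_flag=1)
across_slices = filter_target_pixel(img, sid, 1, 1, border_control_flag=0)
```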