Method of decoding an image
Patent Abstract:
METHOD OF DETERMINING AN INTRA PREDICTION MODE OF A CODING UNIT OF A CURRENT IMAGE, EQUIPMENT FOR DETERMINING AN INTRA PREDICTION MODE OF A CODING UNIT OF A CURRENT IMAGE, METHOD OF DETERMINING AN INTRA PREDICTION MODE OF A DECODING UNIT OF A CURRENT IMAGE, AND EQUIPMENT TO DECODE AN IMAGE. A method and apparatus for determining an intra prediction mode of an encoding unit. Candidate intra prediction modes of a chrominance component encoding unit, which include an intra prediction mode of a luminance component encoding unit, are determined, and costs of the chrominance component encoding unit according to the determined candidate modes are compared to determine a minimum-cost intra prediction mode to be the intra prediction mode of the chrominance component encoding unit.
Publication number: BR112012025310B1
Application number: R112012025310-7
Filing date: 2011-04-05
Publication date: 2022-02-15
Inventors: Jung-hye MIN; Elena Alshina; Woo-jin Han
Applicant: Samsung Electronics Co., Ltd
IPC main class:
Patent Description:
Technical Field
[0001] Exemplary embodiments relate to encoding and decoding an image, and more particularly, to methods and equipment for encoding and decoding an image, in which intra prediction is performed on a chrominance component encoding unit by applying an intra prediction mode determined for a luminance component encoding unit.
Background Art
[0002] In an image compression method such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4, or H.264/MPEG-4 Advanced Video Coding (AVC), a picture is divided into macroblocks to encode an image. Each of the macroblocks is encoded in all encoding modes that can be used in inter prediction or intra prediction, and is then encoded in an encoding mode that is selected according to a bit rate for encoding the macroblock and a degree of distortion of the decoded macroblock relative to the original macroblock.
[0003] As hardware for reproducing and storing high-resolution or high-quality video content is developed and supplied, there is a growing need for a video codec capable of effectively encoding or decoding high-resolution or high-quality video content. In a conventional video codec, a video is encoded in units of macroblocks, each having a predetermined size.
Disclosure of Invention
Technical Problem
[0004] In a conventional video codec, a video is encoded in units of macroblocks, each having a predetermined size. Also, in a conventional video codec, the directivity of the intra mode is limited.
Solution to the Problem
[0005] Exemplary embodiments include a method of determining an intra prediction mode of a luminance component encoding unit that has diverse directionality based on hierarchical encoding units having various sizes, and methods and equipment for encoding and decoding an image, in which intra prediction is performed on a chrominance component encoding unit according to candidate intra prediction modes including an intra prediction mode determined for a luminance component encoding unit.
Advantageous Effects of the Invention
[0006] According to exemplary embodiments, by adding the intra prediction mode of the luminance component encoding unit having diverse directionality as a candidate intra prediction mode of the chrominance component encoding unit, the prediction efficiency of an image of a chrominance component and the prediction efficiency of an entire image can be increased without having to increase throughput.
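To make the mode decision of [0005]-[0006] concrete, the following is a minimal Python sketch of choosing the chrominance intra prediction mode from a candidate set augmented with the mode already determined for the corresponding luminance encoding unit. The candidate list, the `predict` helper, and the cost measure are illustrative assumptions, not the patent's literal procedure.

```python
# Minimal sketch (assumptions noted above): pick the chrominance intra mode
# by comparing costs over the candidate modes plus the luminance mode.

def sad(block, prediction):
    """Sum of absolute differences: a simple stand-in for the cost measure."""
    return sum(abs(b - p) for row_b, row_p in zip(block, prediction)
               for b, p in zip(row_b, row_p))

def determine_chroma_mode(chroma_block, luma_mode, predict):
    """predict(block, mode) -> predicted block; hypothetical helper."""
    candidates = ["DC", "HORIZONTAL", "VERTICAL", "PLANAR"]  # assumed base set
    if luma_mode not in candidates:
        candidates.append(luma_mode)  # key idea: reuse the luminance mode
    costs = {mode: sad(chroma_block, predict(chroma_block, mode))
             for mode in candidates}
    return min(costs, key=costs.get)  # minimum-cost candidate wins
```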
Brief Description of Drawings
[0007] Figure 1 is a block diagram of equipment for encoding a video, according to an exemplary embodiment;
[0008] Figure 2 is a block diagram of equipment for decoding a video, according to an exemplary embodiment;
[0009] Figure 3 is a diagram for describing a concept of encoding units, according to an exemplary embodiment;
[00010] Figure 4 is a block diagram of an image encoder based on encoding units, according to an exemplary embodiment of the present invention;
[00011] Figure 5 is a block diagram of an image decoder based on encoding units, according to an exemplary embodiment;
[00012] Figure 6 is a diagram illustrating deeper encoding units according to depths, and prediction units, according to an exemplary embodiment;
[00013] Figure 7 is a diagram for describing a relationship between an encoding unit and a transform unit, according to an exemplary embodiment;
[00014] Figure 8 is a diagram for describing encoding information of encoding units corresponding to an encoded depth, according to an exemplary embodiment;
[00015] Figure 9 is a diagram of deeper encoding units according to depths, according to an exemplary embodiment;
[00016] Figures 10A and 10B are diagrams for describing a relationship between encoding units, prediction units, and transform units, according to an exemplary embodiment;
[00017] Figure 11 is a table showing encoding information according to encoding units, according to an exemplary embodiment;
[00018] Figures 12A to 12C are diagrams of formats of a luminance component image and a chrominance component image, according to exemplary embodiments;
[00019] Figure 13 is a table showing a number of intra prediction modes according to the sizes of luminance component encoding units, according to an exemplary embodiment;
[00020] Figures 14A to 14C are diagrams for explaining intra prediction modes applied to a luminance component encoding unit having a predetermined size, according to an exemplary embodiment;
[00021] Figure 15 is a diagram for explaining intra prediction modes applied to a luminance component encoding unit having a predetermined size, according to an exemplary embodiment;
[00022] Figure 16 is a reference diagram for explaining intra prediction modes of a luminance component encoding unit having various directionalities, according to an exemplary embodiment;
[00023] Figure 17 is a reference diagram for explaining a bilinear mode, according to an exemplary embodiment;
[00024] Figure 18 is a diagram for explaining a process of generating a prediction value of an intra prediction mode of a current luminance component encoding unit, according to an exemplary embodiment;
[00025] Figure 19 is a reference diagram for explaining a process of mapping intra prediction modes between luminance component encoding units having different sizes, according to an exemplary embodiment;
[00026] Figure
20 is a reference diagram for explaining a process of mapping an intra prediction mode of a neighboring luminance component encoding unit to one of representative intra prediction modes, according to an exemplary embodiment;
[00027] Figure 21 is a diagram for explaining candidate intra prediction modes applied to a chrominance component encoding unit, according to an exemplary embodiment;
[00028] Figure 22 is a block diagram of intra prediction equipment, according to an exemplary embodiment;
[00029] Figure 23 is a flowchart illustrating a method of determining an intra prediction mode of an encoding unit, according to an exemplary embodiment; and
[00030] Figure 24 is a flowchart illustrating a method of determining an intra prediction mode of a decoding unit, according to an exemplary embodiment.
[00031] Figure 25 is a diagram for explaining a relationship between a current pixel and neighboring pixels located on an extended line having a directivity of (dx, dy);
[00032] Figure 26 is a diagram for explaining a change in a neighboring pixel located on an extended line having a directivity of (dx, dy) according to a location of a current pixel, according to an exemplary embodiment; and
[00033] Figures 27 and 28 are diagrams for explaining a method of determining an intra prediction mode direction, according to exemplary embodiments.
Best Mode for Carrying Out the Invention
[00034] According to one aspect of an exemplary embodiment, there is provided a method of determining an intra prediction mode of an encoding unit of a current image, the method comprising: dividing a luminance component of the current image into at least one luminance component encoding unit based on a maximum encoding unit, which is an encoding unit having a maximum size in which the current image is encoded, and a depth indicating hierarchical division information of the maximum encoding unit; determining an intra prediction mode of the at least one luminance component encoding unit; comparing costs of applying candidate intra prediction modes of a chrominance component encoding unit and the determined intra prediction mode of the at least one luminance component encoding unit to the chrominance component encoding unit; and determining, based on a comparison result, an intra prediction mode of the chrominance component encoding unit having a minimum cost from among the candidate intra prediction modes and the determined intra prediction mode of the at least one luminance component encoding unit.
[00035] According to an aspect of an exemplary embodiment, there is provided equipment for determining an intra prediction mode of an encoding unit of a current image, the equipment comprising: an intra luminance predictor which determines an intra prediction mode of a luminance component encoding unit that is divided from a maximum encoding unit, which is an encoding unit having a maximum size in which the current image is encoded, and a depth indicating hierarchical division information of the maximum encoding unit; and an intra chrominance predictor which compares costs of applying candidate intra prediction modes of a chrominance component encoding unit, which is divided from the maximum encoding unit, and the determined intra prediction mode of the luminance component encoding unit to the chrominance component encoding unit, and determines, based on a comparison result, an intra prediction mode of the chrominance component encoding unit having a minimum cost from among the candidate intra prediction modes and the determined intra prediction mode of the luminance component encoding unit.
[00036] According to an aspect of an exemplary embodiment, there is provided a method of determining an intra prediction mode of a decoding unit of a current image, the method comprising: extracting, from a bit stream, a maximum encoding unit, which is an encoding unit having a maximum size in which the current image is encoded, and a depth indicating hierarchical division information of the maximum encoding unit; dividing a luminance component and a chrominance component of the current image to be decoded into at least one luminance component decoding unit and at least one chrominance component decoding unit, respectively, based on the maximum encoding unit and the depth; extracting, from the bit stream, intra prediction mode information indicating an intra prediction mode applied to the at least one luminance component decoding unit and the at least one chrominance component decoding unit; and performing intra prediction on the at least one luminance component decoding unit and the at least one chrominance component decoding unit based on the extracted intra prediction mode information, to decode the at least one luminance component decoding unit and the at least one chrominance component decoding unit.
[00037] According to one aspect of an exemplary embodiment, there is provided equipment for decoding an image, the equipment comprising: an entropy decoder which extracts, from a bit stream, a maximum encoding unit, which is an encoding unit having a maximum size in which a current image is encoded, a depth indicating hierarchical division information of the maximum encoding unit, and intra prediction mode information indicating an intra prediction mode applied to a luminance component decoding unit and a chrominance component decoding unit to be decoded; and an intra prediction performer which performs intra prediction on the luminance component decoding unit and the chrominance component decoding unit according to the extracted intra prediction mode, to decode the luminance component decoding unit and the chrominance component decoding unit.
Mode for Invention
[00038] Hereinafter, exemplary embodiments will be described more fully with reference to the accompanying drawings.
[00039] In the following description, an "encoding unit" refers to an encoding data unit in which image data is encoded on an encoder side, and to an encoded data unit in which the encoded image data is decoded on a decoder side. Also, an encoded depth refers to a depth at which an encoding unit is encoded.
[00040] Figure 1 is a block diagram of video encoding equipment 100, according to an exemplary embodiment.
[00041] The video encoding equipment 100 includes a maximum encoding unit divider 110, an encoding unit determiner 120, an image data output unit 130, and an encoding information output unit 140.
[00042] The maximum encoding unit divider 110 can divide a current frame of an image based on a maximum encoding unit. If the current frame is larger than the maximum encoding unit, the image data of the current frame can be divided into at least one maximum encoding unit.
The maximum encoding unit according to an exemplary embodiment may be a data unit having a size of 32x32, 64x64, 128x128, 256x256, etc., wherein the format of the data unit is a square whose width and height are powers of 2. The image data may be output to the encoding unit determiner 120 according to the at least one maximum encoding unit.
[00043] An encoding unit according to an exemplary embodiment may be characterized by a maximum size and a depth. The depth denotes the number of times the encoding unit is spatially divided from the maximum encoding unit, and as the depth is deepened or increased, deeper encoding units according to depths may be divided from the maximum encoding unit down to a minimum encoding unit. A depth of the maximum encoding unit is an uppermost depth and a depth of the minimum encoding unit is a lowermost depth. Since the size of an encoding unit corresponding to each depth decreases as the depth of the maximum encoding unit is deepened, an encoding unit corresponding to a higher depth may include a plurality of encoding units corresponding to lower depths.
[00044] As described above, the image data of the current frame is divided into maximum encoding units according to a maximum encoding unit size, and each of the maximum encoding units may include deeper encoding units that are divided according to depths. As the maximum encoding unit according to an exemplary embodiment is divided according to depths, the image data of a spatial domain included in the maximum encoding unit can be hierarchically classified according to depths.
[00045] The maximum depth and maximum size of an encoding unit, which limit the total number of times the height and width of the maximum encoding unit can be hierarchically divided, may be predetermined. Such a maximum encoding unit and maximum depth may be set for an image or for a slice unit. In other words, different maximum encoding units and different maximum depths can be set for each image or slice, and a size of a minimum encoding unit included in the maximum encoding unit can be set according to the maximum depth. As such, by setting the maximum encoding unit and the maximum depth according to images or slices, encoding efficiency can be improved by encoding an image of a flat region using the maximum encoding unit, and the compression efficiency of an image can be improved by encoding an image having high complexity using an encoding unit smaller than the maximum encoding unit.
[00046] The encoding unit determiner 120 determines different maximum depths according to maximum encoding units. The maximum depth can be determined based on a rate-distortion (R-D) cost calculation. The determined maximum depth is output to the encoding information output unit 140, and the image data according to the maximum encoding units is output to the image data output unit 130.
[00047] The image data in the maximum encoding unit is encoded based on the deeper encoding units corresponding to at least one depth equal to or below the maximum depth, and the encoding results of the image data are compared based on each of the deeper encoding units. A depth having the smallest encoding error can be selected after comparing the encoding errors of the deeper encoding units. At least one encoded depth can be selected for each maximum encoding unit.
[00048] The size of the maximum encoding unit is divided as an encoding unit is divided hierarchically according to depths, and the number of encoding units increases.
Furthermore, even if encoding units correspond to the same depth in one maximum encoding unit, whether to divide each of the encoding units corresponding to the same depth to a lower depth is determined by measuring an encoding error of the image data of each encoding unit separately. Consequently, even when image data is included in one maximum encoding unit, the image data is divided into regions according to depths, and the encoding errors may differ according to regions in that maximum encoding unit, and thus the encoded depths may differ according to regions in the image data. Thus, one or more encoded depths can be determined in a maximum encoding unit, and the image data of the maximum encoding unit can be divided according to encoding units of at least one encoded depth.
[00049] Also, encoding units having different sizes in the maximum encoding unit can be predicted or transformed based on data units having different sizes. In other words, the video encoding equipment 100 can perform various operations to encode an image based on data units having various sizes and formats. To encode the image data, operations such as prediction, transformation, and entropy encoding are performed, and at that time the same data unit can be used for all operations or a different data unit can be used for each operation.
[00050] For example, the video encoding equipment 100 can select a data unit that is different from the encoding unit to predict the encoding unit. For example, when an encoding unit has a size of 2Nx2N (where N is a positive integer), a data unit for prediction can have a size of 2Nx2N, 2NxN, Nx2N, or NxN. In other words, motion prediction can be performed based on a data unit obtained by dividing at least one of a height and a width of the encoding unit. Hereinafter, the data unit that is the basis of prediction will be referred to as a prediction unit.
[00051] A prediction mode can be at least one of an intra mode, an inter mode, and a skip mode, wherein a certain prediction mode may be performed only on a prediction unit having a certain size or shape. For example, an intra mode can be performed only on a square prediction unit having a size of 2Nx2N or NxN. Also, a skip mode can be performed only on a prediction unit having a size of 2Nx2N. If multiple prediction units are included in the encoding unit, prediction can be performed on each prediction unit to select a prediction mode having a minimum error.
[00052] Alternatively, the video encoding equipment 100 may transform the image data based on a data unit that is different from the encoding unit. To transform the encoding unit, the transformation can be performed based on a data unit having a size less than or equal to that of the encoding unit. Hereinafter, the data unit used as the basis of the transformation will be referred to as a transform unit.
[00053] The encoding unit determiner 120 can measure an encoding error of the deeper encoding units according to depths by using rate-distortion optimization based on Lagrangian multipliers, to determine a division format of the maximum encoding unit having an optimal encoding error. In other words, the encoding unit determiner 120 can determine the formats of the encoding units to be divided from the maximum encoding unit, wherein the sizes of the encoding units differ according to depths.
[00054] The image data output unit 130 outputs, in bit streams, the image data of the maximum encoding unit, which is encoded based on the at least one encoded depth determined by the encoding unit determiner 120.
As encoding is already performed by the encoding unit determiner 120 to measure the minimum encoding error, an encoded data stream can be output using the minimum encoding error.
[00055] The encoding information output unit 140 may output, in bit streams, information about the encoding mode according to the encoded depth, which is encoded based on the at least one encoded depth determined by the encoding unit determiner 120. The information about the encoding mode according to the encoded depth may include information indicating the encoded depth, information indicating the division type in the prediction unit, information indicating the prediction mode, and information indicating the size of the transform unit.
[00056] Encoded depth information can be set using depth division information, which indicates whether encoding is performed in encoding units of a lower depth rather than the current depth. If the current depth of the current encoding unit is the encoded depth, the image data in the current encoding unit is encoded and output, and thus the division information can be set not to divide the current encoding unit to a lower depth. Alternatively, if the current depth of the current encoding unit is not the encoded depth, encoding is performed on the encoding unit of the lower depth, and thus the division information can be set to divide the current encoding unit to obtain the encoding units of the lower depth.
[00057] If the current depth is not the encoded depth, encoding is performed on the encoding unit that is divided into encoding units of the lower depth. As at least one encoding unit of the lower depth exists in one encoding unit of the current depth, encoding is performed repeatedly on each encoding unit of the lower depth, and thus encoding can be performed recursively for encoding units having the same depth.
[00058] As encoding units having a tree structure are determined for one maximum encoding unit, and information about at least one encoding mode is determined for an encoding unit of an encoded depth, information about at least one encoding mode can be set for one maximum encoding unit. Furthermore, an encoded depth of the image data of the maximum encoding unit may differ according to locations, since the image data is hierarchically divided according to depths, and thus the encoded depth and encoding mode information can be set for the image data.
[00059] Accordingly, the encoding information output unit 140 can assign corresponding encoding information to each minimum encoding unit included in the maximum encoding unit. In other words, the encoding unit of the encoded depth includes at least one minimum encoding unit containing the same encoding information. Thus, if neighboring minimum encoding units have the same encoding information, the neighboring minimum encoding units may be minimum encoding units included in the same maximum encoding unit.
[00060] In the video encoding equipment 100, the deeper encoding unit may be an encoding unit obtained by dividing a height or width of an encoding unit of a higher depth, which is one layer above, by two. In other words, when the size of the encoding unit of the current depth is 2Nx2N, the size of the encoding unit of the lower depth is NxN. Also, the encoding unit of the current depth having the size of 2Nx2N can include a maximum of four encoding units of the lower depth.
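As an illustration of the division-information logic of [0056]-[0057], the following is a minimal Python sketch, under assumed helper names (`encode_cost`, `split_into_four`), of the recursive choice between encoding at the current depth and dividing into four lower-depth encoding units; it is a sketch, not the patent's literal encoder.

```python
# Minimal sketch (assumed helpers): recursive coded-depth decision.
# split_info 0 = encode at this depth; split_info 1 = divide and recurse.

def decide_split(unit, depth, max_depth, encode_cost, split_into_four):
    cost_here = encode_cost(unit, depth)
    if depth == max_depth:
        return {"split_info": 0, "cost": cost_here}
    children = [decide_split(sub, depth + 1, max_depth,
                             encode_cost, split_into_four)
                for sub in split_into_four(unit)]
    cost_split = sum(child["cost"] for child in children)
    if cost_split < cost_here:  # lower depth wins: set division information
        return {"split_info": 1, "cost": cost_split, "children": children}
    return {"split_info": 0, "cost": cost_here}
```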
[00061] Consequently, the video encoding equipment 100 can determine the encoding units having an optimal format for each maximum encoding unit, based on the maximum encoding unit size and the maximum depth determined considering the characteristics of the current image. Furthermore, as encoding can be performed on each maximum encoding unit using any of several prediction and transformation modes, an optimal encoding mode can be determined by considering characteristics of encoding units of various image sizes.
[00062] Thus, if an image having high resolution or a large amount of data is encoded in conventional macroblock units, the number of macroblocks per frame increases excessively. Consequently, the number of pieces of compressed information generated for each macroblock increases, and thus it is difficult to transmit the compressed information and the data compression efficiency decreases. However, by using the video encoding equipment 100, image compression efficiency can be increased, since an encoding unit is adjusted while considering the characteristics of an image and the maximum size of an encoding unit is increased while considering the size of the image.
[00063] Figure 2 is a block diagram of video decoding equipment 200, according to an exemplary embodiment.
[00064] Referring to Figure 2, the video decoding equipment 200 includes a receiver 210, an image data and encoding information extractor 220, and an image data decoder 230.
[00065] The receiver 210 receives and parses a bit stream received by the video decoding equipment 200 to obtain image data according to maximum encoding units, and outputs the image data to the image data decoder 230. The receiver 210 can extract information about the maximum encoding unit of a current image or slice from a header for the current image or slice. The video decoding equipment 200 decodes the image data according to the maximum encoding units.
[00066] The encoding information extractor 220 parses a bit stream received by the video decoding equipment 200 and extracts, from the current image header in the parsed bit stream, information about an encoded depth and an encoding mode according to maximum encoding units. The extracted information about the encoded depth and the encoding mode is output to the image data decoder 230.
[00067] The information about the encoded depth and the encoding mode according to the maximum encoding unit can be set for information about at least one encoding unit corresponding to the encoded depth, and the information about an encoding mode may include division type information of a prediction unit according to encoding units, information indicating a prediction mode, and information indicating a size of a transform unit. In addition, the division information according to depths can be extracted as encoded depth information.
[00068] Information about a division format of the maximum encoding unit may include information about encoding units having different sizes according to depths, and information about an encoding mode may include information indicating a prediction unit according to encoding units, information indicating a prediction mode, and information indicating a transform unit.
[00069] The image data decoder 230 restores the current image by decoding the image data in each maximum encoding unit based on the information extracted by the encoding information extractor 220.
The image data decoder 230 can decode the encoding unit included in the maximum encoding unit based on the information about the division format of the maximum encoding unit. A decoding process may include prediction, including intra prediction and motion compensation, and inverse transformation.
[00070] Alternatively, the image data decoder 230 restores the current image by decoding the image data in each maximum encoding unit based on the encoded depth information and the encoding mode according to the maximum encoding units. In other words, the image data decoder 230 can decode the image data according to the encoding units of at least one encoded depth, based on the information about the encoded depth according to the maximum encoding units. A decoding process can include prediction, including intra prediction and motion compensation, and inverse transformation.
[00071] The image data decoder 230 can perform intra prediction or motion compensation in a prediction unit and a prediction mode according to the encoding units, based on the information about the division type and prediction mode of the prediction unit of the encoding unit according to the encoded depths, to perform prediction according to the encoding units. Furthermore, the image data decoder 230 can perform the inverse transformation according to each transform unit in the encoding unit, based on the information about the transform unit size of the encoding unit according to the encoded depths, to perform the inverse transformation according to the maximum encoding units.
[00072] The image data decoder 230 can determine an encoded depth of a current maximum encoding unit using the depth division information. If the division information indicates that decoding is performed at the current depth, the current depth is an encoded depth. Accordingly, the image data decoder 230 can decode the encoded image data of an encoding unit of the current depth with respect to the image data of the current maximum encoding unit by using the information about the division type of the prediction unit, the prediction mode, and the size of the transform unit. In other words, the encoding information assigned to the minimum encoding units can be observed, and the minimum encoding units including encoding information having the same division information can be grouped so as to be decoded as one data unit.
[00073] The video decoding equipment 200 can obtain information about at least one encoding unit that generates the minimum encoding error when encoding is performed recursively for each maximum encoding unit, and can use that information to decode the current image. In other words, the image data can be decoded in the optimal encoding unit in each maximum encoding unit. Consequently, even if the image data has high resolution and a large amount of data, the image data can be decoded and restored efficiently using an encoding unit size and an encoding mode, which are determined adaptively according to the characteristics of the image data, using the information about an optimal encoding mode received from an encoder.
[00074] Figure 3 is a diagram for describing a concept of encoding units, according to an exemplary embodiment.
[00075] Referring to Figure 3, the size of an encoding unit can be expressed as width x height, and can be 64x64, 32x32, 16x16, 8x8, and 4x4. Besides encoding units having a square format, an encoding unit can have a size of 64x32, 32x64, 32x16, 16x32, 16x8, 8x16, 8x4, or 4x8.
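Before the concrete examples of [0076]-[0079] below, here is a minimal Python sketch of how the set of deeper encoding unit sizes follows from a maximum encoding unit size and a maximum depth (each deepening halves the long-axis size); the function name is illustrative.

```python
# Minimal sketch: enumerate long-axis sizes of deeper encoding units given a
# maximum encoding unit size and a maximum depth (halving per deepening).

def long_axis_sizes(max_size: int, max_depth: int) -> list[int]:
    sizes = [max_size]
    for _ in range(max_depth):
        sizes.append(sizes[-1] // 2)  # each depth level halves the long axis
    return sizes

# Matches the examples discussed next:
print(long_axis_sizes(64, 2))  # [64, 32, 16]
print(long_axis_sizes(16, 2))  # [16, 8, 4]
print(long_axis_sizes(64, 4))  # [64, 32, 16, 8, 4]
```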
[00076] In video data 310, a resolution is 1920x1080, a maximum size of an encoding unit is 64, and a maximum depth is 2. In video data 320, a resolution is 1920x1080, a maximum size of an encoding unit is 64, and a maximum depth is 4. In video data 330, a resolution is 352x288, a maximum size of an encoding unit is 16, and a maximum depth is 2.
[00077] If the resolution is high or the amount of data is large, the maximum size of an encoding unit can be large so as to not only increase the encoding efficiency but also to accurately reflect the characteristics of an image. Accordingly, the maximum encoding unit size of the video data 310 and 320, which have higher resolution than the video data 330, can be 64.
[00078] The maximum depth denotes the total number of divisions from a maximum encoding unit to a minimum encoding unit. Accordingly, as the maximum depth of the video data 310 is 2, the encoding units 315 of the video data 310 may include a maximum encoding unit having a long-axis size of 64, and encoding units having long-axis sizes of 32 and 16, as the depths are deepened to two layers by dividing the maximum encoding unit twice. Meanwhile, since the maximum depth of the video data 330 is 2, the encoding units 335 of the video data 330 may include a maximum encoding unit having a long-axis size of 16, and encoding units having long-axis sizes of 8 and 4, as the depths are deepened to two layers by dividing the maximum encoding unit twice.
[00079] As the maximum depth of the video data 320 is 4, the encoding units 325 of the video data 320 may include a maximum encoding unit having a long-axis size of 64, and encoding units having long-axis sizes of 32, 16, 8, and 4, as the depths are deepened to four layers by dividing the maximum encoding unit four times. As the depth is deepened, detailed information can be expressed more accurately.
[00080] Figure 4 is a block diagram of an image encoder 400 based on encoding units, according to an exemplary embodiment.
[00081] Referring to Figure 4, an intra predictor 410 performs intra prediction on encoding units in an intra mode, among the encoding units of a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter estimation and motion compensation on the encoding units in an inter mode, among the encoding units of the current frame 405, using the current frame 405 and a reference frame 495.
[00082] The data output from the intra predictor 410, the motion estimator 420, and the motion compensator 425 is output as a quantized transform coefficient through a transformer 430 and a quantizer 440. The quantized transform coefficient is restored as data in a spatial domain through an inverse quantizer 460 and an inverse transformer 470, and the data restored in the spatial domain is output as the reference frame 495 after being post-processed through a deblocking unit 480 and a loop filtering unit 490. The quantized transform coefficient may be output as a bit stream 455 through an entropy encoder 450.
[00083] In order for the image encoder 400 to be applied to the video encoding equipment 100, all elements of the image encoder 400, i.e., the intra predictor 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490, perform image encoding processes based on the maximum encoding unit, the encoding unit according to depths, the prediction unit, and the transform unit.
Specifically, the intra predictor 410, the motion estimator 420, and the motion compensator 425 determine a prediction unit and a prediction mode of an encoding unit in consideration of the maximum size and depth of the encoding unit, and the transformer 430 determines the size of the transform unit in consideration of the maximum size and depth of the encoding unit. Furthermore, as described later, the intra predictor 410 performs intra prediction by applying an intra prediction mode determined for a luminance component encoding unit to a chrominance component encoding unit, and thus the prediction efficiency of the chrominance component encoding unit can be improved.
[00084] Figure 5 is a block diagram of an image decoder 500 based on encoding units, according to an exemplary embodiment.
[00085] Referring to Figure 5, a parser 510 parses a received bit stream 505 and extracts, from the parsed bit stream, the encoded image data to be decoded and the encoding information required for decoding. The encoded image data is output as inverse-quantized data through an entropy decoder 520 and an inverse quantizer 530, and the inverse-quantized data is restored to image data in a spatial domain through an inverse transformer 540. An intra predictor 550 performs intra prediction on decoding units in an intra mode with respect to the image data in the spatial domain, and a motion compensator 560 performs motion compensation on decoding units in an inter mode using a reference frame 585. The image data in the spatial domain, which has passed through the intra predictor 550 and the motion compensator 560, may be output as a restored frame 595 after being post-processed through a deblocking unit 570 and a loop filtering unit 580. Furthermore, the image data that is post-processed through the deblocking unit 570 and the loop filtering unit 580 can be output as the reference frame 585.
[00086] In order for the image decoder 500 to be applied to the video decoding equipment 200, all elements of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra predictor 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, perform image decoding processes based on the maximum encoding unit, the encoding unit according to depths, the prediction unit, and the transform unit. Specifically, the intra predictor 550 and the motion compensator 560 perform operations based on partitions and a prediction mode for each of the encoding units having a tree structure, and the inverse transformer 540 performs operations based on a size of a transform unit for each encoding unit.
[00087] Figure 6 is a diagram illustrating deeper encoding units according to depths, and partitions, according to an exemplary embodiment.
[00088] The video encoding equipment 100 and the video decoding equipment 200 use hierarchical encoding units in order to take into account the characteristics of an image. A maximum height, a maximum width, and a maximum depth of the encoding units can be adaptively determined according to the image characteristics, or they can be set differently by a user. The sizes of the deeper encoding units according to depths can be determined according to the predetermined maximum encoding unit size.
[00089] In a hierarchical structure 600 of encoding units, according to an exemplary embodiment, the maximum height and maximum width of the encoding units are each 64, and the maximum depth is 4.
As the depth is deepened along a vertical axis of the hierarchical structure 600, the height and width of the deeper encoding unit are each divided. In addition, a prediction unit and partitions, which are the basis for prediction encoding of each deeper encoding unit, are shown along a horizontal axis of the hierarchical structure 600.
[00090] In other words, an encoding unit 610 is a maximum encoding unit in the hierarchical structure 600, where a depth is 0 and a size, that is, a height by width, is 64x64. The depth is deepened along the vertical axis, and there are an encoding unit 620 having a size of 32x32 and a depth of 1; an encoding unit 630 having a size of 16x16 and a depth of 2; an encoding unit 640 having a size of 8x8 and a depth of 3; and an encoding unit 650 having a size of 4x4 and a depth of 4. The encoding unit 650 having the size of 4x4 and the depth of 4 is a minimum encoding unit.
[00091] Partial data units are shown in Figure 6 as prediction units of an encoding unit along the horizontal axis according to each depth. In other words, if the encoding unit 610 having the size of 64x64 and the depth of 0 is a prediction unit, the prediction unit can be divided into partitions included in the encoding unit 610, that is, a partition 610 having a size of 64x64, partitions 612 having a size of 64x32, partitions 614 having a size of 32x64, or partitions 616 having a size of 32x32.
[00092] A prediction unit of the encoding unit 620 having a size of 32x32 and a depth of 1 can be divided into partitions included in the encoding unit 620, i.e., a partition 620 having a size of 32x32, partitions 622 having a size of 32x16, partitions 624 having a size of 16x32, and partitions 626 having a size of 16x16.
[00093] A prediction unit of the encoding unit 630 having a size of 16x16 and a depth of 2 can be divided into partitions included in the encoding unit 630, i.e., a partition having a size of 16x16 included in the encoding unit 630, partitions 632 having a size of 16x8, partitions 634 having a size of 8x16, and partitions 636 having a size of 8x8.
[00094] A prediction unit of the encoding unit 640 having a size of 8x8 and a depth of 3 can be divided into partitions included in the encoding unit 640, i.e., a partition having a size of 8x8 included in the encoding unit 640, partitions 642 having a size of 8x4, partitions 644 having a size of 4x8, and partitions 646 having a size of 4x4.
[00095] The encoding unit 650 having a size of 4x4 and a depth of 4 is the minimum encoding unit and an encoding unit of the lowermost depth. A prediction unit of the encoding unit 650 is assigned only to a partition having a size of 4x4.
[00096] To determine the at least one encoded depth of the encoding units constituting the maximum encoding unit 610, the encoding unit determiner 120 of the video encoding equipment 100 performs encoding for the encoding units corresponding to each depth included in the maximum encoding unit 610.
[00097] The number of deeper encoding units according to depths, including data of the same range and size, increases as the depth is deepened. For example, four encoding units corresponding to a depth of 2 are required to cover the data included in one encoding unit corresponding to a depth of 1. Accordingly, to compare the encoding results of the same data according to depths, the encoding unit corresponding to the depth of 1 and the four encoding units corresponding to the depth of 2 are each encoded.
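As a compact illustration of the partition shapes recited in [0091]-[0094], the following minimal Python sketch enumerates the 2Nx2N, 2NxN, Nx2N, and NxN partition shapes of a prediction unit; the function is illustrative, not part of the patent text.

```python
# Minimal sketch: the four partition shapes of a 2Nx2N prediction unit.

def partitions(two_n: int) -> list[tuple[int, int]]:
    """Return the (width, height) of each partition shape of a 2Nx2N unit."""
    n = two_n // 2
    return [(two_n, two_n),  # 2Nx2N: one partition covers the whole unit
            (two_n, n),      # 2NxN: the unit divides into two such partitions
            (n, two_n),      # Nx2N: the unit divides into two such partitions
            (n, n)]          # NxN: the unit divides into four such partitions

# For the depth-0 encoding unit 610 (64x64): 64x64, 64x32, 32x64, 32x32.
print(partitions(64))  # [(64, 64), (64, 32), (32, 64), (32, 32)]
```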
[00098] To perform encoding for a current depth among the depths, a minimum encoding error can be selected for the current depth by performing encoding for each prediction unit in the encoding units corresponding to the current depth, along the horizontal axis of the hierarchical structure 600. Alternatively, a minimum encoding error can be searched for by comparing the minimum encoding errors according to depths, by performing encoding for each depth as the depth is deepened along the vertical axis of the hierarchical structure 600. A depth and a partition having the minimum encoding error in the encoding unit 610 can be selected as the encoded depth and a partition type of the encoding unit 610.
[00099] Figure 7 is a diagram for describing a relationship between an encoding unit 710 and transform units 720, according to an exemplary embodiment.
[000100] The video encoding equipment 100 or the video decoding equipment 200 encodes or decodes an image according to encoding units having sizes less than or equal to a maximum encoding unit, for each maximum encoding unit. Sizes of transform units for transformation during encoding can be selected based on data units that are not larger than a corresponding encoding unit. For example, in the video encoding equipment 100 or 200, if a size of the encoding unit 710 is 64x64, the transformation can be performed using transform units 720 having a size of 32x32. Furthermore, data of the encoding unit 710 having the size of 64x64 can be encoded by performing the transformation on each of the transform units having sizes of 32x32, 16x16, 8x8, and 4x4, which are smaller than 64x64, and then a transform unit having the minimum encoding error can be selected.
[000101] Figure 8 is a diagram for describing encoding information of encoding units corresponding to an encoded depth, according to an exemplary embodiment.
[000102] The output unit 130 of the video encoding equipment 100 can encode and transmit information 800 about a partition type, information 810 about a prediction mode, and information 820 about a size of a transform unit for each encoding unit corresponding to an encoded depth, as information about an encoding mode.
[000103] The information 800 includes information about a format of a partition obtained by dividing a prediction unit of a current encoding unit, wherein the partition is a data unit for prediction encoding of the current encoding unit. For example, a current encoding unit CU_0 having a size of 2Nx2N can be divided into any one of a partition 802 having a size of 2Nx2N, a partition 804 having a size of 2NxN, a partition 806 having a size of Nx2N, and a partition 808 having a size of NxN. Here, the information 800 about a partition type is set to indicate one of the partition 804 having a size of 2NxN, the partition 806 having a size of Nx2N, and the partition 808 having a size of NxN.
[000104] The information 810 indicates a prediction mode of each partition. For example, the information 810 may indicate a prediction encoding mode performed on a partition indicated by the information 800, i.e., an intra mode 812, an inter mode 814, or a skip mode 816.
[000105] The information 820 indicates a transform unit on which the transformation is based when the transformation is performed on a current encoding unit. For example, the transform unit can be a first intra transform unit 822, a second intra transform unit 824, a first inter transform unit 826, or a second inter transform unit 828.
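The transform-size search described in [000100] can be summarized with the following minimal Python sketch, which tries candidate transform sizes and keeps the one with the smallest error; `transform_error` is an assumed stand-in for the actual cost measurement.

```python
# Minimal sketch (assumed cost helper): try candidate transform unit sizes
# no larger than the encoding unit and keep the minimum-error size.

def select_transform_size(cu_size, candidate_sizes, transform_error):
    best_size, best_error = None, float("inf")
    for size in candidate_sizes:
        if size > cu_size:
            continue  # transform units cannot exceed the encoding unit
        error = transform_error(size)
        if error < best_error:
            best_size, best_error = size, error
    return best_size

# For a 64x64 encoding unit, candidates as in [000100]: 32, 16, 8, 4, e.g.
# select_transform_size(64, [32, 16, 8, 4], my_error_fn)
```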
[000106] The encoding information extractor 220 of the video decoding equipment 200 can extract and use the information 800, 810, and 820 for decoding, according to each deeper encoding unit.
[000107] Figure 9 is a diagram of deeper encoding units according to depths, according to an exemplary embodiment.
[000108] Division information can be used to indicate a change of depth. The division information indicates whether an encoding unit of a current depth is divided into encoding units of a lower depth.
[000109] A prediction unit 910 for prediction encoding of an encoding unit having a depth of 0 and a size of 2N_0x2N_0 may include partitions of a partition type 912 having a size of 2N_0x2N_0, a partition type 914 having a size of 2N_0xN_0, a partition type 916 having a size of N_0x2N_0, and a partition type 918 having a size of N_0xN_0.
[000110] Prediction encoding is performed repeatedly on one partition having a size of 2N_0x2N_0, two partitions having a size of 2N_0xN_0, two partitions having a size of N_0x2N_0, and four partitions having a size of N_0xN_0, according to each partition type. Prediction in an intra mode and in an inter mode can be performed on partitions having sizes of 2N_0x2N_0, N_0x2N_0, 2N_0xN_0, and N_0xN_0. Prediction in a skip mode is performed only on the partition having the size of 2N_0x2N_0.
[000111] If the encoding error is the smallest in the partition type 918, a depth is changed from 0 to 1 to divide the partition type 918 in operation 920, and encoding is performed repeatedly on encoding units 922, 924, 926, and 928 having a depth of 1 and a size of N_0xN_0 to search for a minimum encoding error.
[000112] As encoding is performed repeatedly on the encoding units 922, 924, 926, and 928 having the same depth, only encoding of one encoding unit having a depth of 1 will be described as an example. A prediction unit 930 for prediction encoding of an encoding unit having a depth of 1 and a size of 2N_1x2N_1 (=N_0xN_0) may include a partition type 932 having a size of 2N_1x2N_1, a partition type 934 having a size of 2N_1xN_1, a partition type 936 having a size of N_1x2N_1, and a partition type 938 having a size of N_1xN_1. Prediction encoding is performed repeatedly on one prediction unit having a size of 2N_1x2N_1, two prediction units having a size of 2N_1xN_1, two prediction units having a size of N_1x2N_1, and four prediction units having a size of N_1xN_1, according to each partition type.
[000113] If an encoding error is the smallest in the partition type 938 having the size of N_1xN_1, a depth is changed from 1 to 2 to divide the partition type 938 in operation 940, and encoding is performed repeatedly on encoding units 942, 944, 946, and 948, which have a depth of 2 and a size of N_2xN_2, to search for a minimum encoding error.
[000114] When a maximum depth is d, the division operation according to each depth can be performed until a depth becomes d-1. In other words, a prediction unit 950 for prediction encoding of an encoding unit having a depth of d-1 and a size of 2N_(d-1)x2N_(d-1) may include a partition type 952 having a size of 2N_(d-1)x2N_(d-1), a partition type 954 having a size of 2N_(d-1)xN_(d-1), a partition type 956 having a size of N_(d-1)x2N_(d-1), and a partition type 958 having a size of N_(d-1)xN_(d-1).
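The per-depth search of [000110]-[000113] can be sketched in Python as follows: at each depth every partition type is tried, and if the NxN type wins, the depth is deepened and the search repeats. The helper `partition_error` and the stopping rule are illustrative assumptions.

```python
# Minimal sketch (assumed error helper): per-depth partition-type search
# that deepens the depth whenever the NxN partition type wins.

def search_depth(two_n, depth, max_depth, partition_error):
    types = {"2Nx2N": (two_n, two_n), "2NxN": (two_n, two_n // 2),
             "Nx2N": (two_n // 2, two_n), "NxN": (two_n // 2, two_n // 2)}
    errors = {name: partition_error(shape, depth)
              for name, shape in types.items()}
    best = min(errors, key=errors.get)
    if best == "NxN" and depth < max_depth - 1:
        # NxN won: change the depth and repeat on the divided units.
        return search_depth(two_n // 2, depth + 1, max_depth, partition_error)
    return depth, best
```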
[000115] Prediction encoding can be performed repeatedly on one prediction unit having a size of 2N_(d-1)x2N_(d-1), two prediction units having a size of 2N_(d-1)xN_(d-1), two prediction units having a size of N_(d-1)x2N_(d-1), and four prediction units having a size of N_(d-1)xN_(d-1), according to each partition type. Since the maximum depth is d, an encoding unit 952 having a depth of d-1 is not divided further.
[000116] To determine the encoded depth for the encoding unit 912, the video encoding equipment 100 selects a depth having the smallest encoding error by comparing the encoding errors according to depths. For example, an encoding error of an encoding unit having a depth of 0 can be measured by performing prediction encoding on each of the partition types 912, 914, 916, and 918, and then a prediction unit having the minimum encoding error can be determined. Similarly, a prediction unit having the minimum encoding error can be searched for at each of the depths 0 to d-1. At a depth of d, an encoding error can be determined by performing prediction encoding on the prediction unit 960 having the size of 2N_dx2N_d. As such, the minimum encoding errors according to depths are compared across all the depths from 1 to d, and a depth having the minimum encoding error can be determined as the encoded depth. The encoded depth, the partition type of the prediction unit, and the prediction mode can be encoded and transmitted as information about an encoding mode. In addition, as an encoding unit is divided from a depth of 0 down to the encoded depth, only the division information of the encoded depth is set to 0, and the division information of the depths excluding the encoded depth is set to 1.
[000117] The encoding information and image data extractor 220 of the video decoding equipment 200 can extract and use the encoded depth and prediction unit information of the encoding unit 900 to decode the partition 912. The video decoding equipment 200 can determine a depth at which the division information is 0 as the encoded depth, using the division information according to depths, and use the information about the encoding mode of the corresponding depth for decoding.
[000118] Figures 10A and 10B are diagrams for describing a relationship between encoding units 1010, prediction units 1060, and transform units 1070, according to an exemplary embodiment.
[000119] The encoding units 1010 are encoding units corresponding to the encoded depths determined by the video encoding equipment 100, in a maximum encoding unit 1000. The prediction units 1060 are partitions of the prediction units of each of the encoding units 1010, and the transform units 1070 are transform units of each of the encoding units 1010.
[000120] When a depth of the maximum encoding unit is 0 in the encoding units 1010, the depths of the encoding units 1012 and 1054 are 1, the depths of the encoding units 1014, 1016, 1018, 1028, 1050, and 1052 are 2, the depths of the encoding units 1020, 1022, 1024, 1026, 1030, 1032, and 1048 are 3, and the depths of the encoding units 1040, 1042, 1044, and 1046 are 4.
[000121] In the prediction units 1060, some partitions 1014, 1016, 1022, 1032, 1048, 1050, 1052, and 1054 are obtained by dividing the encoding units of the encoding units 1010. In other words, the partition types in the partitions 1014, 1022, 1050, and 1054 have a size of 2NxN, the partition types in the partitions 1016, 1048, and 1052 have a size of Nx2N, and the partition type of the partition 1032 has a size of NxN. The prediction units and partitions of the encoding units 1010 are less than or equal to each encoding unit.
[000122] The transformation or inverse transformation is performed on the image data of the encoding unit 1052 in the transform units 1070 in a data unit that is smaller than the encoding unit 1052. In addition, the transform units 1014, 1016, 1022, 1032, 1048, 1050, and 1052 in the transform units 1070 are different from those in the prediction units 1060 in terms of sizes and shapes. In other words, the video encoding and decoding equipment 100 and 200 can perform intra prediction, motion estimation, motion compensation, transformation, and inverse transformation individually on a data unit in the same encoding unit.
[000123] Figure 11 is a table showing encoding information according to encoding units, according to an exemplary embodiment.
[000124] The encoding information output unit 140 of the video encoding equipment 100 can encode the encoding information according to the encoding units, and the encoding information extractor 220 of the video decoding equipment 200 can extract the encoding information according to the encoding units.
[000125] Encoding information may include division information about an encoding unit, partition type information, prediction mode information, and information about a size of a transform unit. The encoding information shown in Figure 11 is merely exemplary of information that can be set by the video encoding equipment 100 and the video decoding equipment 200, and is not limited thereto.
[000126] The division information may indicate an encoded depth of a corresponding encoding unit. In other words, since an encoded depth is a depth that is no longer divided according to the division information, information about the partition type, prediction mode, and transform unit size can be set for the encoded depth. If the current encoding unit is further divided according to the division information, encoding is performed independently on four divided encoding units of a lower depth.
[000127] The information about a partition type can indicate a partition type of a transform unit of an encoding unit at an encoded depth as one of 2Nx2N, 2NxN, Nx2N, and NxN. The prediction mode can indicate a prediction mode such as an intra mode, an inter mode, and a skip mode. The intra mode can be set only in the partition types 2Nx2N and NxN, and the skip mode can be set only in the partition type 2Nx2N. The transform unit can have two sizes in the intra mode, and two sizes in the inter mode.
[000128] The encoding information according to encoding units at the encoded depth can be included in the minimum encoding units of the encoding unit. Accordingly, by checking the encoding information included in neighboring minimum encoding units, it can be determined whether the neighboring minimum encoding units are included in encoding units having the same encoded depth. Furthermore, as the encoding unit of the corresponding encoded depth can be determined using the encoding information included in a minimum encoding unit, the distribution of the encoded depths of the minimum encoding units can be inferred.
[000129] Intra prediction performed by the intra predictor 410 of the video encoding equipment 100 illustrated in Figure 4 and by the intra predictor 550 of the video decoding equipment 200 illustrated in Figure 5 will now be described in detail. In the following description, an encoding unit refers to a current block being encoded in an image encoding process, and a decoding unit refers to a current block being decoded in an image decoding process.
The encoding unit and the decoding unit differ only in that the encoding unit is used in the encoding process and the decoding unit is used in the decoding process. For consistency, except for a specific case, both are referred to as an encoding unit in both the encoding and decoding processes.
[000130] Figures 12A to 12C are diagrams of formats of a luminance component image and a chrominance component image, according to exemplary embodiments.
[000131] Each encoding unit forming a frame can be expressed using one of three components, i.e., Y, Cb, and Cr. Y is luminance data having luminance information, and Cb and Cr are chrominance data having chrominance information.
[000132] Chrominance data can be expressed using a smaller amount of data than luminance data, based on the premise that a person is generally more sensitive to luminance information than to chrominance information. Referring to Figure 12A, an encoding unit having a 4:2:0 format includes luminance data 1210 having a size of HxW (H and W are positive integers), and two chrominance data fragments 1220 and 1230 having a size of (H/2)x(W/2), obtained by sampling the chrominance components Cb and Cr by 1/4. Referring to Figure 12B, an encoding unit having a 4:2:2 format includes luminance data 1240 having a size of HxW (H and W are positive integers), and two chrominance data fragments 1250 and 1260 having a size of Hx(W/2), obtained by sampling the chrominance components Cb and Cr by 1/2 in a horizontal direction. Furthermore, referring to Figure 12C, when an encoding unit has a 4:4:4 format, the encoding unit includes luminance data 1270 and chrominance data 1280 and 1290, each having a size of HxW, without sampling the chrominance components Cb and Cr, so as to express a chrominance component image accurately.
[000133] Hereinafter, it is assumed that the luminance component encoding unit and the chrominance component encoding unit, which are intra predicted, constitute one of the image signals having the 4:2:0, 4:2:2, and 4:4:4 color formats defined in a YCbCr (or YUV) color domain.
[000134] The prediction efficiency of the chrominance encoding unit is improved by including an intra prediction mode determined for the luminance component encoding unit among the candidate intra prediction modes for the chrominance component encoding unit, in consideration of a relationship between the luminance component and the chrominance component.
[000135] Figure 13 is a table showing a number of intra prediction modes according to the sizes of luminance component encoding units, according to an exemplary embodiment.
[000136] According to an exemplary embodiment, the number of intra prediction modes to be applied to a luminance component encoding unit (a decoding unit in a decoding process) can be variably set. For example, referring to Figure 13, if the size of a luminance component encoding unit on which intra prediction is performed is NxN, the numbers of intra prediction modes actually performed on luminance component encoding units of sizes 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, and 128x128 can be respectively set to 5, 9, 9, 17, 33, 5, and 5 (in Example 2). As another example, when a size of a luminance component encoding unit to be intra predicted is NxN, the numbers of intra prediction modes to be actually performed on encoding units having sizes of 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, and 128x128 can be set to 3, 17, 34, 34, 34, 5, and 5.
The numbers of intra prediction modes actually performed are set differently according to the sizes of the luminance component encoding units because the overhead for encoding the prediction mode information differs according to the size of the luminance component encoding unit. In other words, a small luminance component encoding unit occupies a small portion of the entire image data, but may incur a large overhead to transmit additional information, such as its prediction mode information. Consequently, if a small luminance component encoding unit is encoded using an excessively large number of prediction modes, the number of bits may increase and compression efficiency may thus be reduced. Furthermore, a large luminance component encoding unit, for example one equal to or greater than 64x64, generally corresponds to a flat region of the image data, and thus encoding a large luminance component encoding unit using an excessively large number of prediction modes can also reduce compression efficiency.
[000137] Thus, according to an exemplary embodiment, the encoding units are roughly classified into at least three sizes, such as N1xN1 (where 2≤N1≤4, and N1 is an integer), N2xN2 (where 8≤N2≤32, and N2 is an integer), and N3xN3 (where 64≤N3, and N3 is an integer). If the number of intra prediction modes performed on encoding units of N1xN1 is A1 (where A1 is a positive integer), the number performed on encoding units of N2xN2 is A2 (where A2 is a positive integer), and the number performed on encoding units of N3xN3 is A3 (where A3 is a positive integer), the numbers of intra prediction modes performed according to the sizes of the encoding units can be set to satisfy A3≤A1≤A2. That is, if a current image is divided into small, medium and large encoding units, the medium encoding units can be set to have the greatest number of prediction modes, while the small and large encoding units can be set to have a relatively small number of prediction modes. However, the exemplary embodiment is not limited thereto, and the small and large luminance component encoding units can also be set to have a large number of prediction modes. The numbers of prediction modes according to the sizes of the encoding units in Figure 13 are merely exemplary and can be changed.
[000138] Figure 14A is a table showing the intra prediction modes applied to a luminance component encoding unit having a predetermined size, according to an exemplary embodiment.
[000139] Referring to Figures 13 and 14A, for example, when intra prediction is performed on a luminance component encoding unit having a 4x4 size, the encoding unit may have a vertical mode (mode 0); a horizontal mode (mode 1); a direct current (DC) mode (mode 2); a diagonal down-left mode (mode 3); a diagonal down-right mode (mode 4); a vertical-right mode (mode 5); a horizontal-down mode (mode 6); a vertical-left mode (mode 7); and a horizontal-up mode (mode 8).
[000140] Figure 14B illustrates the directions of the intra prediction modes shown in Figure 14A. In Figure 14B, the numbers at the ends of the arrows represent the prediction modes corresponding to the prediction directions indicated by the arrows. Here, mode 2 is a DC mode having no directivity and is thus not shown in Figure 14B.
[000141] Figure 14C is a diagram for describing a method of performing intra prediction on a luminance component encoding unit using the intra prediction modes shown in Figure 14A, according to an exemplary embodiment.
[000142] Referring to Figure 14C, a prediction encoding unit is generated by performing an available intra prediction mode, determined according to the size of the current encoding unit, using the neighboring pixels A through M of the current encoding unit. For example, an operation of performing prediction encoding on a current encoding unit having a size of 4x4 according to mode 0 of Figure 14A, i.e. the vertical mode, will be described. Initially, the values of the neighboring pixels A through D on the upper side of the current encoding unit are predicted as the pixel values of the current encoding unit. That is, the value of the neighboring pixel A is predicted as the value of the four pixels in the first column of the current encoding unit, the value of the neighboring pixel B is predicted as the value of the four pixels in the second column, the value of the neighboring pixel C is predicted as the value of the four pixels in the third column, and the value of the neighboring pixel D is predicted as the value of the four pixels in the fourth column. After that, the pixel values of the current encoding unit predicted using the neighboring pixels A through D are subtracted from the original pixel values of the current encoding unit in order to calculate an error value, and the error value is then encoded.
[000143] Figure 15 is a diagram for explaining intra prediction modes applied to a luminance component encoding unit having a predetermined size, according to an exemplary embodiment.
[000144] Referring to Figures 13 and 15, for example, when intra prediction is performed on an encoding unit having a size of 2x2, the encoding unit may have five modes in total: a vertical mode, a horizontal mode, a DC mode, a planar mode, and a diagonal down-right mode.
[000145] However, if an encoding unit having a size of 32x32 has 33 intra prediction modes, as shown in Figure 13, the directions of the 33 intra prediction modes need to be established. According to an exemplary embodiment, to establish intra prediction modes having various directions beyond the intra prediction modes illustrated in Figures 14 and 15, the prediction directions for selecting the neighboring pixels to be used as reference pixels for the pixels of the encoding unit are established using (dx, dy) parameters. For example, if each of the 33 prediction modes is denoted mode N (where N is an integer from 0 to 32), mode 0 can be set to the vertical mode, mode 1 to the horizontal mode, mode 2 to a DC mode, mode 3 to a planar mode, and each of mode 4 through mode 31 can be set to a prediction mode having a directivity of tan^-1(dy/dx) using (dx, dy) represented as one of (1,-1), (1,1), (1,2), (2,1), (1,-2), (2,-1), (2,-11), (5,-7), (10,-7), (11,3), (4,3), (1,11), (1,-1), (12,-3), (1,-11), (1,-7), (3,10), (5,-6), (7,-6), (7,-4), (11,1), (6,1), (8,3), (5,3), (5,7), (2,7), (5,-7) and (4,-3), as shown in Table 1.
[000146] Mode 32 can be set as a bilinear mode using bilinear interpolation, as will be described later with reference to Figure 17.
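For illustration, the mode-0 (vertical) prediction walked through in paragraph [000142] can be sketched in C as follows (a minimal sketch, not part of the specification; the array layout and function name are assumptions):

```c
#include <stdint.h>

/* Minimal sketch of mode-0 (vertical) intra prediction on a 4x4 block,
 * per paragraph [000142]. top[0..3] holds the reconstructed neighboring
 * pixels A..D located directly above the current encoding unit. */
void intra4x4_vertical(const uint8_t top[4],
                       const uint8_t orig[4][4],
                       uint8_t pred[4][4],
                       int16_t resid[4][4])
{
    for (int row = 0; row < 4; row++) {
        for (int col = 0; col < 4; col++) {
            /* copy A..D down each column of the prediction block */
            pred[row][col] = top[col];
            /* error value to be encoded: original minus prediction */
            resid[row][col] = (int16_t)orig[row][col] - (int16_t)pred[row][col];
        }
    }
}
```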
[000147] Figures 16A through 16C are reference diagrams for explaining the intra prediction modes of a luminance component encoding unit having various directionalities, according to an exemplary embodiment.
[000148] As described above with reference to Table 1, each of the intra prediction modes according to the exemplary embodiments can have a directivity of tan^-1(dy/dx) by using a plurality of (dx, dy) parameters.
[000149] Referring to Figure 16A, the neighboring pixels A and B located on an extended line 160, which extends from a current pixel P to be predicted in a current luminance component encoding unit at an angle of tan^-1(dy/dx) determined by the value of the (dx, dy) parameter of each mode shown in Table 1, can be used as predictors of the current pixel P. In this case, the neighboring pixels A and B are pixels that have already been encoded and restored, and belong to previous encoding units located above and to the left of the current encoding unit. Furthermore, when the line 160 does not pass through neighboring pixels at integer locations but passes between them, the neighboring pixels closest to the line 160 can be used as predictors of the current pixel P. Alternatively, a weighted average value considering the distances between the intersection of the line 160 and the neighboring pixels close to the line 160 can be used as a predictor of the current pixel P. If two pixels meeting the line 160 are present, for example the neighboring pixel A located above the current pixel P and the neighboring pixel B located on the left side of the current pixel P, an average of the pixel values of the neighboring pixels A and B can be used as a predictor of the current pixel P. Alternatively, if the product of the values of the parameters dx and dy is positive, the neighboring pixel A can be used, and if the product is negative, the neighboring pixel B can be used.
[000150] Figures 16B and 16C are reference diagrams for explaining the process of generating a predictor when the line 160 of Figure 16A passes between, not through, neighboring pixels at integer locations.
[000151] Referring to Figure 16B, if the line 160, having an angle of tan^-1(dy/dx) determined according to the (dx, dy) of each mode, passes between a neighboring pixel A 161 and a neighboring pixel B 162 at integer locations, a weighted average value considering the distances between the intersection of the extended line 160 and the neighboring pixels A 161 and B 162 close to the extended line 160 can be used as a predictor, as described above. For example, if the distance between the neighboring pixel A 161 and the intersection of the extended line 160 having the angle of tan^-1(dy/dx) is f, and the distance between the neighboring pixel B 162 and the intersection is g, the predictor of the current pixel P can be obtained as (A*g + B*f)/(f+g). Here, f and g can each be a distance normalized to an integer. In software or hardware, the predictor of the current pixel P can be obtained by a shift operation such as (g*A + f*B + 2) >> 2. As shown in Figure 16B, if the extended line 160 passes through the first quarter closest to the neighboring pixel A 161, among the four parts obtained by dividing the distance between the neighboring pixel A 161 and the neighboring pixel B 162 at integer locations into four, the predictor of the current pixel P can be obtained as (3*A + B)/4. Such an operation can be performed by a shift operation with rounding to the nearest integer, such as (3*A + B + 2) >> 2.
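The quarter-pel weighting of paragraph [000151] reduces to shifts once f + g is normalized to 4; a minimal C sketch under that assumption (the function name is illustrative):

```c
#include <stdint.h>

/* Predictor between two integer-location neighbors A and B, per
 * paragraph [000151]. f is the normalized distance from A to the
 * intersection of the extended line, g the distance from B, with
 * f + g == 4. Inverse-distance weighting with rounding, implemented
 * as a shift instead of a division. */
static inline uint8_t weighted_predictor(uint8_t A, uint8_t B, int f)
{
    int g = 4 - f;
    return (uint8_t)((g * A + f * B + 2) >> 2);
}

/* Example: an intersection in the first quarter next to A (f = 1)
 * gives (3*A + B + 2) >> 2, the rounded form of (3*A + B)/4. */
```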
[000152] Alternatively, if the extended line 160, having the angle of tan^-1(dy/dx) determined according to the (dx, dy) of each mode, passes between the neighboring pixel A 161 and the neighboring pixel B 162 at integer locations, the section between the neighboring pixel A 161 and the neighboring pixel B 162 can be divided into a predetermined number of areas, and a weighted average value considering the distances between the intersection and the neighboring pixels A 161 and B 162 in each divided area can be used as the prediction value. For example, referring to Figure 16C, the section between the neighboring pixel A 161 and the neighboring pixel B 162 can be divided into five sections P1 through P5; a representative weighted average value considering the distances between the intersection and the neighboring pixels A 161 and B 162 in each section can be determined and used as the predictor of the current pixel P. In detail, if the extended line 160 passes through section P1, the value of the neighboring pixel A can be determined as the predictor of the current pixel P. If the extended line 160 passes through section P2, the weighted average value (3*A + 1*B + 2) >> 2, considering the distances between the neighboring pixels A and B and the midpoint of section P2, can be determined as the predictor of the current pixel P. If the extended line 160 passes through section P3, the weighted average value (2*A + 2*B + 2) >> 2, considering the distances between the neighboring pixels A and B and the midpoint of section P3, can be determined as the predictor of the current pixel P. If the extended line 160 passes through section P4, the weighted average value (1*A + 3*B + 2) >> 2, considering the distances between the neighboring pixels A and B and the midpoint of section P4, can be determined as the predictor of the current pixel P. If the extended line 160 passes through section P5, the value of the neighboring pixel B can be determined as the predictor of the current pixel P.
[000153] Furthermore, if two neighboring pixels, that is, the neighboring pixel A on the upper side and the neighboring pixel B on the left side, meet the extended line 160 as shown in Figure 16A, an average value of the neighboring pixels A and B can be used as a predictor of the current pixel; alternatively, if (dx*dy) is positive, the neighboring pixel A on the upper side can be used, and if (dx*dy) is negative, the neighboring pixel B on the left side can be used.
[000154] The intra prediction modes having various directionalities shown in Table 1 can be predetermined on the encoding side and the decoding side, so that only the index of the intra prediction mode of each encoding unit is transmitted.
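The five-section rule of paragraph [000152] might be coded as below (a sketch under assumed names; the section index would be derived from where the extended line crosses the gap between A and B):

```c
#include <stdint.h>

/* Representative weighted predictor for sections P1..P5 of [000152].
 * section is 1..5, chosen by where the extended line intersects the
 * gap between the integer-location neighbors A and B. */
static uint8_t section_predictor(uint8_t A, uint8_t B, int section)
{
    switch (section) {
    case 1:  return A;                                 /* P1: nearest to A */
    case 2:  return (uint8_t)((3*A + 1*B + 2) >> 2);   /* P2 */
    case 3:  return (uint8_t)((2*A + 2*B + 2) >> 2);   /* P3: midpoint */
    case 4:  return (uint8_t)((1*A + 3*B + 2) >> 2);   /* P4 */
    default: return B;                                 /* P5: nearest to B */
    }
}
```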
[000155] Figure 17 is a reference diagram for explaining a bilinear mode, according to an exemplary embodiment.
[000156] Referring to Figure 17, in the bilinear mode, a geometric mean is calculated considering the value of a current pixel P 170 to be predicted in a current luminance component encoding unit, the pixel values on the upper, lower, left and right boundaries of the current luminance component encoding unit, and the distances between the current pixel P 170 and those boundaries. The geometric mean is then used as the predictor of the current pixel P 170. For example, in the bilinear mode, the geometric mean calculated using a virtual pixel A 171 and a virtual pixel B 172 located on the lower and right sides of the current pixel P 170, a pixel D 176 and a pixel E 177 located on the upper and left sides of the current pixel P 170, and the distances between the current pixel P 170 and the upper, lower, left and right boundaries of the current luminance component encoding unit, is used as the predictor of the current pixel P 170. As the bilinear mode is one of the intra prediction modes, neighboring pixels that have already been encoded and restored, belonging to previous luminance component encoding units, are used as the reference pixels for prediction. Thus, pixel values inside the current luminance component encoding unit are not used; instead, virtual pixel values calculated from the neighboring pixels located on the upper and left sides of the current luminance component encoding unit are used as the pixel A 171 and the pixel B 172.
[000157] Specifically, first, the value of a virtual pixel C 173 at the lower-rightmost point of the current luminance component encoding unit is calculated by averaging the value of a neighboring pixel (RightUpPixel) 174 at the upper-rightmost point of the current luminance component encoding unit and the value of a neighboring pixel (LeftDownPixel) 175 at the lower-leftmost point of the current luminance component encoding unit, as shown in Equation 1 below:
Equation 1: C = 0.5 * (LeftDownPixel + RightUpPixel)
[000158] Next, the value of the virtual pixel A 171, located on the lower boundary of the current luminance component encoding unit when the current pixel P 170 is extended downwards, is calculated considering the distance W1 between the current pixel P 170 and the left boundary of the current luminance component encoding unit and the distance W2 between the current pixel P 170 and the right boundary, using Equation 2 below:
Equation 2: A = (C*W1 + LeftDownPixel*W2) / (W1 + W2), or, with rounding, A = (C*W1 + LeftDownPixel*W2 + ((W1 + W2)/2)) / (W1 + W2)
[000159] When the value of W1+W2 in Equation 2 is a power of 2, such as 2^n, A = (C*W1 + LeftDownPixel*W2 + ((W1 + W2)/2)) / (W1 + W2) can be calculated by the shift operation A = (C*W1 + LeftDownPixel*W2 + 2^(n-1)) >> n, without a division.
[000160] Similarly, the value of the virtual pixel B 172, located on the right boundary of the current luminance component encoding unit when the current pixel P 170 is extended in the right direction, is calculated considering the distance h1 between the current pixel P 170 and the upper boundary of the current luminance component encoding unit and the distance h2 between the current pixel P 170 and the lower boundary, using Equation 3 below:
Equation 3: B = (C*h1 + RightUpPixel*h2) / (h1 + h2), or, with rounding, B = (C*h1 + RightUpPixel*h2 + ((h1 + h2)/2)) / (h1 + h2)
[000161] When the value of h1+h2 in Equation 3 is a power of 2, such as 2^m, B = (C*h1 + RightUpPixel*h2 + ((h1 + h2)/2)) / (h1 + h2) can be calculated by the shift operation B = (C*h1 + RightUpPixel*h2 + 2^(m-1)) >> m, without a division.
[000162] When the values of the virtual pixel B 172 on the right boundary and the virtual pixel A 171 on the lower boundary of the current pixel P 170 are determined using Equations 1 to 3, the predictor of the current pixel P 170 can be determined using the average value of A+B+D+E. In detail, a weighted average value considering the distances between the current pixel P 170 and the virtual pixel A 171, the virtual pixel B 172, the pixel D 176 and the pixel E 177, or simply the average value of A+B+D+E, can be used as the predictor of the current pixel P 170.
For example, if a weighted average value is used and the block size is 16x16, the predictor of the current pixel P can be obtained as (h1*A + h2*D + W1*B + W2*E + 16) >> 5. Such bilinear prediction is applied to all pixels of the current encoding unit, and a prediction encoding unit of the current encoding unit in the bilinear prediction mode is generated.
[000163] According to an exemplary embodiment, prediction encoding is performed according to various intra prediction modes determined according to the size of the luminance component encoding unit, thereby enabling efficient video compression based on the characteristics of an image.
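Putting Equations 1 to 3 and the 16x16 weighted average together, the bilinear predictor might look as follows in C (a hedged sketch: the 0-based coordinate conventions, distance definitions and helper names are assumptions, not the specification's notation):

```c
#include <stdint.h>

/* Sketch of the bilinear mode for an NxN block (N a power of 2),
 * following Equations 1-3 and the form (h1*A + h2*D + W1*B + W2*E + 16) >> 5
 * given for N = 16. up[0..N-1] and left[0..N-1] are the reconstructed
 * neighbors above and to the left; up[x] plays the role of D, left[y] of E. */
static uint8_t bilinear_predict(const uint8_t *up, const uint8_t *left,
                                int x, int y, int N, int log2N)
{
    int RightUp  = up[N - 1];                 /* neighboring pixel 174 */
    int LeftDown = left[N - 1];               /* neighboring pixel 175 */
    int C = (LeftDown + RightUp + 1) >> 1;    /* Equation 1 */

    int W1 = x + 1, W2 = N - 1 - x;  /* distances to left/right boundaries (assumed) */
    int h1 = y + 1, h2 = N - 1 - y;  /* distances to upper/lower boundaries (assumed) */

    /* Equations 2 and 3 with the divisions replaced by shifts,
     * since W1 + W2 == h1 + h2 == N == 2^log2N */
    int A = (C * W1 + LeftDown * W2 + (N >> 1)) >> log2N;  /* virtual lower pixel */
    int B = (C * h1 + RightUp  * h2 + (N >> 1)) >> log2N;  /* virtual right pixel */

    int D = up[x], E = left[y];
    /* the four weights sum to 2N, so divide by 2N with rounding;
     * for N = 16 this is exactly (... + 16) >> 5 */
    return (uint8_t)((h1 * A + h2 * D + W1 * B + W2 * E + N) >> (log2N + 1));
}
```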
[000164] As a greater number of intra prediction modes is used than in a conventional codec, according to the size of the encoding unit according to an exemplary embodiment, compatibility with conventional codecs can become a problem. In a conventional technique, at most 9 intra prediction modes can be used, as shown in Figures 14A and 14B. Consequently, it is necessary to map the intra prediction modes having various directions, selected according to an exemplary embodiment, to one of a smaller number of intra prediction modes. That is, when the number of intra prediction modes available to a current encoding unit is N1 (N1 is an integer), to make the intra prediction modes available to the current encoding unit compatible with an encoding unit of a predetermined size having N2 intra prediction modes (N2 is an integer different from N1), the intra prediction modes of the current encoding unit can be mapped to the intra prediction mode having the most similar direction among the N2 intra prediction modes. For example, suppose that a total of 33 intra prediction modes are available in the current encoding unit, as shown in Table 1, and that the intra prediction mode finally applied to the current encoding unit is mode 14, i.e. (dx,dy) = (4,3), having a directivity of tan^-1(3/4) ≈ 36.87 (degrees). In this case, to match the intra prediction mode applied to the current block to one of the 9 intra prediction modes shown in Figures 14A and 14B, mode 4 (the diagonal down-right mode), having the directivity most similar to 36.87 (degrees), can be selected. That is, mode 14 of Table 1 can be mapped to mode 4 shown in Figure 14B. Similarly, if the intra prediction mode applied to the current encoding unit is selected to be mode 15 of the 33 available prediction modes of Table 1, i.e. (dx,dy) = (1,11), then, since the directivity of the intra prediction mode applied to the current encoding unit is tan^-1(11) ≈ 84.80 (degrees), mode 0 (the vertical mode) of Figure 14B, having the directivity most similar to 84.80 (degrees), can be mapped to mode 15.
[000165] However, to decode a luminance component encoding unit encoded via intra prediction, prediction mode information is required in order to determine which intra prediction mode was used to encode the current luminance component encoding unit. Consequently, the intra prediction mode information of the current luminance component encoding unit is added to a bit stream when encoding an image. Here, the overhead may increase, thereby decreasing compression efficiency, if the intra prediction mode information of each luminance component encoding unit is added to the bit stream as-is.
[000166] Therefore, according to an exemplary embodiment, instead of transmitting the intra prediction mode information of the current luminance component encoding unit, determined as a result of encoding the current luminance component encoding unit, only the difference value between the actual value of the intra prediction mode and the prediction value of the intra prediction mode, predicted from neighboring luminance component encoding units, is transmitted.
[000167] Figure 18 is a diagram for explaining the process of generating the prediction value of an intra prediction mode of a current luminance component encoding unit A 180, according to an exemplary embodiment.
[000168] Referring to Figure 18, the intra prediction mode of the current luminance component encoding unit A 180 can be predicted from the intra prediction modes determined in neighboring luminance component encoding units. For example, when the intra prediction mode of a left luminance component encoding unit B 181 is mode 3 and the intra prediction mode of an upper luminance component encoding unit C 182 is mode 4, the intra prediction mode of the current luminance component encoding unit A 180 can be predicted to be mode 3, the smaller value among the intra prediction modes of the upper luminance component encoding unit C 182 and the left luminance component encoding unit B 181. If the intra prediction mode determined as the result of actually performing intra prediction encoding on the current luminance component encoding unit A 180 is mode 4, only 1, i.e. the difference value from mode 3, the intra prediction mode predicted from the neighboring luminance component encoding units, is transmitted as the intra prediction mode information. The prediction value of the intra prediction mode of a current luminance component decoding unit is generated in the same way during decoding, the difference value received in the bit stream is added to that prediction value, and the intra prediction mode information actually applied to the current luminance component decoding unit is thereby obtained. In the above description, only the upper and left neighboring encoding units C 182 and B 181 of the current luminance component encoding unit A 180 are used, but alternatively the intra prediction mode of the current luminance component encoding unit A can be predicted using other neighboring luminance component encoding units, such as E and D of Figure 18. The intra prediction mode of the luminance component encoding unit can be used to predict the intra prediction mode of a chrominance component encoding unit, as will be described later.
[000169] However, since the intra prediction modes actually performed differ according to the sizes of the luminance component encoding units, an intra prediction mode predicted from neighboring luminance component encoding units may not match the intra prediction mode of the current luminance component encoding unit. Accordingly, to predict the intra prediction mode of the current luminance component encoding unit from neighboring luminance component encoding units having different sizes, a mapping process that reconciles the different intra prediction modes of the luminance component encoding units is required.
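A sketch of the mode-difference signaling of paragraphs [000166] to [000168], assuming, as in the example, that the predictor is the smaller of the left and upper neighbors' mode values (function names are illustrative):

```c
/* Predict the intra mode of the current unit from its neighbors,
 * per [000168]: the smaller mode value of the left and upper units. */
static int predict_intra_mode(int left_mode, int up_mode)
{
    return left_mode < up_mode ? left_mode : up_mode;
}

/* Encoder side: with neighbors at modes 3 and 4 and mode 4 actually
 * chosen, only diff = 4 - 3 = 1 is written to the bit stream. */
int encode_mode_diff(int actual_mode, int left_mode, int up_mode)
{
    return actual_mode - predict_intra_mode(left_mode, up_mode);
}

/* Decoder side: the same predictor plus the received difference
 * recovers the mode actually applied. */
int decode_mode(int diff, int left_mode, int up_mode)
{
    return predict_intra_mode(left_mode, up_mode) + diff;
}
```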
[000170] Figures 19A and 19B are reference diagrams for explaining the process of mapping intra prediction modes between luminance component encoding units having different sizes, according to an exemplary embodiment.
[000171] Referring to Figure 19A, the current luminance component encoding unit A 190 has a size of 16x16, the left luminance component encoding unit B 191 has a size of 8x8, and the upper luminance component encoding unit C 192 has a size of 4x4. Furthermore, as described with reference to Figure 13, the numbers of intra prediction modes usable in luminance component encoding units having the sizes of 4x4, 8x8 and 16x16 are 9, 9 and 33, respectively. Here, since the intra prediction modes usable in the left luminance component encoding unit B 191 and the upper luminance component encoding unit C 192 differ from those usable in the current luminance component encoding unit A 190, an intra prediction mode predicted from the left and upper luminance component encoding units B 191 and C 192 may not be suitable for use as the prediction value of the intra prediction mode of the current luminance component encoding unit A 190. Consequently, in the current exemplary embodiment, the intra prediction modes of the left and upper luminance component encoding units B 191 and C 192 are respectively changed to first and second representative intra prediction modes having the most similar direction among a predetermined number of representative intra prediction modes, and the one of the first and second representative intra prediction modes having the smaller mode value is selected as the final representative intra prediction mode. Then, the intra prediction mode having the direction most similar to the final representative intra prediction mode is selected, from among the intra prediction modes usable in the current luminance component encoding unit A 190, as the prediction value of the intra prediction mode of the current luminance component encoding unit A 190.
[000172] Alternatively, referring to Figure 19B, assume that the current luminance component encoding unit A has a size of 16x16, the left luminance component encoding unit B has a size of 32x32, and the upper luminance component encoding unit C has a size of 8x8. Furthermore, the numbers of available intra prediction modes of luminance component encoding units having the sizes of 8x8, 16x16 and 32x32 are assumed to be 9, 9 and 33, respectively. Also, assume that the intra prediction mode of the left luminance component encoding unit B is mode 31 and the intra prediction mode of the upper luminance component encoding unit C is mode 4. In this case, since the intra prediction modes of the left luminance component encoding unit B and the upper luminance component encoding unit C are not compatible with each other, each of them is mapped to one of the representative intra prediction modes shown in Figure 20. Since mode 31, the intra prediction mode of the left luminance component encoding unit B, has a directivity of (dx,dy) = (4,-3) as shown in Table 1, mode 5, having the directivity most similar to tan^-1(-3/4), is mapped from among the representative intra prediction modes of Figure 20; and since mode 4, the intra prediction mode of the upper luminance component encoding unit C, has the same directivity as mode 4 among the representative intra prediction modes of Figure 20, mode 4 is mapped.
[000173] Mode 4, having the smaller mode value between mode 5, the mapped intra prediction mode of the left luminance component encoding unit B, and mode 4, the mapped intra prediction mode of the upper luminance component encoding unit C, can be determined as the prediction value of the intra prediction mode of the current luminance component encoding unit, and only the mode difference value between the actual intra prediction mode and the predicted intra prediction mode of the current luminance component encoding unit is encoded as the prediction mode information of the current luminance component encoding unit.
[000174] Figure 20 is a reference diagram for explaining the process of mapping an intra prediction mode of a neighboring luminance component encoding unit to one of the representative intra prediction modes. In Figure 20, a vertical mode 0, a horizontal mode 1, a DC mode 2, a diagonal down-left mode 3, a diagonal down-right mode 4, a vertical-right mode 5, a horizontal-down mode 6, a vertical-left mode 7, and a horizontal-up mode 8 are shown as the representative intra prediction modes. However, the representative intra prediction modes are not limited to these, and can be set to have various directionalities.
[000175] Referring to Figure 20, a predetermined number of representative intra prediction modes is established, and the intra prediction mode of a neighboring luminance component encoding unit is mapped to the representative intra prediction mode having the most similar direction. For example, when the intra prediction mode of the upper luminance component encoding unit has the directionality indicated by MODE_A 200, the intra prediction mode MODE_A 200 of the upper luminance component encoding unit is mapped to mode 1, which has the most similar direction among the predetermined representative intra prediction modes 1 through 9. Similarly, when the intra prediction mode of the left luminance component encoding unit has the directionality indicated by MODE_B 201, the intra prediction mode MODE_B 201 of the left luminance component encoding unit is mapped to mode 5, which has the most similar direction among the predetermined representative intra prediction modes 1 through 9.
[000176] Then, the one of the first and second representative intra prediction modes having the smaller mode value is selected as the final representative intra prediction mode of the neighboring luminance component encoding units. A representative intra prediction mode having a smaller mode value is chosen because smaller mode values are generally assigned to the more frequently occurring intra prediction modes. In other words, when different intra prediction modes are predicted from neighboring luminance component encoding units, the intra prediction mode having the smaller mode value is more likely to occur. Accordingly, when different intra prediction modes compete with each other, the intra prediction mode having the smaller mode value can be selected as the predictor of the intra prediction mode of the current luminance component encoding unit.
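One way to realize the "most similar direction" mapping of paragraphs [000164] and [000175] is a direct angle comparison; a hedged C sketch follows (the angle assigned to each representative mode and the tie handling are assumptions, not values from the specification):

```c
#include <math.h>

/* Assumed prediction angles, in degrees, for the representative modes
 * of Figure 20 (mode 2, DC, has no directivity and is skipped). */
static const double rep_angle_deg[9] = {
    90.0,   /* 0: vertical */
    0.0,    /* 1: horizontal */
    0.0,    /* 2: DC (unused) */
    -45.0,  /* 3: diagonal down-left (assumed convention) */
    45.0,   /* 4: diagonal down-right */
    67.5,   /* 5: vertical-right */
    22.5,   /* 6: horizontal-down */
    112.5,  /* 7: vertical-left */
    -22.5   /* 8: horizontal-up */
};

/* Map a directional mode given by (dx, dy) to the representative mode
 * whose angle is closest. E.g. (4,3) -> 36.87 degrees -> mode 4, and
 * (1,11) -> 84.80 degrees -> mode 0, matching paragraph [000164]. */
int map_to_representative(int dx, int dy)
{
    const double PI = 3.14159265358979323846;
    double angle = atan2((double)dy, (double)dx) * 180.0 / PI;
    int best = 0;
    double best_diff = 1e9;
    for (int m = 0; m < 9; m++) {
        if (m == 2) continue;              /* DC has no directivity */
        double d = fabs(rep_angle_deg[m] - angle);
        if (d < best_diff) { best_diff = d; best = m; }
    }
    return best;
}
```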
[000177] Even when a representative intra prediction mode is selected based on the neighboring luminance component encoding units, the selected representative intra prediction mode may not be directly usable as the predictor of the intra prediction mode of the current luminance component encoding unit. If the current luminance component encoding unit A 190 has 33 intra prediction modes while the number of representative intra prediction modes is 9, as described above, an intra prediction mode of the current luminance component encoding unit A 190 that corresponds to the selected representative intra prediction mode may not exist. In this case, similarly to the mapping of an intra prediction mode of a neighboring luminance component encoding unit to a representative intra prediction mode described above, the intra prediction mode having the direction most similar to the selected representative intra prediction mode, among the intra prediction modes available according to the size of the current luminance component encoding unit, can be finally selected as the predictor of the intra prediction mode of the current luminance component encoding unit. For example, when the representative intra prediction mode finally selected based on the neighboring luminance component encoding units of Figure 20 is mode 1, the intra prediction mode having the directionality most similar to that of mode 1 is selected, from among the intra prediction modes usable according to the size of the current luminance component encoding unit, as the predictor of the intra prediction mode of the current luminance component encoding unit.
[000178] However, as described with reference to Figures 16A through 16C, if a predictor of the current pixel P is generated using neighboring pixels on or close to the extended line 160, the extended line 160 actually has a directivity of tan^-1(dy/dx). Calculating this directivity requires the division (dy/dx), which produces values down to decimal places in hardware or software, thereby increasing the amount of computation. Consequently, a process of setting dx and dy is used to reduce the amount of computation when the prediction direction for selecting the neighboring pixels to be used as reference pixels for a pixel in an encoding unit is set using the dx and dy parameters, in a manner similar to that described with reference to Table 1.
[000179] Figure 25 is a diagram for explaining the relationship between a current pixel and the neighboring pixels located on an extended line having a directivity of (dy/dx), according to an exemplary embodiment.
[000180] Referring to Figure 25, it is assumed that the location of the current pixel P is P(j,i), and that the upper neighboring pixel and the left neighboring pixel located on an extended line 2510, which has a directivity, i.e. a gradient, of tan^-1(dy/dx) and passes through the current pixel P, are A and B, respectively. When the locations of the upper neighboring pixels are assumed to correspond to the x-axis of a coordinate plane, and the locations of the left neighboring pixels to the y-axis, the upper neighboring pixel A is located at (j + i*dx/dy, 0), and the left neighboring pixel B is located at (0, i + j*dy/dx). Therefore, determining either the upper neighboring pixel A or the left neighboring pixel B for predicting the current pixel P requires a division, such as dx/dy or dy/dx. Such a division is very complex, as described above, and thus reduces the calculation speed in software or hardware.
[000181] Consequently, the value of at least one of dx and dy, which represent the directivity of a prediction mode for determining the neighboring pixels, can be determined to be a power of 2. That is, when n and m are integers, dx and dy can be 2^n and 2^m, respectively.
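As the following paragraphs spell out, fixing dx or dy to a power of two turns the offending division into a shift; a minimal C sketch under that assumption (variable names follow the conventions of Figure 25):

```c
/* Reference-neighbor locations for the current pixel P(j, i) on a line
 * of gradient dy/dx, per [000180]-[000183]. With dy = 2^m the upper
 * neighbor's offset i*dx/dy becomes a shift; with dx = 2^n the left
 * neighbor's offset j*dy/dx does. */

/* x-coordinate of the upper neighbor A at (j + i*dx/dy, 0), assuming dy = 2^m */
static inline int upper_neighbor_x(int j, int i, int dx, int m)
{
    return j + ((i * dx) >> m);    /* (i*dx)/(2^m) without a division */
}

/* y-coordinate of the left neighbor B at (0, i + j*dy/dx), assuming dx = 2^n */
static inline int left_neighbor_y(int j, int i, int dy, int n)
{
    return i + ((j * dy) >> n);    /* (j*dy)/(2^n) without a division */
}
```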
[000182] Referring to Figure 25, if the left neighboring pixel B is used as the predictor of the current pixel P and dx has a value of 2^n, the computation j*dy/dx needed to determine (0, i + j*dy/dx), the location of the left neighboring pixel B, becomes (j*dy)/(2^n), and a division by such a power of 2 is easily obtained through the shift operation (j*dy) >> n, thus reducing the amount of computation.
[000183] Similarly, if the upper neighboring pixel A is used as the predictor of the current pixel P and dy has a value of 2^m, the computation i*dx/dy needed to determine (j + i*dx/dy, 0), the location of the upper neighboring pixel A, becomes (i*dx)/(2^m), and a division by such a power of 2 is easily obtained through the shift operation (i*dx) >> m.
[000184] Figure 26 is a diagram for explaining the change of the neighboring pixel located on an extended line having a directivity of (dx,dy) according to the location of the current pixel, according to an exemplary embodiment.
[000185] As the neighboring pixel required for prediction according to the location of the current pixel, either the upper neighboring pixel or the left neighboring pixel is selected.
[000186] Referring to Figure 26, when the current pixel is P(j,i) 2610 and is predicted using a neighboring pixel located in the prediction direction, the upper pixel A is used to predict the current pixel P 2610. When the current pixel is Q(b,a) 2620, the left pixel B is used to predict the current pixel Q 2620.
[000187] If only the dy component, in the y-axis direction, of the (dx,dy) pair representing the prediction direction is a power of 2 such as 2^m, then while the upper pixel A in Figure 26 can be determined by a shift operation without a division, such as (j + (i*dx) >> m, 0), the left pixel B requires a division, such as (0, a + b*2^m/dx). Consequently, to exclude division when generating the predictors for all pixels of a current block, both dx and dy can be set to powers of 2.
[000188] Figures 27 and 28 are diagrams for explaining a method of determining an intra prediction mode direction, according to exemplary embodiments.
[000189] In general, there are many cases in which the linear patterns shown in an image or video signal are vertical or horizontal. Consequently, when intra prediction modes having various directivities are defined using the dx and dy parameters, image coding efficiency can be improved by setting the values of dx and dy as follows.
[000190] In detail, if dy has a fixed value of 2^m, the absolute value of dx can be set so that the distance between prediction directions close to the vertical direction is narrow, and the distance between prediction modes closer to the horizontal direction is wider. For example, referring to Figure 27, if dy has a value of 2^4, i.e. 16, the value of dx can be set to 1, 2, 3, 4, 6, 9, 12, 16, 0, -1, -2, -3, -4, -6, -9, -12 and -16, so that the distance between prediction directions close to the vertical direction is narrow and the distance between prediction modes closer to the horizontal direction is wider.
[000191] Similarly, if dx has a fixed value of 2^n, the absolute value of dy can be set so that the distance between prediction directions close to the horizontal direction is narrow and the distance between prediction modes closer to the vertical direction is wider. For example, referring to Figure 28,
if dx has a value of 2^4, i.e. 16, the value of dy can be set to 1, 2, 3, 4, 6, 9, 12, 16, 0, -1, -2, -3, -4, -6, -9, -12 and -16, so that the distance between prediction directions close to the horizontal direction is narrow and the distance between prediction modes closer to the vertical direction is wider.
[000192] Also, when one of the values dx and dy is fixed, the remaining value can be set to increase according to the prediction mode. For example, if dy is fixed, the step between values of dx can be set to increase by a predetermined amount. Furthermore, the angle between the horizontal direction and the vertical direction can be divided into predetermined units, and such an increment can be set for each of the divided angles. For example, if dy is fixed, the value of dx can be set to have an increment of a in the section within 15 degrees, an increment of b in the section between 15 and 30 degrees, and an increment of c in the section above 30 degrees; in this case, to have the shape shown in Figure 25, the values can be set to satisfy the relationship a < b < c.
[000193] For example, the prediction modes described with reference to Figures 25 through 28 can be defined as prediction modes having a directivity of tan^-1(dy/dx) using (dx,dy), as shown in Tables 2 through 4.
[000194] For example, referring to Table 2, a prediction mode has a directionality of tan^-1(dy/dx) using (dx,dy) represented as one of (-32, 32), (-26, 32), (-21, 32), (-17, 32), (-13, 32), (-9, 32), (-5, 32), (-2, 32), (0, 32), (2, 32), (5, 32), (9, 32), (13, 32), (17, 32), (21, 32), (26, 32), (32, 32), (32, -26), (32, -21), (32, -17), (32, -13), (32, -9), (32, -5), (32, -2), (32, 0), (32, 2), (32, 5), (32, 9), (32, 13), (32, 17), (32, 21), (32, 26) and (32, 32).
[000195] Figure 21 is a diagram for explaining the candidate intra prediction modes applied to a chrominance component encoding unit, according to an exemplary embodiment.
[000196] Referring to Figure 21, the candidate intra prediction modes applied when performing intra prediction of a chrominance component encoding unit include a vertical mode, a horizontal mode, a DC mode, a planar mode, and the intra prediction mode finally determined for the luminance component encoding unit corresponding to the current chrominance component encoding unit, as described above. Furthermore, as described above, a luminance component encoding unit and a chrominance component encoding unit that are intra predicted can constitute one of the picture signals having the color formats 4:2:0, 4:2:2 and 4:4:4 defined in a YCbCr (or YUV) color domain. The intra prediction mode having the minimum cost among a plurality of usable intra prediction modes is selected as the intra prediction mode of the luminance component encoding unit, based on a cost calculation such as a rate-distortion (R-D) cost. The costs of the candidate intra prediction modes are each calculated, and the candidate intra prediction mode having the minimum cost is selected as the final intra prediction mode of the chrominance component encoding unit.
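A sketch of the chrominance mode decision of paragraph [000196]: build the candidate list, including the mode already determined for the corresponding luminance unit, and keep the cheapest. The cost function is left abstract and all names are assumptions:

```c
enum { MODE_VERTICAL = 0, MODE_HORIZONTAL = 1, MODE_DC = 2, MODE_PLANAR = 3 };

/* Cost of encoding the chrominance unit with a given mode
 * (e.g. an R-D cost); the implementation is omitted here. */
extern double chroma_mode_cost(const void *chroma_cu, int mode);

/* Chrominance intra mode per [000196]: the four fixed candidates plus
 * the mode finally determined for the co-located luminance unit. */
int select_chroma_mode(const void *chroma_cu, int luma_final_mode)
{
    int candidates[5] = { MODE_VERTICAL, MODE_HORIZONTAL,
                          MODE_DC, MODE_PLANAR, luma_final_mode };
    int best = candidates[0];
    double best_cost = chroma_mode_cost(chroma_cu, best);
    for (int k = 1; k < 5; k++) {
        double c = chroma_mode_cost(chroma_cu, candidates[k]);
        if (c < best_cost) { best_cost = c; best = candidates[k]; }
    }
    return best;
}
```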
[000197] Figure 22 is a block diagram of an intra prediction equipment 2200 of an image, according to an exemplary embodiment. The intra prediction equipment 2200 according to the current exemplary embodiment can operate as the intra predictor 410 of the image encoder 400 of Figure 4 and as the intra predictor 550 of the image decoder 500 of Figure 5.
[000198] Referring to Figure 22, the intra prediction equipment 2200 includes a luminance intra predictor 2210 and a chrominance intra predictor 2220. As described above, the luminance intra predictor 2210 selects the candidate intra prediction modes to be applied according to the size of the current luminance component encoding unit, based on the size of each luminance component encoding unit divided according to a maximum encoding unit and a maximum depth, and applies the determined candidate intra prediction modes to the current luminance component encoding unit in order to perform intra prediction on the current luminance component encoding unit. The luminance intra predictor 2210 determines the optimal intra prediction mode having the minimum cost as the final intra prediction mode of the current luminance component encoding unit, based on costs according to the error values between the prediction encoding units generated through intra prediction and the original luminance component encoding unit.
[000199] The chrominance intra predictor 2220 calculates the costs according to a vertical mode, a horizontal mode, a DC mode, a planar mode, and the final intra prediction mode of the luminance component encoding unit corresponding to the current chrominance component encoding unit, and determines the intra prediction mode having the minimum cost as the final intra prediction mode of the current chrominance component encoding unit.
[000200] Meanwhile, when the intra prediction equipment 2200 of Figure 22 is applied to decoding equipment, the sizes of the current luminance and chrominance component decoding units are determined using a maximum encoding unit and a depth constituting the hierarchical division information of the maximum encoding unit, extracted from a bit stream by the entropy decoder 520 of Figure 5, and the intra prediction mode to be performed is determined using the information about the intra prediction mode applied to the current luminance and chrominance component decoding units. Furthermore, the intra prediction equipment 2200 generates a prediction decoding unit by performing intra prediction on each of the luminance and chrominance component decoding units according to the extracted intra prediction mode. The prediction decoding unit is added to the residual data restored from the bit stream, and the current luminance and chrominance component decoding units are thereby decoded.
[000201] Figure 23 is a flowchart illustrating a method of determining an intra prediction mode of an encoding unit, according to an exemplary embodiment.
[000202] Referring to Figure 23, a current image of a luminance component is divided into at least one luminance component encoding unit based on a maximum encoding unit and a depth constituting the hierarchical division information of the maximum encoding unit, in operation 2310.
[000203] In operation 2320, an intra prediction mode of the luminance component encoding unit is determined. As described above, the intra prediction mode of the luminance component encoding unit is determined by selecting the candidate intra prediction modes to be applied based on the size of the luminance component encoding unit, performing intra prediction on the luminance component encoding unit by applying the candidate intra prediction modes to it, and then determining the optimal intra prediction mode having the minimum cost as the intra prediction mode of the luminance component encoding unit.
[000204] In operation 2330, the candidate intra prediction modes of a chrominance component encoding unit, which include the determined intra prediction mode of the luminance component encoding unit, are determined. As described above, the candidate intra prediction modes applied to the chrominance component encoding unit include, in addition to the determined intra prediction mode of the luminance component encoding unit, a vertical mode, a horizontal mode, a DC mode, and a planar mode.
[000205] In operation 2340, the costs of the chrominance component encoding unit according to the determined candidate intra prediction modes are compared in order to determine the intra prediction mode having the minimum cost.
[000206] Figure 24 is a flowchart illustrating a method of determining an intra prediction mode of a decoding unit, according to an exemplary embodiment.
[000207] Referring to Figure 24, a maximum encoding unit and a depth constituting the hierarchical division information of the maximum encoding unit are extracted from a bit stream, in operation 2410.
[000208] In operation 2420, the current image to be decoded is divided into a luminance component decoding unit and a chrominance component decoding unit based on the extracted maximum encoding unit and depth.
[000209] In operation 2430, information about the intra prediction modes applied to the luminance and chrominance component decoding units is extracted from the bit stream.
[000210] In operation 2440, intra prediction is performed on the luminance and chrominance component decoding units according to the extracted intra prediction modes, thereby decoding the luminance and chrominance component decoding units.
[000211] According to exemplary embodiments, by adding the intra prediction mode of the luminance component encoding unit having diverse directionality as a candidate intra prediction mode of the chrominance component encoding unit, the prediction efficiency of the image of a chrominance component, and thus the prediction efficiency of the entire image, can be increased without increasing the throughput.
[000212] Exemplary embodiments may be embodied as computer programs and implemented on general-purpose digital computers that execute the programs using a computer-readable recording medium. Examples of computer-readable recording media include magnetic storage media (e.g. ROM, floppy disks, hard disks, etc.), optical recording media (e.g. CD-ROMs or DVDs), and other storage media.
[000213] Equipment according to exemplary embodiments may include a bus coupled to each unit of the equipment or encoder, at least one processor connected to the bus for executing commands, and a memory connected to the bus for storing the commands and the received and generated messages.
[000214] While this invention has been particularly shown and described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within that scope will be construed as being included in the present invention.
Claims (1)
[0001] 1. METHOD OF DECODING AN IMAGE, the method characterized in that it comprises: dividing a current image into a plurality of maximum encoding units, each maximum encoding unit having a maximum size according to information about a maximum size of an encoding unit, wherein a maximum encoding unit is divided into a plurality of encoding units using depth information that indicates the number of times an encoding unit is spatially divided from the maximum encoding unit; dividing a maximum luminance component encoding unit and a maximum chrominance component encoding unit of the current image into one or more luminance component encoding units and one or more chrominance component encoding units, respectively, according to split information; extracting first intra prediction mode information that indicates an intra prediction mode applied to a luminance component prediction unit included in a luminance component encoding unit; extracting second intra prediction mode information that indicates an intra prediction mode applied to a chrominance component prediction unit corresponding to the luminance component prediction unit; and performing intra prediction on the luminance component prediction unit based on the first intra prediction mode information of the luminance component prediction unit, and performing intra prediction on the chrominance component prediction unit based on the second intra prediction mode information, wherein the intra prediction mode indicates a specific direction among a plurality of directions, the specific direction being indicated by a number dx in a horizontal direction and a fixed number in a vertical direction, or by a number dy in the vertical direction and a fixed number in the horizontal direction, wherein the number dx and the number dy are one value selected from {32, 26, 21, 17, 13, 9, 5, 2, 0, -2, -5, -9, -13, -17, -21, -26} according to the specific direction, and wherein, when the second intra prediction mode information indicates that the intra prediction mode of the chrominance component prediction unit is equal to the intra prediction mode of the luminance component prediction unit, the intra prediction mode of the chrominance component prediction unit is determined to be equal to the intra prediction mode of the luminance component prediction unit.
US20150016516A1|2013-07-15|2015-01-15|Samsung Electronics Co., Ltd.|Method for intra prediction improvements for oblique modes in video coding| US9392288B2|2013-10-17|2016-07-12|Google Inc.|Video coding using scatter-based scan tables| US9179151B2|2013-10-18|2015-11-03|Google Inc.|Spatial proximity context entropy coding| KR101519557B1|2013-12-27|2015-05-13|연세대학교 산학협력단|Apparatus and method for fast Intra Prediction Algorithm| WO2015100731A1|2014-01-03|2015-07-09|Mediatek Singapore Pte. Ltd.|Methods for determining the prediction partitions| EP3120556B1|2014-03-17|2021-01-13|Microsoft Technology Licensing, LLC|Encoder-side decisions for screen content encoding| JP6330507B2|2014-06-19|2018-05-30|ソニー株式会社|Image processing apparatus and image processing method| WO2015200822A1|2014-06-26|2015-12-30|Huawei Technologies Co., Ltd|Method and device for reducing a computational load in high efficiency video coding| CN105812795B|2014-12-31|2019-02-12|浙江大华技术股份有限公司|A kind of determination method and apparatus of the coding mode of maximum coding unit| WO2016123792A1|2015-02-06|2016-08-11|Microsoft Technology Licensing, Llc|Skipping evaluation stages during media encoding| CN104853192B|2015-05-08|2018-02-13|腾讯科技(深圳)有限公司|Predicting mode selecting method and device| US10038917B2|2015-06-12|2018-07-31|Microsoft Technology Licensing, Llc|Search strategies for intra-picture prediction modes| US10009620B2|2015-06-22|2018-06-26|Cisco Technology, Inc.|Combined coding of split information and other block-level parameters for video coding/decoding| US10003807B2|2015-06-22|2018-06-19|Cisco Technology, Inc.|Block-based video coding using a mixture of square and rectangular blocks| US10136132B2|2015-07-21|2018-11-20|Microsoft Technology Licensing, Llc|Adaptive skip or zero block detection combined with transform size decision| CN107306353B|2016-04-19|2020-05-01|广州市动景计算机科技有限公司|Image space prediction mode selection method and device, and image compression method and device| EP3306922A1|2016-10-05|2018-04-11|Thomson Licensing|Method and apparatus for encoding a picture using rate-distortion based block splitting| EP3316578A1|2016-10-25|2018-05-02|Thomson Licensing|Method and apparatus for encoding and decoding a picture| WO2018124686A1|2016-12-26|2018-07-05|에스케이텔레콤 주식회사|Image encoding and decoding using intra prediction| WO2018131838A1|2017-01-11|2018-07-19|엘지전자 주식회사|Method and device for image decoding according to intra-prediction in image coding system| CN107071417B|2017-04-10|2019-07-02|电子科技大学|A kind of intra-frame prediction method for Video coding| CN108737819B|2018-05-20|2021-06-11|北京工业大学|Flexible coding unit partitioning method based on quadtree binary tree structure| CN113170126A|2018-11-08|2021-07-23|Oppo广东移动通信有限公司|Video signal encoding/decoding method and apparatus for the same| CN112889286A|2018-12-21|2021-06-01|华为技术有限公司|Method and apparatus for mode-dependent and size-dependent block-level limiting of position-dependent prediction combinations| WO2020135206A1|2018-12-29|2020-07-02|Zhejiang Dahua Technology Co., Ltd.|Systems and methods for intra prediction| CN110213586B|2019-06-10|2021-03-05|杭州电子科技大学|VVC intra-frame prediction angle mode rapid selection method|
Legal Status:
| Date | Code | Event |
| --- | --- | --- |
| 2018-03-27 | B15K | Others concerning applications: alteration of classification. IPC: H04N 19/159 (2014.01), H04N 19/105 (2014.01), H04N … |
| 2018-12-26 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette] |
| 2020-05-19 | B06U | Preliminary requirement: requests with searches performed by other patent offices; procedure suspended [chapter 6.21 patent gazette] |
| 2021-11-30 | B09A | Decision: intention to grant [chapter 9.1 patent gazette] |
| 2022-02-15 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]. Free format text: term of validity of 20 (twenty) years counted from 05/04/2011, subject to the legal conditions; patent granted in accordance with ADI 5,529/DF, which determines the change of the grant term. |
Priority:
| Application No. | Publication No. | Filing date | Patent title |
| --- | --- | --- | --- |
| KR10-2010-0031145 | — | 2010-04-05 | — |
| KR1020100031145A | KR101503269B1 | 2010-04-05 | Method and apparatus for determining intra prediction mode of image coding unit, and method and apparatus for determining intra prediction mode of image decoding unit |
| PCT/KR2011/002375 | WO2011126275A2 | 2010-04-05 | Determining intra prediction mode of image coding unit and image decoding unit |