Video encoding and decoding equipment and method
Patent Abstract:
According to the video encoding and decoding equipment and methods of the present invention, an image decoding apparatus is provided. The image decoding apparatus includes: a prediction block generation unit that generates a prediction block by performing intra prediction on a current block; a filter unit that generates a final prediction block by performing filtering on a filtering target pixel in the prediction block based on an intra prediction mode of the current block; and a reconstructed block generation unit that generates a reconstructed block based on the final prediction block and a reconstructed residual block corresponding to the current block, characterized in that the filtering target pixel is a prediction pixel included in a filtering target region in the prediction block, and a filter type applied to the filtering target pixel and the filtering target region are determined based on the intra prediction mode of the current block.

Publication number: BR112013021229A2
Application number: BR112013021229
Filing date: 2012-06-20
Publication date: 2019-08-13
Inventors: Yong Kim Hui; Ho Lee Jin; Soo Choi Jin; Woong Kim Jin; Chang Lim Sung
Applicant: Electronics & Telecommunications Res Inst
Main IPC class:
Patent Description:
"Video Encoding and Decoding Equipment and Methods" Report (Descriptive Technical Field The present invention relates to image processing and, more particularly, to an intra prediction equipment and method. Background Technique Recently, according to the expansion of fusion radiodi10 services having high definition resolution (HD | in the country and in the world, many users are accustomed to a high resolution and definition image, such that many organizations have been trying to develop video devices addition, as interest in HDTV and ultra high definition (UH D) having a resolution four times 15 times higher than that of HDTV has increased, a compression technology for a higher resolution and higher definition image has been required ... For image compression, a prediction technology for predicted pixel values included in a current image of an image before and / or after the current image, an intra prediction technology for predicting pixel values included in a current image using pixel information in the current image, an entropy coding technology for allocating a short code to symbols having a high frequency of appearance and a long code for symbols having a low frequency of appearance, or the like, can be used. Revelation Technical problem The present invention provides an image encoding mechanism and method, capable of improving image encoding / decoding efficiency. The present invention also provides an image decoding mechanism and method capable of improving the image encoding / decoding efficiency. The present invention also provides a prediction block generation mechanism and method capable of improving the coding / coding efficiency of an age. The invention presence also provides a mechanism and method of 1.5 intra prediction able to improve the encoding / decoding efficiency of the image. The present invention also provides a filtering mechanism and method capable of improving image coding / decoding efficiency. Technical Solution In one aspect, an image decoding method is provided. e) image decoding method includes: generating a block of the 3/104 prediction performed intra prediction in a current block; generate a final prediction block by performing filtration on a target filtration pixel in the prediction block based on an intra-prediction mode of the current block; and generating a reconstructed block based on the final prediction block and a reconstructed residual block corresponding to the current block, characterized by the fact that the target filtration pixel is a prediction pixel included in a target filtration region in the prediction block and a type of filter applied to the target filtration pixel and the target filtration region are determined based on the current block's intra 10 prediction mode. In the case where the current block intra prediction mode is a DC mode, the target filtration region may include a left vertical prediction pixel line which is a vertical pixel line positioned in the leftmost part of the block. prediction is a 15 pixel upper horizontal prediction line which is a horizontal pixel line positioned in the upper pan on the prediction block. In the generation of the final prediction block, filtration can be performed in the case in which the current block is a luminance component block and may not be performed in the case in which the current block is a chrominaneous component block. 
The filter type may include information on a filter shape, a filter tap, and a plurality of filter coefficients, and in the generation of the final prediction block, the filtering may be performed based on a predetermined fixed filter type that is independent of the size of the current block.

In the case where the filtering target pixel is an upper left prediction pixel positioned in the upper left part of the prediction block, in the generation of the final prediction block, the filtering on the filtering target pixel may be performed by applying a 3-tap filter based on the filtering target pixel, an above reference pixel adjacent to an upper part of the filtering target pixel, and a left reference pixel adjacent to the left of the filtering target pixel. The above reference pixel and the left reference pixel may each be reconstructed reference pixels adjacent to the current block, and in the 3-tap filter, a filter coefficient allocated to the filter tap corresponding to the filtering target pixel may be 2/4, a filter coefficient allocated to the filter tap corresponding to the above reference pixel may be 1/4, and a filter coefficient allocated to the filter tap corresponding to the left reference pixel may be 1/4.

In the case where the filtering target pixel is a prediction pixel included in the left vertical prediction pixel line and the filtering target pixel is not the upper left prediction pixel positioned in the upper left part of the prediction block, in the generation of the final prediction block, the filtering on the filtering target pixel may be performed by applying a horizontal 2-tap filter based on the filtering target pixel and a left reference pixel adjacent to the left of the filtering target pixel. The left reference pixel may be a reconstructed reference pixel adjacent to the current block, and in the horizontal 2-tap filter, a filter coefficient allocated to the filter tap corresponding to the filtering target pixel may be 3/4 and a filter coefficient allocated to the filter tap corresponding to the left reference pixel may be 1/4.

In the case where the filtering target pixel is a prediction pixel included in the upper horizontal prediction pixel line and the filtering target pixel is not the upper left prediction pixel positioned in the upper left part of the prediction block, in the generation of the final prediction block, the filtering on the filtering target pixel may be performed by applying a vertical 2-tap filter based on the filtering target pixel and an above reference pixel adjacent to an upper part of the filtering target pixel. The above reference pixel may be a reconstructed reference pixel adjacent to the current block, and in the vertical 2-tap filter, a filter coefficient allocated to the filter tap corresponding to the filtering target pixel may be 3/4 and a filter coefficient allocated to the filter tap corresponding to the above reference pixel may be 1/4.
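To make the three filter cases above concrete, here is a minimal Python sketch that applies them to a DC-mode prediction block. The names `pred`, `ref_above`, and `ref_left` and the +2 integer rounding are illustrative assumptions; only the coefficients (2/4, 1/4, 1/4 and 3/4, 1/4) come from the text above.

```python
def filter_dc_prediction_block(pred, ref_above, ref_left):
    """Sketch of the DC-mode boundary filtering described above.
    pred[y][x]: prediction block; ref_above[x] and ref_left[y]: reconstructed
    reference pixels adjacent to the top and left of the current block."""
    n = len(pred)
    out = [row[:] for row in pred]
    # upper left corner pixel: 3-tap filter (2/4 target, 1/4 above, 1/4 left)
    out[0][0] = (2 * pred[0][0] + ref_above[0] + ref_left[0] + 2) // 4
    for x in range(1, n):
        # upper horizontal line: vertical 2-tap filter (3/4 target, 1/4 above)
        out[0][x] = (3 * pred[0][x] + ref_above[x] + 2) // 4
    for y in range(1, n):
        # left vertical line: horizontal 2-tap filter (3/4 target, 1/4 left)
        out[y][0] = (3 * pred[y][0] + ref_left[y] + 2) // 4
    return out
```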
In another aspect, an image decoding method is provided. The image decoding method includes: generating a prediction block by performing prediction on a prediction target pixel in a current block based on an intra prediction mode of the current block; and generating a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block, characterized in that, in the generation of the prediction block, the prediction on the prediction target pixel is performed based on a first offset in the case where the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on a left vertical pixel line, and the prediction on the prediction target pixel is performed based on a second offset in the case where the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on an upper horizontal pixel line, the left vertical pixel line being the vertical pixel line positioned in the leftmost part of the current block and the upper horizontal pixel line being the horizontal pixel line positioned in the uppermost part of the current block.

In the generation of the prediction block, a prediction value of the prediction target pixel may be derived by adding a value of the first offset to a pixel value of a first reference pixel present on the same vertical line as the vertical line on which the prediction target pixel is present, among the reconstructed reference pixels adjacent to an upper part of the current block, in the case where the intra prediction mode of the current block is the vertical mode and the prediction target pixel is the pixel on the left vertical pixel line, characterized in that the value of the first offset is determined based on a difference value between a pixel value of a second reference pixel adjacent to the left of the prediction target pixel and a pixel value of a third reference pixel adjacent to the left of the first reference pixel. In the generation of the prediction block, it may be determined that the pixel value of the first reference pixel is the prediction value of the prediction target pixel in the case where the current block is a chrominance component block.

In the generation of the prediction block, a prediction value of the prediction target pixel may be derived by adding a value of the second offset to a pixel value of a first reference pixel present on the same horizontal line as the horizontal line on which the prediction target pixel is present, among the reconstructed reference pixels adjacent to the left of the current block, in the case where the intra prediction mode of the current block is the horizontal mode and the prediction target pixel is the pixel on the upper horizontal pixel line, characterized in that the value of the second offset is determined based on a difference value between a pixel value of a second reference pixel adjacent to an upper part of the prediction target pixel and a pixel value of a third reference pixel adjacent to an upper part of the first reference pixel. In the generation of the prediction block, it may be determined that the pixel value of the first reference pixel is the prediction value of the prediction target pixel in the case where the current block is a chrominance component block.
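The offset computation just described can be sketched as follows in Python. Here the "third reference pixel" of both cases is the reconstructed pixel above-left of the current block, passed explicitly as `ref_corner`; the names are illustrative, and the chrominance case (which keeps the unmodified reference pixel value) is omitted for brevity.

```python
def predict_with_boundary_offset(mode, n, ref_above, ref_left, ref_corner):
    """Sketch: vertical/horizontal intra prediction in which the boundary
    line of the block is adjusted by an offset, as described above.
    ref_corner is the reconstructed pixel above-left of the current block."""
    pred = [[0] * n for _ in range(n)]
    if mode == "vertical":
        for y in range(n):
            for x in range(n):
                pred[y][x] = ref_above[x]
            # left vertical pixel line: add the first offset
            # (a real codec would clip the result to the sample range)
            pred[y][0] = ref_above[0] + (ref_left[y] - ref_corner)
    elif mode == "horizontal":
        for y in range(n):
            for x in range(n):
                pred[y][x] = ref_left[y]
        for x in range(n):
            # upper horizontal pixel line: add the second offset
            pred[0][x] = ref_left[0] + (ref_above[x] - ref_corner)
    return pred
```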
In yet another aspect, an image decoding apparatus is provided. The image decoding apparatus includes: a prediction block generation unit generating a prediction block by performing intra prediction on a current block; a filter unit generating a final prediction block by performing filtering on a filtering target pixel in the prediction block based on an intra prediction mode of the current block; and a reconstructed block generation unit generating a reconstructed block based on the final prediction block and a reconstructed residual block corresponding to the current block, characterized in that the filtering target pixel is a prediction pixel included in a filtering target region in the prediction block, and a filter type applied to the filtering target pixel and the filtering target region are determined based on the intra prediction mode of the current block.

In the case where the intra prediction mode of the current block is a DC mode, the filtering target region may include a left vertical prediction pixel line, which is the vertical pixel line positioned in the leftmost part of the prediction block, and an upper horizontal prediction pixel line, which is the horizontal pixel line positioned in the uppermost part of the prediction block.

In the case where the filtering target pixel is an upper left prediction pixel positioned in the upper left part of the prediction block, the filter unit may perform the filtering on the filtering target pixel by applying a 3-tap filter based on the filtering target pixel, an above reference pixel adjacent to an upper part of the filtering target pixel, and a left reference pixel adjacent to the left of the filtering target pixel. The above reference pixel and the left reference pixel may each be reconstructed reference pixels adjacent to the current block, and in the 3-tap filter, a filter coefficient allocated to the filter tap corresponding to the filtering target pixel may be 2/4, a filter coefficient allocated to the filter tap corresponding to the above reference pixel may be 1/4, and a filter coefficient allocated to the filter tap corresponding to the left reference pixel may be 1/4.

In the case where the filtering target pixel is a prediction pixel included in the left vertical prediction pixel line and the filtering target pixel is not the upper left prediction pixel positioned in the upper left part of the prediction block,
the filter unit may perform the filtering on the filtering target pixel by applying a horizontal 2-tap filter based on the filtering target pixel and a left reference pixel adjacent to the left of the filtering target pixel. The left reference pixel may be a reconstructed reference pixel adjacent to the current block, and in the horizontal 2-tap filter, a filter coefficient allocated to the filter tap corresponding to the filtering target pixel may be 3/4 and a filter coefficient allocated to the filter tap corresponding to the left reference pixel may be 1/4.

In the case where the filtering target pixel is a prediction pixel included in the upper horizontal prediction pixel line and the filtering target pixel is not the upper left prediction pixel positioned in the upper left part of the prediction block, the filter unit may perform the filtering on the filtering target pixel by applying a vertical 2-tap filter based on the filtering target pixel and an above reference pixel adjacent to an upper part of the filtering target pixel. The above reference pixel may be a reconstructed reference pixel adjacent to the current block, and in the vertical 2-tap filter, a filter coefficient allocated to the filter tap corresponding to the filtering target pixel may be 3/4 and a filter coefficient allocated to the filter tap corresponding to the above reference pixel may be 1/4.

In yet another aspect, an image decoding apparatus is provided. The image decoding apparatus includes: a prediction block generation unit generating a prediction block by performing prediction on a prediction target pixel in a current block based on an intra prediction mode of the current block; and a reconstructed block generation unit generating a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block, characterized in that the prediction block generation unit performs the prediction on the prediction target pixel based on a first offset in the case where the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on a left vertical pixel line, and performs the prediction on the prediction target pixel based on a second offset in the case where the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on an upper horizontal pixel line, the left vertical pixel line being the vertical pixel line positioned in the leftmost part of the current block and the upper horizontal pixel line being the horizontal pixel line positioned in the uppermost part of the current block.
The prediction block generation unit may derive a prediction value of the prediction target pixel by adding a value of the first offset to a pixel value of a first reference pixel present on the same vertical line as the vertical line on which the prediction target pixel is present, among the reconstructed reference pixels adjacent to an upper part of the current block, in the case where the intra prediction mode of the current block is the vertical mode and the prediction target pixel is the pixel on the left vertical pixel line, characterized in that the value of the first offset is determined based on a difference value between a pixel value of a second reference pixel adjacent to the left of the prediction target pixel and a pixel value of a third reference pixel adjacent to the left of the first reference pixel.

The prediction block generation unit may derive a prediction value of the prediction target pixel by adding a value of the second offset to a pixel value of a first reference pixel present on the same horizontal line as the horizontal line on which the prediction target pixel is present, among the reconstructed reference pixels adjacent to the left of the current block, in the case where the intra prediction mode of the current block is the horizontal mode and the prediction target pixel is the pixel on the upper horizontal pixel line, characterized in that the value of the second offset is determined based on a difference value between a pixel value of a second reference pixel adjacent to an upper part of the prediction target pixel and a pixel value of a third reference pixel adjacent to an upper part of the first reference pixel.

Advantageous Effects

With the image encoding method according to the exemplary embodiment of the present invention, the image encoding/decoding efficiency can be improved. With the image decoding method according to the exemplary embodiment of the present invention, the image encoding/decoding efficiency can be improved. With the prediction block generation method according to the exemplary embodiment of the present invention, the image encoding/decoding efficiency can be improved. With the intra prediction method according to the exemplary embodiment of the present invention, the image encoding/decoding efficiency can be improved. With the method of performing filtering according to the exemplary embodiment of the present invention, the image encoding/decoding efficiency can be improved.

Description of Drawings

Figure 1 is a block diagram showing a configuration of an image encoding apparatus according to an exemplary embodiment of the present invention.

Figure 2 is a block diagram showing a configuration of an image decoding apparatus according to an exemplary embodiment of the present invention.

Figure 3 is a conceptual diagram showing an example in which a single unit is divided into a plurality of subunits.

Figures 4A and 4B are diagrams describing an example of an intra prediction process.

Figure 5 is a diagram showing an example of an intra prediction method in a planar mode.

Figure 6 is a flowchart showing schematically an example of an image encoding method according to the exemplary embodiment of the present invention.

Figure 7 is a diagram showing schematically an example of a process for generating a residual block.
Figure 8 is a flowchart showing schematically an example of an image decoding method according to the exemplary embodiment of the present invention.

Figure 9 is a diagram showing schematically an example of a reconstructed block generation process.

Figure 10 is a flowchart showing schematically an example of a method of performing filtering according to the exemplary embodiment of the present invention.

Figure 11 is a diagram showing schematically an example of a method of determining whether or not filtering is performed based on the encoding parameters of neighboring blocks adjacent to a current block.

Figure 12 is a diagram showing schematically an example of a method of determining whether or not filtering is performed based on information on whether or not neighboring blocks adjacent to the current block are present (and/or whether or not the neighboring blocks are available blocks).

Figure 13 is a diagram showing schematically an example of a method of determining a filtering performing region based on an intra prediction mode of the current block.

Figure 14 is a diagram showing schematically an example of a method of determining the filtering performing region based on a size and/or a depth of the current block.

Figure 15 is a diagram showing schematically an example of a method of determining the filtering performing region based on an encoding mode of neighboring blocks adjacent to the current block.

Figures 16A and 16B are diagrams showing an example of a method of determining a filter type according to the intra prediction mode of the current block.

Figure 17 is a diagram showing schematically the method of determining the filter type according to the example of Figures 16A and 16B.

Figure 18 is a diagram showing schematically an example of a filter type applied in the case where the prediction mode of the current block is a vertical mode and/or a horizontal mode.

Figure 19 is a diagram showing schematically another example of a filter type according to the exemplary embodiment of the present invention.

Figure 20 is a diagram describing the intra prediction mode and the filter type applied in Table 9.

Mode for the Invention

Hereinafter, the exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. In describing the exemplary embodiments of the present invention, well-known functions and constructions will not be described in detail, as they may unnecessarily obscure the understanding of the present invention.

It will be understood that when an element is simply referred to as being 'connected to' or 'coupled to' another element in this description, without being 'directly connected to' or 'directly coupled to' another element, it may be 'directly connected to' or 'directly coupled to' another element, or be connected to or coupled to another element with other elements intervening between them. Furthermore, in the present invention, 'comprising' a specific configuration will be understood to mean that additional configurations may also be included in the embodiments or in the scope of the technical idea of this invention.

The terms used in this specification, 'first', 'second', etc., can be used to describe various components, but the components should not be considered as being limited to these terms. The terms are only used to differentiate one component from other components.
For example, the 'first' component may be called the 'second' component, and the 'second' component may similarly be called the 'first' component, without departing from the scope of the present invention.

In addition, the constitutional parts shown in the embodiments of the present invention are shown independently so as to represent different characteristic functions. Thus, it does not mean that each constitutional part is constituted as a separate hardware or software constitutional unit. In other words, each constitutional part is listed as a respective constitutional part for convenience of explanation. Thus, at least two constitutional parts may be combined to form a single constitutional part, or one constitutional part may be divided into a plurality of constitutional parts to perform each function. The embodiment where the constitutional parts are combined and the embodiment where a constitutional part is divided are also included in the scope of the present invention, provided they do not depart from the essence of the present invention.

In addition, some of the constituents may not be indispensable constituents performing the essential functions of the present invention, but may be selective constituents only improving the performance thereof. The present invention may be implemented by including only the constitutional parts indispensable for the implementation of the essence of the present invention, excluding the constituents used only to improve performance. The structure including only the indispensable constituents, excluding the selective constituents used only to improve performance, is also included in the scope of the present invention.

Figure 1 is a block diagram showing a configuration of an image encoding apparatus according to an exemplary embodiment of the present invention. Referring to Figure 1, the image encoding apparatus 100 includes a motion estimator 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantizer 140, an entropy encoder 150, a dequantizer 160, an inverse transformer 170, an adder 175, a filter unit 180, and a reference picture buffer 190.

The image encoding apparatus 100 can encode input images in an intra mode or an inter mode and output bit streams. Intra prediction means intra-picture prediction, and inter prediction means inter-picture prediction. In the case of the intra mode, the switch 115 can be switched to intra, and in the case of the inter mode, the switch 115 can be switched to inter. The image encoding apparatus 100 can generate a prediction block for an input block of the input images and then encode a residual between the input block and the prediction block.

In the case of the intra mode, the intra predictor 120 can perform spatial prediction using pixel values of previously encoded blocks around a current block to generate the prediction block. In the case of the inter mode, the motion estimator 111 can search for a region optimally matched with the input block in a reference image stored in the reference picture buffer 190 during a motion prediction process to obtain a motion vector, and the motion compensator 112 can perform motion compensation using the motion vector to generate the prediction block.
Here, the motion vector can be a two-dimensional vector used for inter prediction and represents a displacement between the current encoding/decoding target image and the reference image.

The subtractor 125 can generate a residual block from the residual between the input block and the generated prediction block. The transformer 130 can perform a transform on the residual block to output transform coefficients. In addition, the quantizer 140 can quantize the input transform coefficients according to quantization parameters to output quantized coefficients.

The entropy encoder 150 can perform entropy encoding based on the values calculated in the quantizer 140 or on encoding parameter values, or the like, calculated during the encoding process, to output bit streams. When entropy encoding is applied, symbols are represented by allocating a small number of bits to symbols having a high probability of generation and a large number of bits to symbols having a low probability of generation, thereby making it possible to reduce the size of the bit streams for the encoding target symbols. Therefore, the compression performance of image encoding can be improved through entropy encoding. The entropy encoder 150 can use an encoding method such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), or the like, for the entropy encoding.

Since the image encoding apparatus according to the exemplary embodiment of Figure 1 performs inter prediction encoding, that is, inter-picture prediction encoding, the currently encoded image needs to be decoded and stored in order to be used as a reference image. Therefore, the quantized coefficients are dequantized in the dequantizer 160 and inversely transformed in the inverse transformer 170. The dequantized and inversely transformed coefficients are added to the prediction block through the adder 175, such that a reconstructed block is generated.

The reconstructed block passes through the filter unit 180, and the filter unit 180 can apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or to a reconstructed picture. The filter unit 180 may also be called an adaptive in-loop filter. The deblocking filter can remove block distortion generated at an inter-block boundary. The SAO can add an appropriate offset value to a pixel value in order to compensate for a coding error. The ALF can perform filtering based on a comparison value between the reconstructed image and the original image. The reconstructed block passing through the filter unit 180 can be stored in the reference picture buffer 190.
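The encoder data flow described above can be summarized with the following minimal Python sketch; the transform and quantization steps are reduced to placeholder callables, so this illustrates only the loop structure of Figure 1, not any normative process.

```python
def encode_block(input_block, prediction_block, transform, quantize,
                 dequantize, inverse_transform):
    """Sketch of the Figure-1 encoding loop: the residual is coded, then
    reconstructed exactly as a decoder would reconstruct it, so that the
    encoder and decoder share the same reference pictures."""
    residual = [[o - p for o, p in zip(orow, prow)]
                for orow, prow in zip(input_block, prediction_block)]
    coeffs = quantize(transform(residual))  # passed to the entropy encoder
    recon_residual = inverse_transform(dequantize(coeffs))
    reconstructed = [[p + r for p, r in zip(prow, rrow)]
                     for prow, rrow in zip(prediction_block, recon_residual)]
    # the reconstructed block would then pass through the in-loop filters
    # (deblocking, SAO, ALF) and be stored in the reference picture buffer
    return coeffs, reconstructed
```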
Figure 2 is a block diagram showing a configuration of an image decoding apparatus according to an exemplary embodiment of the present invention. Referring to Figure 2, an image decoding apparatus 200 includes an entropy decoder 210, a dequantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, an adder 255, a filter unit 260, and a reference picture buffer 270.

The image decoding apparatus 200 can receive the bit streams output from the encoder, perform decoding in the intra mode or the inter mode, and output the reconstructed image. In the case of the intra mode, a switch can be switched to intra, and in the case of the inter mode, the switch can be switched to inter. The image decoding apparatus 200 can obtain a residual block from the received bit streams, generate the prediction block, and then add the residual block to the prediction block to generate the reconstructed block.

The entropy decoder 210 can entropy-decode the input bit streams according to a probability distribution to generate symbols, including symbols in a quantized coefficient form. The entropy decoding method is similar to the entropy encoding method mentioned above. When the entropy decoding method is applied, symbols are represented by allocating a small number of bits to symbols having a high probability of generation and a large number of bits to symbols having a low probability of generation, thereby making it possible to reduce the size of the bit streams for each symbol. Therefore, the compression performance of image decoding can be improved through the entropy decoding method.

The quantized coefficients can be dequantized in the dequantizer 220 and inversely transformed in the inverse transformer 230. The quantized coefficients are dequantized and inversely transformed, such that the residual block can be generated. In the case of the intra mode, the intra predictor 240 can perform spatial prediction using pixel values of previously encoded blocks around a current block to generate the prediction block. In the case of the inter mode, the motion compensator 250 can perform motion compensation using the motion vector and the reference image stored in the reference picture buffer 270 to generate the prediction block.

The residual block and the prediction block can be added to each other through the adder 255, and the added block can pass through the filter unit 260. The filter unit 260 can apply at least one of the deblocking filter, the SAO, and the ALF to the reconstructed block or to the reconstructed picture. The filter unit 260 can output the reconstructed image. The reconstructed image can be stored in the reference picture buffer 270 to thereby be used for inter prediction.

Hereinafter, a unit means an image encoding and decoding unit. At the time of image encoding and decoding, the encoding or decoding unit means the divided unit when the image is divided and then encoded or decoded. Therefore, the unit can be called a coding unit (CU), a prediction unit (PU), a transform unit (TU), or the like. In addition, in the examples to be described below, the unit may also be called a block. A single unit can be subdivided into subunits having a smaller size.

Figure 3 is a conceptual diagram schematically showing an example in which a single unit is divided into a plurality of subunits. A single unit can be hierarchically divided using depth information based on a tree structure. The respective divided subunits can have depth information. Since the depth information indicates the number and/or the degree of unit divisions, it can include information on a size of the subunit.

Referring to 310 in Figure 3, an uppermost node can be called a root node and has a smallest depth value. Here, the uppermost node can have a depth of level 0 and indicates an initial, that is, undivided, unit. A lower node having a depth of level 1 can indicate a unit divided once from the initial unit, and a lower node having a depth of level 2 can indicate a unit divided twice from the initial unit. For example, in 320 of Figure 3, a unit a corresponding to a node a can be a unit divided once from the initial unit and have a depth of level 1.
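As a small illustration of the depth/size relationship just described, the following Python sketch assumes a quadtree-style division in which each division halves the unit's width and height; the 64-pixel initial size is only an example, not a value fixed by the text.

```python
def subunit_size(initial_size, depth):
    """Sketch: each level of division halves the unit size,
    so the depth directly encodes the subunit size."""
    return initial_size >> depth

# example: with a 64x64 initial unit, a depth-1 node is 32x32,
# a depth-2 node is 16x16, and a depth-3 leaf node is 8x8
sizes = [subunit_size(64, d) for d in range(4)]  # [64, 32, 16, 8]
```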
A leaf node having a depth of level 3 can indicate a unit divided three times from the initial unit. For example, in 320 of Figure 3, a unit d corresponding to a node d can be a unit divided three times from the initial unit and have a depth of level 3. The leaf node of level 3, which is the lowermost node, can have the deepest depth.

Hereinafter, in the examples to be described below, an encoding/decoding target block may also be called a current block in some cases. In addition, in the case where intra prediction is performed on the encoding/decoding target block, the encoding/decoding target block may also be called a prediction target block.

Meanwhile, a video signal can generally include three color signals representing three primary color components of light. The three color signals representing the three primary color components of light can be a red (R) signal, a green (G) signal, and a blue (B) signal. The R, G, and B signals can be converted into one luminance signal and two chroma signals in order to reduce the frequency band used for image processing. Here, one video signal can include one luminance signal and two chroma signals. The luminance signal, which is a component indicating the brightness of a screen, can correspond to Y, and the chroma signals, which are components indicating the color of the screen, can correspond to U and V, or Cb and Cr. Since the human visual system (HVS) is sensitive to the luminance signal and insensitive to the chroma signals, in the case where the R, G, and B signals are converted into the luminance signal and the chroma signals using these characteristics, the frequency band used to process an image can be reduced. In the examples to be described below, a block having the luminance component will be called a luminance block, and a block having the chroma component will be called a chroma block.

Figures 4A and 4B are diagrams describing an example of an intra prediction process. 410 and 420 of Figure 4A show examples of prediction directions of an intra prediction mode and the mode values allocated to each of the prediction directions. In addition, 430 of Figure 4B shows the positions of the reference pixels used for intra prediction of an encoding/decoding target block. A pixel can have the same meaning as that of a sample. In the examples to be described below, the pixel may also be called a sample in some cases.

As described in the examples of Figures 1 and 2, the encoder and decoder can perform intra prediction based on pixel information in the current image to generate the prediction block. That is, at the time of performing the intra prediction, the encoder and decoder can perform directional and/or non-directional prediction based on at least one reconstructed reference pixel. Here, the prediction block can mean a block generated as a result of performing the intra prediction. The prediction block may correspond to at least one of a coding unit (CU), a prediction unit (PU), and a transform unit (TU). In addition, the prediction block can be a square block having a size of 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, or the like, or be a rectangular block having a size of 2x8, 4x8, 2x16, 4x16, 8x16, or the like. Meanwhile, the intra prediction can be performed according to an intra prediction mode of the current block.
The number of intra prediction modes that the current block can have can be a fixed predetermined value or a value changed according to the size of the prediction block. For example, the number of intra prediction modes that the current block can have may be 3, 5, 9, 17, 34, 36, or the like.

410 of Figure 4A shows an example of the prediction directions of the intra prediction modes and the mode values allocated to each of the prediction directions. In 410 of Figure 4A, the numbers allocated to each of the prediction modes can indicate the mode values.

Referring to 410 of Figure 4A, for example, in the case of a vertical mode having a mode value of 0, prediction can be performed in a vertical direction based on the pixel values of the reference pixels, and in the case of a horizontal mode having a mode value of 1, prediction can be performed in a horizontal direction based on the pixel values of the reference pixels. Also in the case of a directional mode other than the modes mentioned above, the encoder and decoder can perform intra prediction using reference pixels according to the corresponding angles.

In 410 of Figure 4A, an intra prediction mode having a mode value of 2 can be called a DC mode, and an intra prediction mode having a mode value of 34 can be called a planar mode. The DC mode and the planar mode can correspond to non-directional modes. For example, in the case of the DC mode, the prediction block can be generated by averaging the pixel values of a plurality of reference pixels. An example of a method of generating each prediction pixel of the prediction block in the planar mode will be described below with reference to Figure 5.

The number of intra prediction modes and/or the mode values allocated to each of the intra prediction modes are not limited to the example mentioned above, but may also be changed according to an implementation and/or as needed. For example, the prediction directions of the intra prediction modes and the mode values allocated to each of the prediction modes can be defined to be different from 410 of Figure 4A, as shown in 420 of Figure 4A. Hereinafter, in the examples to be described below, unless particularly described, it is assumed for convenience of explanation that intra prediction is performed in an intra prediction mode as shown in 410 of Figure 4A.

In addition, hereinafter, an intra prediction mode positioned to the right of the vertical mode is called a vertical-right mode, and an intra prediction mode positioned below the horizontal mode is called a horizontal-below mode. For example, in 410 of Figure 4A, intra prediction modes having mode values of 5, 6, 12, 13, 22, 23, 24, and 25 may correspond to the vertical-right mode 413, and intra prediction modes having mode values of 8, 9, 16, 17, 30, 31, 32, and 33 may correspond to the horizontal-below mode 416.
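Of the modes above, the DC mode is the simplest to illustrate. A minimal Python sketch follows; using exactly one row of above and one column of left reference pixels is an assumption made here for illustration.

```python
def dc_prediction_block(n, ref_above, ref_left):
    """Sketch: the DC mode fills the prediction block with the average
    of the reconstructed reference pixels above and to the left."""
    total = sum(ref_above[:n]) + sum(ref_left[:n])
    dc = (total + n) // (2 * n)  # integer average with rounding
    return [[dc] * n for _ in range(n)]
```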
Referring to 430 of Figure 4B, as the reconstructed reference pixels used for the intra prediction of the current block, there may be, for example, below-left reference pixels 431, left reference pixels 433, an above-left corner reference pixel 435, above reference pixels 437, above-right reference pixels 439, and the like. Here, the left reference pixels 433 can mean reconstructed reference pixels adjacent to the left of the outside of the current block, the above reference pixels 437 can mean reconstructed reference pixels adjacent to an upper part of the outside of the current block, and the above-left corner reference pixel 435 can mean a reconstructed reference pixel adjacent to the upper left corner of the outside of the current block. In addition, the below-left reference pixels 431 can mean reference pixels positioned below the left pixel line configured from the left reference pixels 433, among the pixels positioned on the same line as the left pixel line, and the above-right reference pixels 439 can mean reference pixels positioned to the right of the upper pixel line configured from the above reference pixels 437, among the pixels positioned on the same line as the upper pixel line. In the present specification, the names of the reference pixels described above can be similarly applied to the other examples to be described below.

The reference pixels used for the intra prediction of the current block can be changed according to the intra prediction mode of the current block. For example, in the case where the intra prediction mode of the current block is the vertical mode (the intra prediction mode having the mode value of 0 in 410 of Figure 4A), the above reference pixels 437 can be used for the intra prediction, and in the case where the intra prediction mode of the current block is the horizontal mode (the intra prediction mode having the mode value of 1 in 410 of Figure 4A), the left reference pixels 433 can be used for the intra prediction. In addition, in the case where an intra prediction mode having a mode value of 13 is used, the above-right reference pixels 439 can be used for the intra prediction, and in the case where an intra prediction mode having a mode value of 7 is used, the below-left reference pixels 431 can be used for the intra prediction.

In the case where the positions of the reference pixels determined based on the prediction directions of the intra prediction mode and the prediction target pixels are integer positions, the encoder and decoder can determine that the reference pixel values of the corresponding positions are the prediction values of the prediction target pixels. In the case where the positions of the reference pixels determined based on the prediction directions of the intra prediction mode and the prediction target pixels are not integer positions, the encoder and decoder can generate interpolated reference pixels based on the reference pixels at integer positions and determine that the pixel values of the interpolated reference pixels are the prediction values of the prediction target pixels.

According to the examples described above, the encoder and decoder can perform the intra prediction on the encoding/decoding target block based on the reconstructed or generated reference pixels. However, as described above, the reference pixels used for the intra prediction can be changed according to the intra prediction mode of the current block, and discontinuity between the generated prediction block and neighboring blocks can arise. For example, in the case of directional intra prediction, the greater the distance from the reference pixel, the greater the prediction errors of the prediction pixels in the prediction block.
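For the fractional-position case above, one common choice (assumed here for illustration; the text does not fix a specific interpolation filter) is linear interpolation between the two nearest integer-position reference pixels, with the fraction expressed in 1/32 units:

```python
def interpolated_reference_pixel(ref, pos_32nds):
    """Sketch: derive a reference pixel at a fractional position given in
    1/32 units, by linear interpolation between its integer neighbors."""
    idx, frac = pos_32nds >> 5, pos_32nds & 31
    if frac == 0:
        return ref[idx]  # integer position: use the reference pixel directly
    return ((32 - frac) * ref[idx] + frac * ref[idx + 1] + 16) >> 5
```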
Degradation can be generated due to such prediction errors, and there may be a limitation in improving the coding efficiency. Therefore, in order to solve the problem mentioned above, an encoding/decoding method of performing filtering on the prediction block generated by the intra prediction can be provided. For example, filtering can be adaptively applied to a region having a large prediction error in the prediction block generated based on the reference pixels. In this case, the prediction error is reduced and the discontinuity between the blocks is minimized, thereby making it possible to improve the encoding/decoding efficiency.

Figure 5 is a diagram showing schematically an example of an intra prediction method in a planar mode. 510 of Figure 5 shows one example of an intra prediction method in the planar mode, and 530 of Figure 5 shows another example of an intra prediction method in the planar mode. 515 and 535 of Figure 5 show an encoding/decoding target block (hereinafter, the encoding/decoding target block has the same meaning as the current block), and each of the blocks 515 and 535 has a size of nS x nS.

In Figure 5, the positions of pixels in the current block can be represented by predetermined coordinates. For convenience, the uppermost left coordinate in the current block is (0, 0). In this case, on the coordinate axes, the y value increases in the downward direction and the x value increases in the rightward direction. In the examples to be described below, pixel coordinates can be represented by the same coordinate axes as the coordinate axes used in Figure 5.

As an example, referring to 510 of Figure 5, the encoder and decoder can derive a pixel value of a prediction pixel for the pixel (nS-1, nS-1) positioned in the lowermost right part of the current block, that is, a lower-right prediction pixel 520. The encoder and decoder can derive the pixel values of the prediction pixels for the pixels on the vertical line positioned in the rightmost part of the current block, that is, the right vertical line prediction pixels, based on a reference pixel 523 positioned at the rightmost position (nS-1, -1) among the above reference pixels and the lower-right prediction pixel 520, and can derive the pixel values of the prediction pixels for the pixels on the horizontal line positioned in the lowermost part of the current block, that is, the lower horizontal line prediction pixels, based on a reference pixel 526 positioned at the lowermost position (-1, nS-1) among the left reference pixels and the lower-right prediction pixel 520. Here, the prediction values for the remaining pixels other than the pixels on the right vertical line and the pixels on the lower horizontal line among the pixels in the current block can be obtained by applying weights based on the above reference pixel, the left reference pixel, the right vertical line prediction pixel, and the lower horizontal line prediction pixel.

As another example, the encoder and decoder can also derive a prediction value for a prediction target pixel 540 in the current block 535 by a method shown in 530 of Figure 5. In 530 of Figure 5, the coordinate of the prediction target pixel 540 is (x, y).
Referring to 530 of Figure 5, the encoder and decoder can derive the prediction value of the prediction target pixel 540 by performing an average and/or a weighted average based on a reference pixel (-1, nS) 541 positioned at the uppermost part among the below-left reference pixels, a reference pixel (-1, y) 543 positioned on the same horizontal line as the prediction target pixel 540 among the left reference pixels, a reference pixel (x, -1) 545 positioned on the same vertical line as the prediction target pixel 540 among the above reference pixels, and a reference pixel (nS, -1) positioned at the leftmost part among the above-right reference pixels.
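A minimal sketch of this weighted-average form of planar prediction follows. The particular distance-based weights are an assumption chosen so that nearer reference pixels contribute more, since the text above allows either a plain average or a weighted average of the four reference pixels.

```python
def planar_prediction_pixel(x, y, n, ref_above, ref_left):
    """Sketch of the 530-of-Figure-5 planar prediction for pixel (x, y):
    a weighted average of four reference pixels. The reference lines are
    indexed so that ref_left[n] is pixel (-1, nS) and ref_above[n] is
    pixel (nS, -1)."""
    horizontal = (n - 1 - x) * ref_left[y] + (x + 1) * ref_above[n]
    vertical = (n - 1 - y) * ref_above[x] + (y + 1) * ref_left[n]
    return (horizontal + vertical + n) // (2 * n)
```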
Figure 6 is a flowchart showing schematically an example of an image encoding method according to the exemplary embodiment of the present invention. Referring to Figure 6, the encoder can perform intra prediction on an encoding target block to generate the prediction block (S610). Since a specific example of the prediction block generation method has been described with reference to Figures 4A and 4B, a description thereof will be omitted.

Again referring to Figure 6, the encoder can perform filtering on the prediction block based on the encoding target block and/or the encoding parameters of the neighboring blocks adjacent to the encoding target block (S620). Here, the encoding parameter can include information that can be inferred during an encoding or decoding process, as well as information that is encoded in the encoder and transmitted to the decoder, such as a syntax element, and means the information required when the image is encoded or decoded. The encoding parameter can include, for example, information on an intra/inter prediction mode, a motion vector, a reference image index, a coded block pattern (CBP), whether or not there is a residual signal, a quantization parameter, a block size, block division, and the like.

As an example, the encoder can perform the filtering on the prediction block based on information on the intra prediction mode of the encoding target block, whether the encoding target block is the luminance block or the chroma block, the size (and/or depth) of the encoding target block, the encoding parameters (for example, the encoding modes of the neighboring blocks) of the neighboring blocks adjacent to the encoding target block, whether or not the neighboring blocks are present (and/or whether or not the neighboring blocks are available blocks), and the like.

Although the case in which the encoder always performs the filtering is described in the filtering process described above, the encoder may also not perform the filtering on the prediction block. For example, the encoder can determine whether or not the filtering is performed based on the encoding target block and/or the encoding parameters of the neighboring blocks adjacent to the encoding target block, and may not perform the filtering on the prediction block in the case where it is determined that the filtering is not to be performed.

Meanwhile, the filtering process described above can be an independent process separate from the prediction block generation process. However, the filtering process can also be combined with the prediction block generation process so as to be performed as a single process. That is, the encoder can also generate the prediction block by applying a process corresponding to the filtering process based on the encoding target block and/or the encoding parameters of the neighboring blocks in the prediction block generation process. A specific example of the method of performing the filtering will be described below.

Again referring to Figure 6, the encoder can generate a residual block based on an original block corresponding to the position of the encoding target block and the prediction block (S630). Here, the prediction block can be the prediction block on which the filtering is performed or the prediction block on which the filtering is not performed.

Figure 7 is a diagram showing schematically an example of a process for generating a residual block. 710 of Figure 7 shows an example of a process for generating a residual block based on the original block and the prediction block on which the filtering is performed. In 710 of Figure 7, a block 713 indicates the original block, a block 716 indicates the prediction block on which the filtering is performed, and a block 719 indicates the residual block. Referring to 710 of Figure 7, the encoder and decoder can generate the residual block by subtracting the prediction block on which the filtering is performed from the original block. 720 of Figure 7 shows an example of a process for generating a residual block based on the original block and the prediction block on which the filtering is not performed. In 720 of Figure 7, a block 723 indicates the original block, a block 726 indicates the prediction block on which the filtering is not performed, and a block 729 indicates the residual block. Referring to 720 of Figure 7, the encoder and decoder can generate the residual block by subtracting the prediction block on which the filtering is not performed from the original block. The generated residual block can be subjected to processes such as a transform process, a quantization process, an entropy encoding process, and the like, and then be transmitted to the decoder.

Figure 8 is a flowchart showing schematically an example of an image decoding method according to the exemplary embodiment of the present invention. Referring to Figure 8, the decoder can perform intra prediction on a decoding target block to generate the prediction block (S810). Since a specific example of the prediction block generation method has been described with reference to Figures 4A and 4B, a description thereof will be omitted. Again referring to Figure 8, the decoder can perform filtering on the prediction block based on the decoding target block and/or the encoding parameters of the neighboring blocks adjacent to the decoding target block (S820).
Although the case in which the decoder always performs filtration 20 is described in the process of performing the filtration described above, the decoder may also not perform one. filtration in the prediction block. For example, the decoder can determine whether or not the filtration is performed based on the target decoding block and / or the encoding parameters of the neighboring blocks adjacent to the decoding target block 25 and may not perform a filtration on the prediction block in the case where it is determined that filtration is not carried out. 10/36 However, the filtration process described above can be an independent process separate from the prediction block generation process. However, the filtration process can also be combined with the generation process of the prediction block to be carried out as a single process. That is, the decoder can also generate the prediction block by applying a process corresponding to the process of performing filtration based on the decoding target block and / or the encoding parameters of the neighboring blocks in the prediction block generation process. In this case, the decoder may not perform a separate filtration process on the prediction block. The method of using filtration in the decoder can be the same as the method of performing filtration in the encoder. A specific example of the method of performing the filtration will be described below. Again, referring to the Figure. 8, the decoder can generate a reconstructed block based on a residual block. Reconstructed corresponding to the position of the target decoding block and the prediction block (3830). Here, the prediction block can be the prediction block 30 on which filtration is performed or the prediction block on which filtration is not performed. Figure 9 is a diagram showing a schematic diagram showing an example of the residual block generation process. 9.10 da. Figure 9 shows an example of a process for generating a rebuilt block based on the reconstructed residual block and the prediction block on which Filtration is performed. In 010 of Figure 9, a block 913 indicates the reconstructed residual block, a block 916 indicates the prediction block on which filtration is performed, and a block 919 indicates the reconstructed block. Referring to 91.0 of Figure 9, the encoder and the decoder can generate the reconstructed block by adding the reconstructed residual block and the prediction block in which the μlation is performed to each other. 920 of Figure 9 shows an example of a process for generating a reconstructed block based on the reconstructed residual block and the prediction block on which filtration is not performed. In 920 of Figure 9. a block 923 indicates the reconstructed residual block, a block 10 926 indicates the prediction block in which filtration is not performed, and a block 929 indicates the reconstructed block. Referring to 920 of Figure 9, the encoder and dcoder can generate the residual block by adding the reconstructed residual block and the prediction block in which filtration is not performed on anything else. Figure 10 is an íluxograrna. which shows, schematically, an example of a method of performing filtration according to the exemplary modality of the present invention, Referring to Figure 10, the encoder and dcoder can determine whether or not filtration is performed on the prediction block (and / or the 20-pixel prediction) (81010). 
As described above, the encoder and decoder can perform intra prediction on the encoding/decoding target block based on previously reconstructed reference pixels. Here, the reference pixels used for the intra prediction and/or the prediction pixel values in the prediction block generated by the intra prediction can be changed according to the intra prediction mode of the current block. Therefore, in this case, the encoder and decoder perform filtering on prediction pixels having a small correlation with the reference pixels used for the intra prediction, thereby making it possible to reduce a prediction error. On the other hand, it may be more efficient not to perform filtering on pixels having a large correlation with the reference pixels used for the intra prediction.

Therefore, the encoder and decoder can determine whether or not filtering is performed on the prediction block (and/or on the prediction pixel) based on at least one of: information on the intra prediction mode of the encoding/decoding target block, whether the encoding/decoding target block is a luminance block or a chroma block, the size (and/or depth) of the encoding/decoding target block, the encoding parameters (for example, the sizes of neighboring blocks, the encoding modes of neighboring blocks, and the like) of neighboring blocks adjacent to the encoding/decoding target block, and whether or not neighboring blocks are present (and/or whether or not neighboring blocks are available blocks). Whether or not filtering is performed can be determined in the encoding/decoding process or be determined in advance according to each condition. Hereinafter, specific examples of a method of determining whether or not filtering is performed will be described.

As an example, the encoder and decoder can determine whether or not filtering is performed on the prediction block based on the intra prediction mode of the encoding/decoding target block. As described above, the reference pixels and prediction directions used for the intra prediction can be changed according to the intra prediction mode of the encoding/decoding target block. Therefore, it can be efficient to determine whether or not filtering is performed based on the intra prediction mode of the encoding/decoding target block.

Table 1 below shows an example of a method of determining whether or not filtering is performed according to the intra prediction mode. In Table 1, it is assumed that the prediction directions of the intra prediction modes and the mode values allocated to each of the prediction modes are defined as shown in 410 of Figure 4A.

Table 1

Intra prediction mode | 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
Whether or not filtering is performed | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1

Here, 0 among the values allocated to the intra prediction modes may indicate that filtering is not performed, and 1 among them may indicate that filtering is performed.

As an example, in the case where the prediction mode of the current block is a DC mode (for example, a prediction mode having a mode value of 2), since the prediction block is generated by averaging the pixel values of a plurality of reference pixels, the correlation between the prediction pixels and the reference pixels becomes small. Therefore, in this case, the encoder and decoder can perform filtering on the prediction pixels in the prediction block.
As another example, in the case where the prediction mode of the current block is a planar mode (for example, a prediction mode having a mode value of 34), as described above with reference to Figure 5, the prediction pixels of the right vertical line and the prediction pixels of the lower horizontal line are derived, and weights are applied based on the derived prediction pixels and the reference pixels, thereby making it possible to derive the prediction values for each pixel in the current block. Therefore, in this case, since the correlation between the prediction pixels and the reference pixels becomes small, the encoder and decoder can perform filtering on the prediction pixels in the prediction block.

As yet another example, in the case where the intra prediction mode of the current block is a vertical right mode (for example, a prediction mode having a mode value of 5, 6, 12, 13, 22, 23, 24, or 25), since the encoder and decoder perform the intra prediction on the current block using the reference pixels above and/or the reference pixels above on the right, the correlation between the prediction pixels positioned in a left region in the prediction block and the reference pixels on the left can become small. Therefore, in this case, filtering can be performed on the pixels positioned in the left region in the prediction block.

As yet another example, in the case where the intra prediction mode of the current block is a horizontal below mode (for example, a prediction mode having a mode value of 8, 9, 16, 17, 30, 31, 32, or 33), since the encoder and decoder perform the intra prediction on the current block using the reference pixels on the left and/or the reference pixels below on the left, the correlation between the prediction pixels positioned in an upper region in the prediction block and the reference pixels above can become small. Therefore, in this case, filtering can be performed on the pixels positioned in the upper region in the prediction block.

In addition, the encoder and decoder can also perform filtering in a vertical mode (for example, a prediction mode having a mode value of 0) and in a horizontal mode (for example, a prediction mode having a mode value of 1), differently from the example of Table 1. In the case where the intra prediction mode of the current block is the vertical mode, since the encoder and decoder perform the intra prediction on the current block using the reference pixels above, the correlation between the prediction pixels positioned in the left region in the prediction block and the reference pixels on the left can become small. Therefore, in this case, filtering can be performed on the pixels positioned in the left region in the prediction block. Likewise, in the case where the intra prediction mode of the current block is the horizontal mode, since the encoder and decoder perform the intra prediction on the current block using the reference pixels on the left, the correlation between the prediction pixels positioned in the upper region in the prediction block and the reference pixels above can become small. Therefore, in this case, filtering can be performed on the pixels positioned in the upper region in the prediction block.

On the other hand, in the case where the intra prediction mode of the current block corresponds to one of the prediction modes (for example, prediction modes having mode values of 3, 4, 7, 10, 11, 14, 15,
18, 19, 20, 21, 26, 27, 28, and 29) other than the prediction modes mentioned above, the encoder and decoder can use at least one of the reference pixels above and the reference pixels above on the right for the intra prediction, and use at least one of the reference pixels on the left and the reference pixels below on the left for the intra prediction. Therefore, in this case, since all the prediction pixels positioned in the left region and in the upper region in the prediction block can maintain a correlation with the reference pixels, the encoder and decoder may not perform filtering on the prediction block.

For each of the cases in which filtering is performed, the regions in which filtering is performed in the current block and/or in the prediction block, and/or the pixel positions at which filtering is performed in the current block, will be described below.

As another example, the encoder and decoder can determine whether or not filtering is performed on the prediction block based on the size and/or depth of the current block (and/or of the prediction target block). Here, the current block can correspond to at least one of the CU, the PU, and the TU. Table 2 below shows an example of a method of determining whether or not filtering is performed according to a block size, and Table 3 below shows an example of a method of determining whether or not filtering is performed according to a depth value of a current block. In the examples of Tables 2 and 3, the current block can correspond to the TU, and a size of the TU can be, for example, 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, or the like. However, the present invention is not limited thereto. That is, the current block can correspond to the CU, the PU, or the like, instead of the TU.

Table 2

Block size | 2x2 4x4 8x8 16x16 32x32 64x64
Whether or not filtering is performed | 0 1 1 1 1 0

Table 3

Depth value | 0 1 2 3 4 5
Whether or not filtering is performed | 1 1 1 0 0 0

Here, 0 among the allocated values may indicate that filtering is not performed, and 1 among them may indicate that filtering is performed.

The encoder and decoder can also determine whether or not filtering is performed on the current block and/or on the prediction block considering both the intra prediction mode of the current block and the size of the current block. That is, the encoder and decoder can determine whether or not filtering is performed based on the size of the current block with respect to each of the intra prediction modes. In this case, whether or not filtering is performed can be determined differently for each intra prediction mode according to the size of the current block. Table 4 below shows an example of a method of determining whether or not filtering is performed according to the intra prediction mode of the current block and the size of the current block.

Table 4

Block size | Intra prediction mode 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
2x2   | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4x4   | 0 0 1 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1
8x8   | 0 0 1 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1
16x16 | 0 0 1 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1
32x32 | 0 0 0 0 0 1 1 0 1 1 0 0 1 1 0 0 1 1
64x64 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Block size | Intra prediction mode 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
2x2   | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4x4   | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1
8x8   | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1
16x16 | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1
32x32 | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0
64x64 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Here, 0 among the values allocated to each of the intra prediction modes may indicate that filtering is not performed, and 1 among them may indicate that filtering is performed.

As yet another example, the encoder and decoder can determine whether or not filtering is performed on the prediction block based on information on whether the current block corresponds to the luminance block or to the chroma block, that is, information on a color component of the current block. For example, the encoder and decoder can perform filtering on the prediction block only in the case where the current block corresponds to the luminance block, and may not perform filtering on the prediction block in the case where the current block corresponds to the chroma block.

As yet another example, the encoder and decoder can also determine whether or not filtering is performed based on information on the encoding parameters of neighboring blocks adjacent to the current block, whether or not constrained intra prediction (CIP) is applied to the current block, whether or not neighboring blocks are present (and/or whether or not neighboring blocks are available blocks), and the like. Specific examples of each of these methods of determining whether or not filtering is performed will be described below.

Referring again to Figure 10, in the case where it is determined that filtering is performed on the current block and/or on the prediction block, the encoder and decoder can determine a region in which filtering is performed in the current block and/or in the prediction block (S1020). Here, the region in which filtering is performed can correspond to at least one sample in the current block and/or in the prediction block.

As described above, the encoder and decoder perform filtering on prediction pixels having a small correlation with the reference pixels used for the intra prediction, thereby making it possible to reduce a prediction error. That is, the encoder and decoder can determine that a region having a relatively large prediction error in the current block and/or in the prediction block is the filtering target region. In this case, the encoder and decoder can determine the filtering target region based on at least one of the intra prediction mode of the current block, the size (and/or depth) of the current block, and the encoding modes of the neighboring blocks adjacent to the current block. Here, the encoding mode of the neighboring blocks can indicate whether the neighboring blocks are encoded/decoded in the inter mode or are encoded/decoded in the intra mode. Specific examples of a method of determining the filtering target region will be described below.
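Collecting the individual tests of step S1010 described above — the mode set of Table 1, the size rule of Table 2, and the color component — gives one small lookup. The sketch below, in Python, combines them with a simple logical AND; this particular combination, and all names in it, are illustrative assumptions rather than a normative procedure:

# Intra prediction modes for which Table 1 performs filtering: the DC mode (2),
# the planar mode (34), the vertical right modes (5, 6, 12, 13, 22-25), and
# the horizontal below modes (8, 9, 16, 17, 30-33).
FILTERED_MODES = {2, 5, 6, 8, 9, 12, 13, 16, 17, 22, 23, 24, 25, 30, 31, 32, 33, 34}

# Block sizes for which Table 2 performs filtering (2x2 and 64x64 are excluded).
FILTERED_SIZES = {4, 8, 16, 32}

def filtering_is_performed(intra_mode, block_size, is_luma):
    # S1010: filter only luminance blocks, only for the Table 1 modes,
    # and only for the Table 2 block sizes.
    return is_luma and intra_mode in FILTERED_MODES and block_size in FILTERED_SIZES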
Furthermore, the encoder and decoder can determine a type of filter applied to each of the prediction pixels in the filtering target region (S1030). Here, the type of filter can include information on a filter shape, a filter tap, a filter coefficient, and the like. The plurality of intra prediction modes can have different prediction directions, and the method of using the reconstructed reference pixels can be changed according to the positions of the filtering target pixels. Therefore, the encoder and decoder adaptively determine the type of filter, thereby making it possible to improve the filtering efficiency. For example, the encoder and decoder can determine the type of filter applied to the filtering target pixel based on the intra prediction mode of the current block, the size (and/or depth) of the current block, and/or the positions of the filtering target pixels. Examples of the filter shape can include a horizontal shape, a vertical shape, a diagonal shape, and the like, and examples of the filter tap can include a 2-tap filter, a 3-tap filter, a 4-tap filter, and the like.

In addition, the encoder and decoder can determine the filter coefficient based on the size of the prediction block, the positions of the filtering target pixels, and the like. That is, the encoder and decoder can change the filter coefficient applied to the filtering target pixels according to the size of the prediction block, the positions of the filtering target pixels, and the like. Therefore, the filtering strength for the filtering target pixels can be determined adaptively. As an example, in the case where the 2-tap filter is used, the filter coefficient can be [1:3], [1:7], [3:5], or the like. As another example, in the case where the 3-tap filter is used, the filter coefficient can be [1:2:1], [1:4:1], [1:6:1], or the like.

Meanwhile, the filter determined by the type of filter may also not be a filter defined by the filter shape, the filter tap, the filter coefficient, or the like. For example, the encoder and decoder can also perform the filtering process by adding an offset value determined by a predetermined process to the pixel values of the reference pixels. In this case, the filtering process can also be combined with the prediction block generation process so as to be performed as a single process. That is, the filtered prediction pixel values of each of the pixels in the current block can be derived only by the filtering process mentioned above. In this case, the filtering process mentioned above can correspond to a single process including both the prediction pixel generation process and the filtering process for the generated prediction pixels. Specific examples of a method of determining the type of filter will be described below.

After the filtering target region and the filter type have been determined, the encoder and decoder can perform filtering on each of the prediction pixels in the prediction block based on the filtering target region and the filter type (S1040). In the case where it is determined that filtering is not performed on the prediction block, the encoder and decoder do not perform filtering on the prediction block (and/or on each of the prediction pixels in the prediction block) (S1050).
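Steps S1010 through S1050 then compose into a single per-block routine. The sketch below shows only the skeleton of that flow, under simplifying assumptions: the region is reduced to the left column and/or top row according to the mode families above, and a single 2-tap [1/4, 3/4] filter stands in for the mode-specific filter types detailed in the following figures:

def determine_region(intra_mode, n):
    # S1020 (simplified): DC/planar modes filter the top row and left column,
    # vertical right modes the left column, horizontal below modes the top row.
    if intra_mode in (2, 34):
        return [(x, 0) for x in range(n)] + [(0, y) for y in range(1, n)]
    if intra_mode in (5, 6, 12, 13, 22, 23, 24, 25):
        return [(0, y) for y in range(n)]
    if intra_mode in (8, 9, 16, 17, 30, 31, 32, 33):
        return [(x, 0) for x in range(n)]
    return []

def filter_prediction_block(pred, ref_top, ref_left, intra_mode):
    # pred is an n x n list of rows; ref_top[x] holds reference pixel (x, -1)
    # and ref_left[y] holds reference pixel (-1, y).
    n = len(pred)
    out = [row[:] for row in pred]
    for x, y in determine_region(intra_mode, n):          # S1020
        ref = ref_top[x] if y == 0 else ref_left[y]       # S1030 (placeholder)
        out[y][x] = (ref + 3 * pred[y][x] + 2) >> 2       # S1040: 2-tap [1/4, 3/4]
    return out                                            # S1050: empty region = no-op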
Figure 11 is a diagram showing, schematically, an example of a method of determining whether or not filtering is performed based on the encoding parameters of neighboring blocks adjacent to a current block. In Figure 11, the encoding parameters of the neighboring blocks can include an intra prediction mode, an inter prediction mode, an encoding mode, or the like. Here, the encoding mode of the neighboring blocks can indicate whether the neighboring blocks are encoded/decoded in the inter mode or are encoded/decoded in the intra mode.

1110 of Figure 11 shows an example of a method of determining whether or not filtering is performed based on the intra prediction mode of the neighboring block adjacent to the current block. 1113 of Figure 11 indicates the current block C, and 1116 of Figure 11 indicates a left neighboring block A adjacent to the left of the current block. In 1110 of Figure 11, it is assumed that the intra prediction mode of the current block corresponds to the vertical prediction mode. In this case, since the encoder and decoder perform the intra prediction on the current block using the reference pixels above and/or the reference pixels above on the right, filtering can be performed on the pixels positioned in a left region 1119 in the prediction block.

However, in the case where the prediction direction of the left neighboring block A 1116 adjacent to a filtering target region 1119 and the prediction direction of the current block C 1113 are different from each other, as shown in 1110 of Figure 11, it may be more efficient not to perform filtering on the filtering target region 1119. Therefore, in the case where the prediction direction of the neighboring block 1116 adjacent to the filtering target region 1119 and the prediction direction of the current block C 1113 are different from each other, the encoder and decoder may not perform filtering on the filtering target region 1119. On the contrary, in the case where the prediction direction of the neighboring block 1116 adjacent to the filtering target region 1119 and the prediction direction of the current block C 1113 are the same as or similar to each other (for example, in the case where the difference value between the prediction angles is a predetermined threshold value or less), the encoder and decoder perform filtering on the filtering target region 1119, thereby making it possible to reduce the prediction error.

1120 of Figure 11 shows an example of a method of determining whether or not filtering is performed based on the encoding mode of the neighboring block adjacent to the current block in the case where constrained intra prediction (CIP) is applied to the current block. 1123 of Figure 11 indicates the current block C, and 1126 of Figure 11 indicates a left neighboring block A adjacent to the left of the current block. In 1120 of Figure 11, it is also assumed that the intra prediction mode of the current block corresponds to the vertical prediction mode. In this case, since the encoder and decoder perform the intra prediction on the current block using the reference pixels above and/or the reference pixels above on the right, filtering can be performed on the pixels positioned in a left region 1129 in the prediction block. However, in the case where the CIP is applied to the current block C 1123, the encoder and decoder may not perform filtering on a filtering target region 1129, according to the encoding mode of the left neighboring block A 1126 adjacent to the filtering target region 1129.

In the case where the CIP is applied to the current block 1123, the encoder and decoder may not use the pixels of a neighboring block encoded in the inter mode as reference pixels when performing the intra prediction on the current block 1123. For example, in 1120 of Figure 11, in the case
where the left neighboring block A 1126 is encoded in the inter mode, the reference pixels in the left neighboring block 1126, that is, the left reference pixels, may not be used for the prediction of the current block 1123. In this case, the encoder and decoder can fill the positions of the left reference pixels with the pixel values of reference pixels belonging to blocks encoded in the intra mode, and then perform the intra prediction. That is, the encoder and decoder do not use pixels to which the inter mode has been applied for the intra prediction, thereby making it possible to enhance the resistance against errors.

Therefore, in the case where the CIP is applied to the current block 1123 and the encoding mode of the left neighboring block 1126 adjacent to the filtering target region 1129 is the inter mode, as shown in 1120 of Figure 11, the encoder and decoder may not perform filtering on the filtering target region 1129.
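The two neighbour-based checks of Figure 11 amount to a pair of predicates, sketched below; the angle-difference threshold and the parameter names are illustrative assumptions:

def directions_similar(cur_angle, nbr_angle, threshold=2):
    # 1110 of Figure 11: keep filtering only when the neighboring block's
    # prediction direction is the same as or similar to the current block's
    # (angle difference within a predetermined threshold).
    return abs(cur_angle - nbr_angle) <= threshold

def filtering_allowed_under_cip(cip_applied, neighbor_is_inter):
    # 1120 of Figure 11: under constrained intra prediction, an inter-coded
    # neighbor is not a trustworthy reference, so the region adjacent to it
    # is not filtered.
    return not (cip_applied and neighbor_is_inter)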
In this case, since block A next to the left 1220 adjacent to the target region of filtration 1231) corresponds to an unavailable region, the encoder and decoder cannot perform the filtration in the target region of filtration 1230. Figure 13 is a diagram showing schematically an example of a method of determining the region of conducting the filtration based on an intra-prediction mode of the current block. As described above, the coded: 'and the decoder can perform the intra prediction in the target coding / decoding block based on the reconstructed reference pixels. In this case, since the reference pixel and / or the prediction direction used 20 for the intra prediction can be changed according to the current block's intra prediction mode, it can be efficient to determine that a region having a prediction error relatively large is the region where filtration is performed, considering the current block's intra prediction mode. More specifically, the prediction pixels positioned at a region adjacent to the reference pixels that are not used for the prediction inside the prediction block may have a low correlation W104 with reference pixels and a big prediction error. However, the encoder and decoder perform a. filtering on the prediction pixels in the region adjacent to the reference pixels that are not used for the first prediction between the prediction pixels in the prediction block, thereby making it possible to reduce the prediction error and improve the efficiency of the prediction. .1310 of Figure 13 shows an example of a filtering region in the case where the prediction mode of the current block is DC mode and / or planar mode. In 1310 of Figure 13, 1313 can indicate a prediction block, and 1316 can indicate a region for performing filtration. As described above, in the case where the prediction mode of the current block is DC mode, one. Since the 1313 prediction block is generated by averaging the pixel values of the plurality of reference pixels, the correlation between the prediction pixels and the reference pixel becomes small. However, in this case, the encoder and decoder can determine that at least one horiacmtal pixel line (hereinafter referred to as an upper horizontal prediction pixel line) positioned at the top of the prediction block 1313 30 and at least the vertical pixel line (hereinafter referred to as a left vertical prediction pixel line) positioned at the leftmost part of the prediction block 1313 is the region for filtering 1316. Here, the number of lines of horizontal pixels included in the line, from the top horizontal prediction pixel and the number of vertical pixel lines 25 included in the left vertical prediction pixel line can be a predetermined fixed number. 55 / W4 each of the top horizontal prediction pixel line and the vertical prediction pixel line on the left can include a pixel line. In addition, as in an example in the Figure. 14 described below, the number of pixel lines included »in the top horizontal prediction pixel line 5 with the number c pixel lines included in the left vertical prediction pixel line can also be determined based on the current block sizes and / or the 1313 prediction block. That is, the number of pixel lines included in the top horizontal prediction pixel line and the number of pixel lines included in the left vertical prediction pixel line can be varied according to current block sizes and / or the 1313 prediction block. 
For example, each of the number of pixel lines included in the upper horizontal prediction pixel line and the number of pixel lines included in the left vertical prediction pixel line can 15 is 1, 2. 4, or similar. However, even though the prediction mode of the current block is planar mode (the prediction mode having the mode value of 34), a correlation between the prediction pixels and the reference pixel, may be small, however, in this case , the encoder and decoder 20 can determine that the upper horizontal prediction pixel line and the left vertical prediction pixel line are the region of filtering 131.6, as in DC mode. 1320 of Figure 13 shows an example of a region for performing filtration in the case where the dc mode; intra prediction of the current block 25 and the mc> of the vertical right (for example, the prediction mode having the mode value of 5, 6, 12. 13, 22, 23, 24, and 25). In 1320 the 56/104 Figure 13, 1323 can indicate - a prediction block, 1326 can indicate a region of filtration. In the case where the prediction mode of the current block is the vertical right mode, since the encoder and decoder perform an intra prediction in the current block based on the reference pixels above and / or the reference pixels above on the right, a correlation between the prediction pixels positioned in the region on the left in the prediction block 1323 and the reference pixels on the left, may become small. However, in this case, the encoder and decoder determine that at least one vertical pixel line is positioned at the leftmost part of the prediction block 1323, i.e., the vertical prediction pixel line on the left is the region for filtering. 1326 and performs filtration, thereby making it possible to improve the prediction efficiency. In this case, the number of vertical pixel lines included in the vertical prediction pixel line on the left can be a predetermined fixed number. For example, the vertical prediction pixel line on the left can include a vertical pixel line. In addition, as an example of Figure 14 described below, the number of vertical pixel lines included in the left vertical prediction pixel line can also be determined based on the sizes of the current block and / or the prediction block 1323 That is, the number of vertical pixel lines included in the vertical prediction pixel line on the left can be variable according to the sizes of the current block and / or the prediction block 1323 and be, for example, 1.2, 4, or similar. However, in the case where the prediction mode of the current block is the vertical mode, since the encoder and decoder perform 57/104 a large prediction in the current block using the reference pixels above, a correlation between the prediction pixels positioned in the region on the left in the prediction block and the reference pixels on the left may become small. However, even in this case, the encoder and the decoder can determine that the vertical prediction pixel line on the left is the region of filtering and perform the filtration. 1330 of Figure 13 shows an example of a filtering region in which the current block 0 intra prediction mode is the horizontal mode below (for example, the prediction mode having the value of 8, 9, 1.6, 17, 30, 31, 32. and 331. In 1330 of Figure 13, 1333 can indicate a prediction block, and 1,336 can indicate a filtering region. 
In the case where the intra prediction mode of the current block is the horizontal below mode, since the encoder and decoder perform the intra prediction on the current block using the reference pixels on the left and/or the reference pixels below on the left, the correlation between the prediction pixels positioned in the upper region in the prediction block 1333 and the reference pixels above can become small. Therefore, in this case, the encoder and decoder determine that at least one horizontal pixel line positioned at the uppermost part of the prediction block 1333, that is, the upper horizontal prediction pixel line, is the filtering target region 1336 and perform filtering, thereby making it possible to improve the prediction efficiency. In this case, the number of horizontal pixel lines included in the upper horizontal prediction pixel line can be a predetermined fixed number. For example, the upper horizontal prediction pixel line can include one horizontal pixel line. In addition, as in the example of Figure 14 described below, the number of horizontal pixel lines included in the upper horizontal prediction pixel line can also be determined based on the sizes of the current block and/or of the prediction block 1333. That is, the number of horizontal pixel lines included in the upper horizontal prediction pixel line can be varied according to the sizes of the current block and/or of the prediction block 1333 and be, for example, 1, 2, 4, or the like.

Meanwhile, in the case where the prediction mode of the current block is the horizontal mode, since the encoder and decoder perform the intra prediction on the current block using the reference pixels on the left, the correlation between the prediction pixels positioned in the upper region in the prediction block and the reference pixels above can become small. Therefore, even in this case, the encoder and decoder can determine that the upper horizontal prediction pixel line is the filtering target region and perform filtering.

Figure 14 is a diagram showing, schematically, an example of a method of determining the filtering target region based on the size and/or depth of the current block. In the case where the size of the current block (and/or of the prediction target block) is large, the size of a region having a large prediction error in the current block can also be large, and in the case where the size of the current block (and/or of the prediction target block) is small, the size of the region having the large prediction error in the current block can also be small. Therefore, the encoder and decoder determine the filtering target region based on the size (and/or depth) of the current block (and/or of the prediction target block), thereby making it possible to improve the coding efficiency. In this case, the encoder and decoder can determine that a region that tends to have a relatively large prediction error is the filtering target region.

1410 of Figure 14 shows an example of a filtering target region in the case where the size of the current block is 8x8. In 1410 of Figure 14, 1413 indicates a current block, and 1416 indicates a filtering target region. In 1410 of Figure 14, it is assumed
that the intra prediction mode of the current block 1413 corresponds to the vertical right mode (for example, the prediction mode having the mode value of 6). In this case, since the encoder and decoder perform the intra prediction on the current block using the reference pixels above and/or the reference pixels above on the right, the prediction error of a left region of the prediction block, whose distance from the reference pixels above and/or the reference pixels above on the right is large, can be large. Therefore, in this case, the encoder and decoder can determine that at least one vertical pixel line positioned at the leftmost part of the prediction block, that is, the left vertical prediction pixel line, is the filtering target region 1416.

1420 of Figure 14 shows an example of a filtering target region in the case where the size of the current block is 32x32. In 1420 of Figure 14, 1423 indicates a current block, and 1426 indicates a filtering target region. In 1420 of Figure 14, it is assumed that the intra prediction mode of the current block 1423 corresponds to the vertical right mode (for example, the prediction mode having the mode value of 6). In this case, since the encoder and decoder perform the intra prediction on the current block using the reference pixels above and/or the reference pixels above on the right, the prediction error of a left region of the prediction block, whose distance from the reference pixels above and/or the reference pixels above on the right is large, can be large. Therefore, in this case, the encoder and decoder can determine that at least one vertical pixel line positioned at the leftmost part of the prediction block, that is, the left vertical prediction pixel line, is the filtering target region 1426.

In 1410 and 1420 of Figure 14, the number of vertical pixel lines constituting the left vertical prediction pixel line can be determined based on the sizes of the current block 1413 or 1423 and/or of the prediction block. In 1410 of Figure 14, the size of the current block is 8x8, which is a relatively small value. Therefore, in this case, since the size of the region having the large prediction error can be small, the encoder and decoder can determine that two vertical pixel lines positioned at the leftmost part of the prediction block are the filtering target region. On the other hand, in 1420 of Figure 14, the size of the current block is 32x32, which is a relatively large value. Therefore, in this case, since the size of the region having the large prediction error can be large, the encoder and decoder can determine that four vertical pixel lines positioned at the leftmost part of the prediction block are the filtering target region.

Table 5 below shows an example of a filtering target region according to a block size, and Table 6 below shows an example of a filtering target region according to a depth value of a current block. The encoder and decoder can determine the filtering target region based on the size and/or depth of the current block, as shown in Tables 5 and 6 below.

Table 5

Block size | 2x2 4x4 8x8 16x16 32x32 64x64
Filtering target region | 0x0 1x4 2x8 4x16 8x32 16x64

Table 6

Depth value | 5 4 3 2 1 0
Filtering target region | 0x0 4x1 8x2 16x4 32x8 64x16

Here, the current block can correspond to the TU, and a size of the TU can be, for example, 2x2, 4x4, 8x8, 16x16, 32x32, 64x64, or the like. However, the present invention is not limited thereto. That is, the current block can correspond to the CU, the PU, or the like, instead of the TU.
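Table 5's mapping from block size to the size of the filtering target region is a direct lookup, as in the sketch below; the region is expressed here as (number of columns) x (number of rows) of the left-edge region, following the table:

# Table 5: filtering target region (columns x rows) per block size;
# 0x0 means that no filtering is performed.
REGION_BY_BLOCK_SIZE = {
    2: (0, 0), 4: (1, 4), 8: (2, 8),
    16: (4, 16), 32: (8, 32), 64: (16, 64),
}

def filtering_region_for_size(n):
    # Returns the (columns, rows) of the left vertical region to filter
    # for an n x n block, defaulting to no filtering for unlisted sizes.
    return REGION_BY_BLOCK_SIZE.get(n, (0, 0))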
The size and/or position of the filtering target region determined according to the size and/or depth of the current block are not limited to the examples mentioned above; they can also be determined to be a size and/or position different from those of the examples mentioned above. In addition, in the examples above, the method of determining the filtering target region has been described based on the vertical right mode for convenience of explanation. However, the method of determining the filtering target region can also be similarly applied in the case where the prediction mode corresponds to a mode other than the vertical right mode.

Figure 15 is a diagram showing, schematically, an example of a method of determining the filtering target region based on the encoding modes of the neighboring blocks adjacent to the current block. In Figure 15, it is assumed that the intra prediction mode of the current block C 1510 corresponds to the vertical prediction mode. In this case, since the encoder and decoder perform the intra prediction on the current block using the reference pixels above and/or the reference pixels above on the right, filtering can be performed on the pixels positioned in the left region in the prediction block. However, in the case where the encoding mode of a neighboring block adjacent to the current block is the inter mode, it is very likely that the pixel values reconstructed in that neighboring block are not reliable, due, for example, to an error generated in a network, and when filtering is performed based on the reconstructed pixel values of a neighboring block whose encoding mode is the inter mode, the coding efficiency can be reduced. Therefore, the encoder and decoder may not perform filtering on a region adjacent to a neighboring block whose encoding mode is the inter mode. That is, the encoder and decoder can determine the filtering target region based on the encoding modes of the neighboring blocks adjacent to the current block.

Referring to Figure 15, as the neighboring blocks adjacent to the left of the current block 1510, there are a reconstructed neighboring block A 1520 and a reconstructed neighboring block B 1530. Here, it is assumed that the encoding mode of the neighboring block A 1520 is the intra mode and the encoding mode of the neighboring block B 1530 is the inter mode. In this case, the encoder and decoder can determine that only the region adjacent to the block encoded in the intra mode, within the left region of the prediction block, is the filtering target region.
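Figure 15's restriction can be expressed as masking the rows of the left-edge filtering region by the coding mode of the reconstructed block each row borders. A sketch, in which the mapping from row to neighboring coding mode is passed in as an assumed helper:

def mask_region_by_neighbor_mode(rows, neighbor_mode_of_row):
    # Keep only the rows of the left vertical filtering region whose
    # adjacent reconstructed neighbor was coded in the intra mode; rows
    # bordering inter-coded neighbors are excluded (Figure 15).
    return [y for y in rows if neighbor_mode_of_row(y) == "intra"]

# Illustrative use: for an 8x8 block whose upper-left neighbor is intra-coded
# and whose lower-left neighbor is inter-coded, only rows 0-3 remain.
kept = mask_region_by_neighbor_mode(
    range(8), lambda y: "intra" if y < 4 else "inter")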
Figures 16A and 16B are diagrams showing an example of a method of determining the type of filter according to the intra prediction mode of the current block. 1610 of Figure 16A shows an example of a method of determining the type of filter in the case where the prediction mode of the current block is the DC mode and/or the planar mode. In 1610 of Figure 16A, 1615 indicates a prediction block, and 1620 indicates a filter tap applied to a filtering target pixel.

As described above, in the case where the prediction mode of the current block is the DC mode, since the prediction block 1615 is generated by averaging the pixel values of a plurality of reference pixels, the correlation between the prediction pixels and the reference pixels becomes small. Therefore, in this case, the encoder and decoder can determine that the prediction pixels (for example, (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), (7,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7)) included in an upper horizontal prediction pixel line (for example, a horizontal pixel line positioned at the uppermost part of the prediction block 1615) and in a left vertical prediction pixel line (for example, a vertical pixel line positioned at the leftmost part of the prediction block 1615) are the filtering target region. In addition, in the case where the prediction mode of the current block is the planar mode, the correlation between the prediction pixels and the reference pixels can be small. Therefore, in this case, the encoder and decoder can determine that the prediction pixels included in the upper horizontal prediction pixel line and in the left vertical prediction pixel line are the filtering target region, as in the DC mode.

In the case where the prediction mode of the current block is the DC mode and/or the planar mode, the encoder and decoder can apply a 3-tap filter 1629 of [1/4, 2/4, 1/4] to the upper left prediction pixel (0,0) positioned at the upper left part of the prediction block. In this case, the encoder and decoder can perform filtering on the filtering target pixel based on the filtering target pixel (0,0), the reference pixel (0,-1) adjacent to the upper part of the filtering target pixel, and the reference pixel (-1,0) adjacent to the left of the filtering target pixel. In this case, the filter coefficient applied to the filtering target pixel can be 2/4, and the filter coefficient applied to the reference pixel adjacent to the upper part of the filtering target pixel and to the reference pixel adjacent to the left of the filtering target pixel can be 1/4.

In addition, in the case where the prediction mode of the current block is the DC mode and/or the planar mode, the encoder and decoder can apply a horizontal 2-tap filter 1623 of [1/4, 3/4] to each of the pixels (for example, (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7)) other than the upper left prediction pixel among the prediction pixels included in the left vertical prediction pixel line. Here, assuming that the position of the filtering target pixel is (0, y), the encoder and decoder can perform filtering on the filtering target pixel based on the filtering target pixel (0, y) and the reference pixel (-1, y) adjacent to the left of the filtering target pixel. In this case, the filter coefficient applied to the filtering target pixel can be 3/4, and the filter coefficient applied to the reference pixel adjacent to the left of the filtering target pixel can be 1/4.

In addition, in the case where the prediction mode of the current block is the DC mode and/or the planar mode, the encoder and decoder can apply a vertical 2-tap filter 1625 of [1/4, 3/4] to each of the pixels (for example, (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0)) other than the upper left prediction pixel among the prediction pixels included in the upper horizontal prediction pixel line. Here, assuming that the position of the filtering target pixel is (x, 0), the encoder and decoder can perform filtering on the filtering target pixel based on the filtering target pixel (x, 0) and the reference pixel (x, -1) adjacent to the upper part of the filtering target pixel. In this case, the filter coefficient applied to the filtering target pixel can be 3/4, and the filter coefficient applied to the reference pixel adjacent to the upper part of the filtering target pixel can be 1/4.
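Restated in integer arithmetic (with the usual +2 rounding offset and a shift by 2 reproducing the 1/4 weights), the DC/planar filtering of 1610 becomes the sketch below, assuming ref_top[x] holds reference pixel (x, -1) and ref_left[y] holds reference pixel (-1, y); the same integer sums reappear as Equation 1 in Figure 17 below:

def filter_dc_planar(pred, ref_top, ref_left):
    # pred is an n x n prediction block (list of rows) produced by the
    # DC or planar mode; out is its filtered copy.
    n = len(pred)
    out = [row[:] for row in pred]
    # 3-tap [1/4, 2/4, 1/4] filter 1629 at the upper left prediction pixel (0, 0).
    out[0][0] = (ref_left[0] + 2 * pred[0][0] + ref_top[0] + 2) >> 2
    # Vertical 2-tap [1/4, 3/4] filter 1625 on the rest of the top row (x, 0).
    for x in range(1, n):
        out[0][x] = (ref_top[x] + 3 * pred[0][x] + 2) >> 2
    # Horizontal 2-tap [1/4, 3/4] filter 1623 on the rest of the left column (0, y).
    for y in range(1, n):
        out[y][0] = (ref_left[y] + 3 * pred[y][0] + 2) >> 2
    return out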
reference (x, -l) 25 adjacent to. an upper part of the target filtration pixel. In this case, the filter coefficient applied to the target filtration pixel can be 3/4, and 64/104 a filter coefficient applied to the reference pixel adjacent to the top of the target filtration pixel can be 1/4. In the examples mentioned above, the encoder and decoder can also use different types of filters (for example, a filter shape, a filter tap, a filter coefficient, or the like) according to the current block size. In this case, the encoder and decoder can adaptively determine the type of filter based on the current block size. However, the encoder and decoder lick can always use one. fixed filter type pre-finished10 (for example, a filter shape, a filter tap, a filter coefficient, or the like) regardless of the sizes of the current block and / or the prediction block as in the examples mentioned above. 1.630 of Figure I6A shows an example of a two-way filter type termination method in the case where the prediction mode of the current block is the right vertical modi (for example, the prediction mode having the mode value of 5 , 6, 12, 13, 22, 23, 24, and 25). In .1630 of Figure 16A, 1635 indicates a prediction block, and 1640 is a filter tap applied to a target filtration pixel. As described above, in the case where the prediction mode of the current block is the vertical right mode, since the encoder and decoder perform an intra prediction in the current block based on the reference pixels above, and / or. in the reference pixels above on the right, the correlation between the prediction pixels positioned in the region on the left in the prediction block 1635 and the reference pixels on the left, can become small. However, in this case, the encoder and decoder can determine that the prediction pixels (for example, (0.0), (0.1), (0.2), (0.3), (0.4) , (0.5), (0.6), and (0.7)) included with a vertical prediction pixel line on the left (for example, a vertical pixel line positioned in the leftmost part of the prediction block5 1635) are in the filtration region. However, in the case where the prediction mode of the current block is the vertical mode (for example, the prediction mode having · the mode value of 0), since the encoder and decoder perform intra prediction in the current block using the reference pixels above, the correlation between the prediction pixels positioned in the region on the left in the prediction block and the reference pixels on the left, can become small. However, even in this case, the encoder and decoder can determine that the prediction pixels included in the vertical prediction pixel line on the left are in the filtering region. However, a type of filter applied to vertical mode can be different from a type of filter applied to vertical right mode, In the case where the prediction mode of the current block with the vertical right mode, the encoder and decoder can apply a filter of 2 diagonal taps 1640 of [1/4, 3/4] to each of the prediction pixels (for example, (0.1), (0.2), (0.3), (0.4), (0.5). (0.6), and (0.7)) included .in the pixel line of vertical prediction on the left. Here, by assuming that a position of the target filtration pixel is (0, y), the encoder and decoder can perform filtration on the target filtration pixel based on the target filtration pixel (0, y) and a reference pixel. (-l, y * l) 25 adjacent to a lower part of a reference pixel adjacent to the left of the target filtration pixel. 
In this case, the filter coefficient applied to the filtering target pixel can be 3/4, and the filter coefficient applied to the reference pixel adjacent to the lower part of the reference pixel adjacent to the left of the filtering target pixel can be 1/4.

1650 of Figure 16B shows an example of a method of determining the type of filter in the case where the prediction mode of the current block is the horizontal below mode (for example, the prediction mode having the mode value of 8, 9, 16, 17, 30, 31, 32, or 33). In 1650 of Figure 16B, 1655 indicates a prediction block, and 1660 indicates a filter tap applied to a filtering target pixel.

As described above, in the case where the prediction mode of the current block is the horizontal below mode, since the encoder and decoder perform the intra prediction on the current block using the reference pixels on the left and/or the reference pixels below on the left, the correlation between the prediction pixels positioned in the upper region in the prediction block 1655 and the reference pixels above can become small. Therefore, in this case, the encoder and decoder can determine that the prediction pixels (for example, (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0)) included in the upper horizontal prediction pixel line (for example, a horizontal pixel line positioned at the uppermost part of the prediction block 1655) are the filtering target region. Meanwhile, in the case where the prediction mode of the current block is the horizontal mode (for example, the prediction mode having the mode value of 1), since the encoder and decoder perform the intra prediction on the current block using the reference pixels on the left, the correlation between the prediction pixels positioned in the upper region in the prediction block 1655 and the reference pixels above can become small. Therefore, even in this case, the encoder and decoder can determine that the prediction pixels included in the upper horizontal prediction pixel line are the filtering target region. However, the type of filter applied in the horizontal mode can be different from the type of filter applied in the horizontal below mode.

In the case where the prediction mode of the current block is the horizontal below mode, the encoder and decoder can apply a diagonal 2-tap filter 1660 of [1/4, 3/4] to each of the prediction pixels (for example, (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0)) included in the upper horizontal prediction pixel line. Here, assuming that the position of the filtering target pixel is (x, 0), the encoder and decoder can perform filtering on the filtering target pixel based on the filtering target pixel (x, 0) and the reference pixel (x+1, -1) adjacent to the right of the reference pixel adjacent to the upper part of the filtering target pixel. In this case, the filter coefficient applied to the filtering target pixel can be 3/4, and the filter coefficient applied to the reference pixel adjacent to the right of the reference pixel adjacent to the upper part of the filtering target pixel can be 1/4.
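The two directional cases differ from the DC case only in which reference pixel carries the 1/4 weight: the vertical right modes reach the below-left reference (-1, y+1), and the horizontal below modes reach the above-right reference (x+1, -1). A sketch under the same array conventions as before; note that ref_left must extend one pixel past the block (to (-1, n)) and ref_top likewise (to (n, -1)):

def filter_vertical_right(pred, ref_left):
    # Diagonal 2-tap [1/4, 3/4] filter 1640 on the left vertical prediction
    # pixel line: each (0, y) is filtered against reference pixel (-1, y+1).
    n = len(pred)
    out = [row[:] for row in pred]
    for y in range(n):
        out[y][0] = (ref_left[y + 1] + 3 * pred[y][0] + 2) >> 2
    return out

def filter_horizontal_below(pred, ref_top):
    # Diagonal 2-tap [1/4, 3/4] filter 1660 on the upper horizontal prediction
    # pixel line: each (x, 0) is filtered against reference pixel (x+1, -1).
    n = len(pred)
    out = [row[:] for row in pred]
    for x in range(n):
        out[0][x] = (ref_top[x + 1] + 3 * pred[0][x] + 2) >> 2
    return out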
1670 of Figure 16B shows an example of a method of adaptively determining the type of filter (for example, a filter shape, a filter coefficient, a filter tap, or the like) according to the intra prediction mode (particularly, the directional prediction modes) of the current block. In 1670 of Figure 16B, 1675 indicates a prediction block, and 1680 indicates a filter tap applied to a filtering target pixel.

As in the examples of 1630 and 1650 described above, the encoder and decoder can apply a predetermined fixed type of filter to each of the vertical right mode and/or the horizontal below mode. However, the encoder and decoder can also apply various types of filters other than the filter types mentioned above according to the intra prediction mode. In this case, the encoder and decoder can adaptively determine the type of filter based on the intra prediction mode of the current block.

As an example, the encoder and decoder can use a 3-tap filter 1681 that performs filtering based on the filtering target pixel (x, y), the reference pixel (x+2, y-1), and the reference pixel (x+3, y-1). In this case, the filter coefficient applied to the filtering target pixel (x, y) can be 12, the filter coefficient applied to the reference pixel (x+2, y-1) can be 3, and the filter coefficient applied to the reference pixel (x+3, y-1) can be 1. As another example, the encoder and decoder can use a 3-tap filter 1683, 1685, or 1687 that performs filtering based on the filtering target pixel (x, y), the reference pixel (x+1, y-1), and the reference pixel (x+2, y-1). In this case, the filter coefficient applied to the filtering target pixel (x, y) can be 12, the filter coefficient applied to the reference pixel (x+1, y-1) can be 1, and the filter coefficient applied to the reference pixel (x+2, y-1) can be 3 (1683). Alternatively, the filter coefficient applied to the filtering target pixel (x, y) can be 12, the filter coefficient applied to the reference pixel (x+1, y-1) can be 2, and the filter coefficient applied to the reference pixel (x+2, y-1) can be 2 (1685). Alternatively, the filter coefficient applied to the filtering target pixel (x, y) can be 8, the filter coefficient applied to the reference pixel (x+1, y-1) can be 6, and the filter coefficient applied to the reference pixel (x+2, y-1) can be 2 (1687). As yet another example, the encoder and decoder can also use a 2-tap filter 1689 that performs filtering based on the filtering target pixel (x, y) and the reference pixel (x+1, y-1). In this case, the filter coefficient applied to the filtering target pixel (x, y) can be 8, and the filter coefficient applied to the reference pixel (x+1, y-1) can be 8.

Meanwhile, in the case where the intra prediction mode of the current block corresponds to one of the prediction modes (for example, prediction modes having mode values of 3, 4, 7, 10, 11, 14, 15, 18, 19, 20, 21, 26, 27, 28, and 29) other than the prediction modes mentioned above, the encoder and decoder can use at least one of the reference pixels above and the reference pixels above on the right for the intra prediction, and use at least one of the reference pixels on the left and the reference pixels below on the left for the intra prediction. Therefore, in this case, since all the prediction pixels positioned in the left region and in the upper region in the prediction block can maintain a correlation with the reference pixels, the encoder and decoder may not perform filtering on the prediction block.

In addition, as described above in the example of Figure 10, since the encoder and decoder can determine whether or not filtering is performed based on the information on the color component of the current block, the encoder and decoder can also perform the filtering processes described above with reference to Figures 16A and 16B only in the case where the current block is the luminance block.
That is, the filtering processes according to the examples mentioned above can be applied only in the case where the current block corresponds to the luminance block, and may not be applied in the case where the current block corresponds to the chroma block.

Figure 17 is a diagram showing, schematically, the method of determining the type of filter according to the examples of Figures 16A and 16B. 1710 of Figure 17 shows an example of the type of filter in the case where the prediction mode of the current block is the DC mode and/or the planar mode. 1710 of Figure 17 shows an example of the same type of filter as the type of filter shown in 1610 of Figure 16A.

As described above with reference to 1610 of Figure 16A, in the case where the prediction mode of the current block is the DC mode (for example, the prediction mode having the mode value of 2) and/or the planar mode (for example, the prediction mode having the mode value of 34), the encoder and decoder can apply a 3-tap filter to the upper left prediction pixel (for example, a pixel c in 1710 of Figure 17) positioned at the upper left part of the prediction block. In addition, the encoder and decoder can apply a horizontal 2-tap filter to each of the pixels (for example, a pixel g in 1710 of Figure 17) other than the upper left prediction pixel among the prediction pixels included in the left vertical prediction pixel line. In addition, the encoder and decoder can apply a vertical 2-tap filter to each of the pixels (for example, a pixel e in 1710 of Figure 17) other than the upper left prediction pixel among the prediction pixels included in the upper horizontal prediction pixel line. As an example, this can be represented by the following Equation 1.

Equation 1

F_g = (f + 3*g + 2) >> 2
F_e = (d + 3*e + 2) >> 2
F_c = (a + 2*c + b + 2) >> 2

where F_x indicates a filtered prediction value pixel generated by performing filtering on a prediction value pixel of a position x.

1730 of Figure 17 shows an example of the type of filter in the case where the prediction mode of the current block is the vertical right mode (for example, the prediction mode having the mode value of 5, 6, 12, 13, 22, 23, 24, or 25). 1730 of Figure 17 shows an example of the same type of filter as the type of filter shown in 1630 of Figure 16A.

As described above with reference to 1630 of Figure 16A, in the case where the prediction mode of the current block is the vertical right mode, the encoder and decoder can apply a 2-tap filter to each of the prediction pixels (for example, a pixel i and a pixel k in 1730 of Figure 17) included in the left vertical prediction pixel line. In the vertical right mode, since the prediction direction is a diagonal direction, the encoder and decoder can determine that the filter shape is a diagonal shape. As an example, this can be represented by the following Equation 2.

Equation 2

F_i = (h + 3*i + 2) >> 2
F_k = (j + 3*k + 2) >> 2

where F_x indicates a filtered prediction value pixel generated by performing filtering on a prediction value pixel of a position x.

1750 of Figure 17 shows an example of the type of filter in the case where the prediction mode of the current block is the horizontal below mode (for example, the prediction mode having the mode value of 8, 9, 16, 17, 30, 31, 32, or 33). 1750 of Figure 17 shows an example of the same type of filter as the type of filter shown in 1650 of Figure 16B. As described above with reference to 1650 of Figure 16B,
in the case where the prediction mode of the current block is the horizontal below mode, the encoder and the decoder may apply a 2-tap filter to each of the prediction pixels (for example, a pixel m and a pixel o in 1750 of Figure 17) included in an upper horizontal prediction pixel line. In the horizontal below mode, since the prediction direction is a diagonal direction, the encoder and the decoder may determine that the shape of the filter is a diagonal shape. As an example, this may be represented by the following Equation 3.

Equation 3

F_m = (l + 3*m + 2) >> 2
F_o = (n + 3*o + 2) >> 2

where F_x indicates a filtered prediction value pixel generated by performing the filtering on a prediction value pixel of a position x.
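As a concrete illustration of Equations 1 to 3, the filters of 1710, 1730, and 1750 might be sketched as follows; the function and array names are assumptions made for this example and are not part of the described equipment. Here ref_above and ref_left are assumed to hold the nS+1 reconstructed reference pixels at (x, -1) and (-1, y), matching the range x, y = -1..nS used later in the text, and all filters normalize with a rounding offset of 2 and a right shift by 2.

    #include <stdint.h>

    /* Equation 1 (DC/planar filter type of 1610): filter the upper-left
     * corner pixel, the rest of the top row, and the rest of the left
     * column of the prediction block. */
    void filter_dc_planar(uint8_t *pred, int stride, int nS,
                          const uint8_t *ref_above, const uint8_t *ref_left)
    {
        /* F_c = (a + 2*c + b + 2) >> 2: 3-tap on the upper-left pixel */
        pred[0] = (uint8_t)((ref_above[0] + 2 * pred[0] + ref_left[0] + 2) >> 2);
        for (int x = 1; x < nS; x++)      /* F_e = (d + 3*e + 2) >> 2 */
            pred[x] = (uint8_t)((ref_above[x] + 3 * pred[x] + 2) >> 2);
        for (int y = 1; y < nS; y++)      /* F_g = (f + 3*g + 2) >> 2 */
            pred[y * stride] =
                (uint8_t)((ref_left[y] + 3 * pred[y * stride] + 2) >> 2);
    }

    /* Equations 2 and 3 (diagonal [1,3] 2-tap): filter the left column for
     * the vertical right modes, or the top row for the horizontal below
     * modes, reaching one step along the diagonal into the reference line. */
    void filter_diagonal(uint8_t *pred, int stride, int nS,
                         const uint8_t *ref_above, const uint8_t *ref_left,
                         int vertical_right)
    {
        if (vertical_right) {             /* F_i = (h + 3*i + 2) >> 2 */
            for (int y = 0; y < nS; y++)
                pred[y * stride] =
                    (uint8_t)((ref_left[y + 1] + 3 * pred[y * stride] + 2) >> 2);
        } else {                          /* F_m = (l + 3*m + 2) >> 2 */
            for (int x = 0; x < nS; x++)
                pred[x] = (uint8_t)((ref_above[x + 1] + 3 * pred[x] + 2) >> 2);
        }
    }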
Figure 18 is a diagram schematically showing an example of a filter type applied in the case where the prediction mode of a current block is a vertical mode and/or a horizontal mode. In an example to be described below, terms such as a first reference pixel, a second reference pixel, a third reference pixel, and the like, will be used independently in each of 1810 and 1820 of Figure 18. For example, the first reference pixel used in 1810 of Figure 18 is not the same as the first reference pixel used in 1820 of Figure 18, and the second and third reference pixels may have independent meanings in 1810 and 1820 of Figure 18, respectively.

As described above, the filter determined by the filter type may not be a filter defined by the filter shape, the filter tap, the filter coefficient, or the like. For example, the encoder and the decoder may also perform a filtering process by adding an offset value determined by a predetermined process to the pixel values of the reference pixels. In this case, the filtering process may also be combined with the prediction block generation process to be performed as a single process. That is, the filtered prediction value pixel of each of the pixels in the current block may be derived only by the filtering process mentioned above. In this case, the filtering process mentioned above may correspond to a single process including both the prediction pixel generation process and the filtering process for the generated prediction pixels. In this case, the filtering process may also be called a final prediction pixel (and/or filtered prediction pixel) generation process using a reference pixel. Therefore, in Figure 18, the examples will be described in view of the generation of a prediction pixel.

1810 of Figure 18 shows an example of a method of generating a prediction pixel in the case where the prediction mode of the current block is the vertical mode. As described above, in the case where the prediction mode of the current block is the vertical mode, the encoder and the decoder may generate the prediction block by performing the intra prediction on the current block using the above reference pixels. In this case, since the correlation between the prediction pixel positioned in the left region in the prediction block and the left reference pixel is small, the prediction pixel positioned in the left region in the prediction block may have a large prediction error. Therefore, the encoder and the decoder may generate the prediction block as follows for each of the pixels (for example, (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7)) included in a vertical pixel line (hereinafter referred to as a left vertical pixel line) positioned in the leftmost part of the current block 1815.

Referring to 1810 of Figure 18, the pixels of positions (0,0), (0,1), (0,2), (0,3), (0,4), (0,5), (0,6), and (0,7) may be present in the left vertical pixel line. In 1810 of Figure 18, it is assumed that a current prediction target pixel is a pixel (0,4) among the pixels in the left vertical pixel line. Since the prediction mode of the current block 1815 is the vertical mode, the encoder and the decoder may fill a prediction target pixel position with a pixel value of a first reference pixel (0,-1) (for example, a reference pixel positioned at the leftmost part among the above reference pixels) positioned on the same vertical line as the vertical line on which the prediction target pixel is positioned among the above reference pixels. That is, in the case where the prediction mode of the current block 1815 is the vertical mode, the pixel value of the first reference pixel may be determined to be a prediction value pixel of the prediction target pixel.

However, in this case, since the generated prediction value pixel may have a large prediction error, the encoder and the decoder may add an offset value to the pixel value of the first reference pixel to derive a final prediction value pixel. Here, a process of adding the offset value may correspond to the filtering process or correspond to a part of the prediction pixel generation process. In this case, the offset value may be derived based on a second reference pixel (-1,4) adjacent to the prediction target pixel and a third reference pixel (-1,-1) adjacent to the left of the first reference pixel. For example, the offset value may correspond to a value obtained by subtracting a pixel value of the third reference pixel from a pixel value of the second reference pixel. That is, the encoder and the decoder may add the difference between the pixel values of the second and third reference pixels to the pixel value of the first reference pixel to derive a prediction value of the prediction target pixel. The prediction pixel generation process mentioned above may be similarly applied to the pixels other than the pixel (0,4) among the pixels in the left vertical pixel line. The prediction pixel generation process mentioned above may be represented by the following Equation 4.

Equation 4

p'[x,y] = p[x,-1] + ((p[-1,y] - p[-1,-1]) >> 1), {x = 0, y = 0..nS-1}

where p'[x,y] indicates a final prediction value pixel for a prediction target pixel of a position (x,y), and p[x,-1] indicates a first reference pixel positioned on the same vertical line as the vertical line on which the prediction target pixel is positioned among the above reference pixels. In addition, p[-1,y] indicates a second reference pixel adjacent to the left of the prediction target pixel, and p[-1,-1] indicates a third reference pixel adjacent to the left of the first reference pixel. In addition, nS indicates a height of the current block.

However, in the case where the prediction mode of the current block 1815 is the vertical mode, a region to which the offset and/or the filtering is applied is not limited to the examples mentioned above. For example, the encoder and the decoder may also apply the prediction pixel generation process mentioned above to two vertical pixel lines positioned in the leftmost part of the current block 1815. In this case, the prediction pixel generation process may be represented by the following Equation 5.

Equation 5

p'[x,y] = p[x,y] + ((p[-1,y] - p[-1,-1] + (1 << x)) >> (x + 1)), {x = 0..1, y = 0..7}
where p'[x,y] indicates a final prediction value pixel for a prediction target pixel of a position (x,y), and p[x,y] indicates a prediction value pixel generated by a general vertical prediction process. In addition, p[-1,y] indicates a reference pixel positioned on the same horizontal line as the horizontal line on which the prediction target pixel is positioned among the left reference pixels, and p[-1,-1] indicates an upper left corner reference pixel.

However, the offset addition process described above may be applied only in the case where the current block is the luminance block and may not be applied in the case where the current block is the chroma block. For example, in the case where the current block is the chroma block, the encoder and the decoder may also directly determine that the first reference pixel is the prediction value pixel of the prediction target pixel without applying the offset value.

1820 of Figure 18 shows an example of a method of generating a prediction pixel in the case where the prediction mode of the current block is the horizontal mode. As described above, in the case where the prediction mode of the current block is the horizontal mode, the encoder and the decoder may generate the prediction block by performing the intra prediction on the current block using the left reference pixels. In this case, since the correlation between the prediction pixel positioned in the upper region in the prediction block and the above reference pixel is small, the prediction pixel positioned in the upper region in the prediction block may have a large prediction error. Therefore, the encoder and the decoder may generate the prediction block or the prediction pixels as follows for each of the pixels (for example, (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0)) included in a horizontal pixel line (hereinafter referred to as an upper horizontal pixel line) positioned in the uppermost part of the current block 1825.

Referring to 1820 of Figure 18, the pixels of positions (0,0), (1,0), (2,0), (3,0), (4,0), (5,0), (6,0), and (7,0) may be present in the upper horizontal pixel line. In 1820 of Figure 18, it is assumed that a current prediction target pixel is a pixel (4,0) among the pixels in the upper horizontal pixel line. Since the prediction mode of the current block 1825 is the horizontal mode, the encoder and the decoder may fill a prediction target pixel position with a pixel value of a first reference pixel (-1,0) (for example, a reference pixel positioned at the uppermost part among the left reference pixels) positioned on the same horizontal line as the horizontal line on which the prediction target pixel is positioned among the left reference pixels. That is, in the case where the prediction mode of the current block 1825 is the horizontal mode, the pixel value of the first reference pixel may be determined to be a prediction value pixel of the prediction target pixel.

However, in this case, since the generated prediction value pixel may have a large prediction error, the encoder and the decoder may add an offset value to the pixel value of the first reference pixel to derive a final prediction value pixel. Here, a process of adding the offset value may correspond to the filtering process or correspond to a part of the prediction pixel generation process.
In this case, the offset value may be derived based on a second reference pixel (4,-1) adjacent to an upper part of the prediction target pixel and a third reference pixel (-1,-1) adjacent to an upper part of the first reference pixel. For example, the offset value may correspond to a value obtained by subtracting a pixel value of the third reference pixel from a pixel value of the second reference pixel. That is, the encoder and the decoder may add the difference between the pixel values of the second and third reference pixels to the pixel value of the first reference pixel to derive a prediction value of the prediction target pixel. The prediction pixel generation process mentioned above may be similarly applied to the pixels other than the pixel (4,0) among the pixels in the upper horizontal pixel line. The prediction pixel generation process mentioned above may be represented by the following Equation 6.

Equation 6

p'[x,y] = p[-1,y] + ((p[x,-1] - p[-1,-1]) >> 1), {x = 0..nS-1, y = 0}

where p'[x,y] indicates a final prediction value pixel for a prediction target pixel of a position (x,y), and p[-1,y] indicates a first reference pixel positioned on the same horizontal line as the horizontal line on which the prediction target pixel is positioned among the left reference pixels. In addition, p[x,-1] indicates a second reference pixel adjacent to an upper part of the prediction target pixel, and p[-1,-1] indicates a third reference pixel adjacent to an upper part of the first reference pixel. In addition, nS indicates a width of the current block.

However, in the case where the prediction mode of the current block 1825 is the horizontal mode, a region to which the offset and/or the filtering is applied is not limited to the examples mentioned above. For example, the encoder and the decoder may also apply the prediction pixel generation process mentioned above to two horizontal pixel lines positioned in the uppermost part of the current block 1825. In this case, the prediction pixel generation process may be represented by the following Equation 7.

Equation 7

p'[x,y] = p[x,y] + ((p[x,-1] - p[-1,-1] + (1 << y)) >> (y + 1)), {x = 0..7, y = 0..1}

where p'[x,y] indicates a final prediction value pixel for a prediction target pixel of a position (x,y), and p[x,y] indicates a prediction value pixel generated by a general horizontal prediction process. In addition, p[x,-1] indicates a reference pixel positioned on the same vertical line as the vertical line on which the prediction target pixel is positioned among the above reference pixels, and p[-1,-1] indicates an upper left corner reference pixel.

However, similar to 1810 of Figure 18, the offset addition process described above may be applied only in the case where the current block is the luminance block and may not be applied in the case where the current block is the chroma block. For example, in the case where the current block is the chroma block, the encoder and the decoder may also directly determine that the first reference pixel is the prediction value pixel of the prediction target pixel without applying the offset value.
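A minimal sketch of the Equation 4 to 7 offset, assuming 8-bit samples and the array layout named below (none of which is mandated by the text): for the vertical mode the leftmost columns are offset by a scaled difference of left reference pixels, and the horizontal mode is the transpose of the same operation.

    #include <stdint.h>

    static uint8_t clip8(int v) { return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v); }

    /* Vertical mode with the Equation 5 offset applied to the `lines`
     * leftmost columns (lines = 2 here; lines = 1 approximates Equation 4
     * up to rounding).  ref_above[x] = p[x,-1], ref_left[y] = p[-1,y],
     * corner = p[-1,-1].  An arithmetic right shift is assumed for the
     * possibly negative difference. */
    void vertical_pred_with_offset(uint8_t *pred, int stride, int nS,
                                   const uint8_t *ref_above,
                                   const uint8_t *ref_left, uint8_t corner,
                                   int lines)
    {
        for (int y = 0; y < nS; y++)
            for (int x = 0; x < nS; x++) {
                int v = ref_above[x];                 /* plain vertical copy */
                if (x < lines)                        /* boundary offset */
                    v += (ref_left[y] - corner + (1 << x)) >> (x + 1);
                /* clipping to the sample range is an added safety assumption */
                pred[y * stride + x] = clip8(v);
            }
    }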
Figure 19 is a diagram schematically showing another example of a filter type according to the exemplary embodiment of the present invention.

In the example of Figure 19, since the encoder and the decoder perform the intra prediction on the current block using the left reference pixels and/or the below-left reference pixels, the correlation between the prediction pixels positioned in the upper region in the prediction block 1910 and the above reference pixels may become small. Therefore, in this case, the encoder and the decoder may perform the filtering on the prediction pixels included in an upper horizontal prediction pixel line (for example, a horizontal pixel line positioned in the uppermost part of the prediction block 1910). Although the case in which the filtering is performed on the pixels in the upper horizontal prediction pixel line is described in an example to be described below, a filtering method according to Figure 19 may be similarly applied to the case in which the filtering is performed on the pixels in a left vertical prediction pixel line (for example, a vertical pixel line positioned in the leftmost part of the prediction block 1910).

Referring to Figure 19, the encoder and the decoder may perform the filtering on a predicted pixel, that is, a prediction pixel B 1920, in the prediction block 1910. The filtering performing process mentioned above may correspond to a process of adding an appropriate offset value to a pixel value of the prediction pixel 1920. The offset value may be derived based on a reference pixel. As an example, in the case where the filtering target pixel 1920 is a pixel positioned in the uppermost part of the prediction block 1910, the reference pixel used to derive the offset value may be a reference pixel A 1930 adjacent to an upper part of the filtering target pixel 1920. As another example, in the case where the filtering target pixel is a pixel positioned in the leftmost part of the prediction block 1910, the reference pixel used to derive the offset value may be a reference pixel adjacent to the left of the filtering target pixel. Hereinafter, an example of a process of deriving the offset value based on the reference pixel 1930 will be described.

The encoder and the decoder may perform the intra prediction on the reference pixel 1930 to obtain a prediction value of the reference pixel, that is, a prediction reference pixel value. Here, the intra prediction may be a directional prediction. In this case, the encoder and the decoder may perform the prediction on the reference pixel 1930 based on the same intra prediction mode (and/or prediction direction) 1950 as the prediction mode (and/or prediction direction) 1940 of the current block. In the case where the positions of the prediction reference pixels, determined based on the prediction direction of the intra prediction mode and the reference pixels, are not integer positions, the encoder and the decoder may perform interpolation based on the reference pixels of integer positions to obtain the pixel values of the prediction reference pixels.

The encoder and the decoder may derive the offset value based on a difference between the pixel value of the reference pixel and the pixel value of the prediction reference pixel. For example, the offset value may correspond to a value obtained by dividing the difference between the pixel value of the reference pixel and the pixel value of the prediction reference pixel by 4.
After the offset value is derived, the encoder and the decoder may derive a pixel value of a filtered prediction pixel by adding the derived offset value to the pixel value of the prediction pixel 1920. The filtering process mentioned above may be represented by the following Equation 8.

Equation 8

Ref1 = prediction value of A
Delta = (A - Ref1 + 2) >> 2
B' = B + Delta

where B indicates the pixel value of the prediction pixel 1920, A indicates the pixel value of the reference pixel 1930, and Ref1 indicates the prediction value of the reference pixel A. In addition, B' indicates the pixel value of the filtered prediction pixel.

Although the process of determining whether or not the filtering is performed, the process of determining a filtering performing region, the process of determining a filter type, and the like, have been independently described, respectively, in the examples mentioned above, the encoder and the decoder may also combine these processes with one another to process them as a single process. In this case, the encoder and the decoder may determine at least two of the process of determining whether or not the filtering is performed, the process of determining the filtering performing region, and the process of determining the filter type based on a single table.

As an example, whether or not the filtering is performed, the filtering performing region, and the filter type according to the intra prediction mode may be represented by a single table. In this case, both the encoder and the decoder may have the same table stored therein and determine whether or not the filtering is performed, the filtering performing region, and the filter type based on the intra prediction mode and the table stored therein. The following Table 7 shows an example of the table showing whether or not the filtering is performed, the filtering performing region, and the filter type according to the intra prediction mode.

Table 7

Intra prediction mode  0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17
Filter type            0  0  1  0  0  2  2  0  3  3  0  0  3  3  0  0  3  3

Intra prediction mode  18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
Filter type            0  0  0  0  2  2  2  2  0  0  0  0  3  3  3  3  1

In Table 7, in the case where a value allocated to a filter type is 0, the filter type may indicate that the filtering is not performed on the prediction block. In addition, in the case where the value allocated to the filter type is 1, 2, or 3, the filter type may indicate that the filtering is performed on the prediction block.

In addition, in Table 7, in the case where the value allocated to the filter type is 1, the filter type may indicate that the filtering performing region and the filter type in the DC mode and/or the planar mode described above with reference to 1610 of Figure 16A are applied. In addition, in the case where the value allocated to the filter type is 2, the filter type may indicate that the filtering performing region and the filter type in the vertical right mode described above with reference to 1630 of Figure 16A are applied. In addition, in the case where the value allocated to the filter type is 3, the filter type may indicate that the filtering performing region and the filter type in the horizontal below mode described above with reference to 1650 of Figure 16B are applied.
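One way to realize the single-table decision described above is a small lookup keyed by the intra prediction mode, as sketched below for the Table 7 values; the enum and array names are illustrative assumptions, not part of the described equipment.

    /* Table-driven decision combining "filter on/off" and the filter type
     * per intra prediction mode, mirroring Table 7 above. */
    enum FilterType {
        FT_NONE = 0,        /* no filtering on the prediction block   */
        FT_DC_PLANAR = 1,   /* region/type of 1610 of Figure 16A      */
        FT_VERT_RIGHT = 2,  /* region/type of 1630 of Figure 16A      */
        FT_HORZ_BELOW = 3   /* region/type of 1650 of Figure 16B      */
    };

    static const unsigned char kFilterType[35] = {
        0, 0, 1, 0, 0, 2, 2, 0, 3, 3, 0, 0, 3, 3, 0, 0, 3, 3,
        0, 0, 0, 0, 2, 2, 2, 2, 0, 0, 0, 0, 3, 3, 3, 3, 1
    };

    enum FilterType filter_type_for_mode(int intra_pred_mode)
    {
        if (intra_pred_mode < 0 || intra_pred_mode > 34)
            return FT_NONE;  /* out-of-range mode: no filtering */
        return (enum FilterType)kFilterType[intra_pred_mode];
    }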
As another example, the table shown in Table 7 above may also include information on whether or not the filter is applied according to the block size. That is, the table including the information on whether or not the filter is applied, the filter application region, and the filter type according to the intra prediction mode may also include the information on whether or not the filter is applied according to the block size. In this case, both the encoder and the decoder may have the same table stored therein and determine whether or not the filtering is performed, the filtering performing region, and the filter type based on the intra prediction mode, the sizes of the current block (and/or the prediction block), and the table stored therein.

In the case where the sizes of the current block and/or the prediction block are excessively small or large, it may be preferable that the filtering is not performed on the prediction block. For example, in the case where the current block and/or the prediction block corresponds to a large block such as a block having a size of 32x32, the correlation between the pixels neighboring the current block and/or the pixels in the current block may be large. In this case, the filtering on the prediction block does not have an important meaning. Therefore, the encoder and the decoder may adaptively determine whether or not the filtering is performed according to the sizes of the current block and/or the prediction block to improve the filtering efficiency. The following Table 8 shows an example of a table configured considering the block size as well as the intra prediction mode as described above.

Table 8

Intra prediction mode   0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17
Block size  2x2         0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
            4x4         0  0  1  0  0  2  2  0  3  3  0  0  3  3  0  0  3  3
            8x8         0  0  1  0  0  2  2  0  3  3  0  0  3  3  0  0  3  3
            16x16       0  0  1  0  0  2  2  0  3  3  0  0  3  3  0  0  3  3
            32x32       0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
            64x64       0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

Intra prediction mode   18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
Block size  2x2         0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
            4x4         0  0  0  0  2  2  2  2  0  0  0  0  3  3  3  3  1
            8x8         0  0  0  0  2  2  2  2  0  0  0  0  3  3  3  3  1
            16x16       0  0  0  0  2  2  2  2  0  0  0  0  3  3  3  3  1
            32x32       0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
            64x64       0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

In Table 8, the values of 0, 1, 2, and 3 allocated to a filter type have the same meaning as those of Table 7. Referring to Table 8, the encoder and the decoder may determine whether or not the filtering is performed based on the sizes of the current block and/or the prediction block and determine whether or not the filtering is performed, the filtering performing region, the filter type, and the like, based on the intra prediction mode.

As another example, whether or not the filtering is performed, the filtering performing region, and the filter type according to the intra prediction mode may also be represented by the following Table 9.

Table 9

Intra prediction mode  0     1   2   3     4     5   6   7   8   9   10  11  12  13  14  15
Filtering region       T1L1  L2  T2  T1L1  T1L1  L1  L1  L4  L1  L1  T4  L1  0   L1  L1  T1
Filter type            a     b   b   a     a     c   c   d   c   c   d   c   0   e   e   c

Intra prediction mode  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33
Filtering region       0   T1  0   L1  L1  0   0   L1  L1  L1  L1  T1  0   T1  T1  T1  T1  T1
Filter type            0   e   0   c   c   0   0   e   e   c   c   c   0   c   c   e   e   c

Figure 20 is a diagram describing the intra prediction modes and the filter types applied to Table 9. 2010 of Figure 20 shows examples of the prediction directions of the intra prediction modes and the mode value allocated to each of the prediction directions.
Although the examples mentioned above have been described based on the intra prediction modes (prediction directions, mode values) shown in 410 of Figure 4A, it is assumed that the intra prediction modes (prediction directions, mode values) shown in 2010 of Figure 20 are used in the example of Table 9. However, the example of Table 9 is not limited to being applied to 2010 of Figure 20.

Referring to Table 9, in the case where a value allocated to a filtering region is 0 and/or in the case where a value allocated to a filter type is 0, the encoder and the decoder may not perform the filtering on the prediction block. On the other hand, in the case where the value allocated to the filtering region is not 0 and the value allocated to the filter type is not 0, the encoder and the decoder may perform the filtering on the prediction block.

Here, Tx allocated to the filter application region may indicate x horizontal pixel lines positioned in the uppermost part of the prediction block, that is, upper horizontal prediction pixel lines, and Lx allocated to the filter application region may indicate x vertical pixel lines positioned in the leftmost part of the prediction block, that is, left vertical prediction pixel lines. In addition, TxLx allocated to the filter application region may indicate a region including both the upper horizontal prediction pixel lines and the left vertical prediction pixel lines. In the example of Table 9, a value of x may be 1, 2, or 4. However, as another example, x may also be a predetermined fixed value. For example, x may always be 1. In this case, the upper horizontal prediction pixel line may include only one horizontal pixel line, and the left vertical prediction pixel line may include only one vertical pixel line.

As filter types that are not 0 in Table 9, there may be a, b, c, d, and e. In Table 9, in the case where a value allocated to the filter type is a, the encoder and the decoder may perform the filtering based on the filtering performing region and the filter type described above with reference to 1610 of Figure 16A. In this case, the encoder and the decoder may perform the filtering on the prediction pixels included in the upper horizontal prediction pixel line (one pixel line) and the left vertical prediction pixel line (one pixel line) based on the filter coefficients described above with reference to 1610 of Figure 16A.

In Table 9, in the case where the value allocated to the filter type is b, the encoder and the decoder may perform the filtering based on the filtering performing region and the filter type described above with reference to Figure 18. In the case where the prediction mode of the current block is the vertical mode (for example, the prediction mode having the mode value of 1), the encoder and the decoder may perform the filtering on the prediction pixels included in the left vertical prediction pixel line (for example, two pixel lines) as shown in 1810 of Figure 18. In addition, in the case where the prediction mode of the current block is the horizontal mode (for example, the prediction mode having the mode value of 2), the encoder and the decoder may perform the filtering on the prediction pixels included in the upper horizontal prediction pixel line (for example, two pixel lines) as shown in 1820 of Figure 18.

Meanwhile, in Table 9, in the case where the value allocated to the filter type is c and Tx is applied to the
filter application region, the encoder and the decoder may perform the filtering based on the filtering performing region and the filter type described above with reference to 1650 of Figure 16B. In this case, the encoder and the decoder may apply a [1,3] diagonal filter to the prediction pixels included in the upper horizontal prediction pixel line. In addition, in Table 9, in the case where the value allocated to the filter type is c and Lx is applied to the filter application region, the encoder and the decoder may perform the filtering based on the filtering performing region and the filter type described above with reference to 1630 of Figure 16A. In this case, the encoder and the decoder may apply a [1,3] diagonal filter to the prediction pixels included in the left vertical prediction pixel line.

In Table 9, when the intra prediction mode of the current block is 7 or 10, the value allocated to the filter type may be d. Referring to 2020 of Figure 20, a block 2023 may indicate a prediction block, and a prediction direction when the intra prediction mode of the current block is 10 may be represented by 2025. In this case, a filtered prediction value pixel may be represented by the following Equation 9.

Equation 9

p'[x,y] = ((16 - k) * p[x,y] + k * p[x,-1] + 8) >> 4, k = 1 << (3 - y), {x = 0..7, y = 0..3}

where p'[x,y] may indicate a filtered prediction value pixel, and p[x,y] may indicate a prediction value pixel of a position (x,y) before the filtering. In addition, p[x,-1] may indicate a reference pixel positioned on the same vertical line as the vertical line on which the prediction pixel is positioned among the above reference pixels. Referring to Equation 9, when the prediction mode of the current block is 10, the encoder and the decoder may perform the filtering on four horizontal pixel lines positioned in the uppermost part of the prediction block 2023. Even in the case where the prediction mode of the current block is 7, the encoder and the decoder may perform the filtering on four vertical pixel lines positioned in the leftmost part of the prediction block 2023 by a method similar to the method represented by Equation 9.

Again referring to 2020 of Figure 20, when the prediction mode of the current block is 24, the prediction direction may be represented as shown in 2027. In Table 9, when the prediction mode of the current block is 24, the value allocated to the filter type may be e. In the case where the intra prediction mode of the current block is 24, the filtered prediction value pixel may be represented by the following Equation 10.

Equation 10

p'[x,y] = p[x,y] + ((p[-1,y] - Rp[-1,y] + 2) >> 2), {x = 0, y = 0..7}

where p'[x,y] may indicate a filtered prediction value pixel, and p[x,y] may indicate a prediction value pixel of a position (x,y) before the filtering. In addition, p[-1,y] may indicate a reference pixel positioned on the same horizontal line as the horizontal line on which the prediction pixel is positioned among the left reference pixels. Rp[-1,y] may indicate a prediction value of the reference pixel p[-1,y], that is, a prediction reference pixel value. The encoder and the decoder may perform the prediction on the reference pixel p[-1,y] based on the same intra prediction mode as the prediction mode of the current block to derive the prediction reference pixel value.
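The type d and type e filters of Equations 9 and 10 might be sketched as follows; Equation 10 reuses the Delta mechanism of Equation 8 above. The function names and buffer layout are assumptions made for illustration and are not part of the described equipment.

    #include <stdint.h>

    /* Equation 9 (filter type d, shown here for mode 10): blend the four
     * top prediction rows with the above reference row; the weight k
     * halves with each row (8, 4, 2, 1), so the filtering decays with the
     * distance from the block boundary. */
    void filter_type_d_top(uint8_t *pred, int stride,
                           const uint8_t *ref_above /* p[x,-1] */)
    {
        for (int y = 0; y < 4; y++) {
            int k = 1 << (3 - y);
            for (int x = 0; x < 8; x++) {
                int p = pred[y * stride + x];
                pred[y * stride + x] =
                    (uint8_t)(((16 - k) * p + k * ref_above[x] + 8) >> 4);
            }
        }
    }

    /* Equation 10 (filter type e, shown here for mode 24): offset the left
     * prediction column by a quarter of (reference pixel minus its own
     * directional prediction).  ref_left[y] holds p[-1,y]; pred_ref_left[y]
     * holds Rp[-1,y], obtained by predicting the reference pixel with the
     * current block's intra prediction mode. */
    void filter_type_e_left(uint8_t *pred, int stride,
                            const uint8_t *ref_left,
                            const uint8_t *pred_ref_left)
    {
        for (int y = 0; y < 8; y++) {
            int delta = (ref_left[y] - pred_ref_left[y] + 2) >> 2;
            int v = pred[y * stride] + delta;
            pred[y * stride] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
    }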
In Table 9, even in the case where the intra prediction mode of the current block is 13, 17, 23, 31, or 32, the value allocated to the filter type may be e. In this case as well, the encoder and the decoder may perform the filtering by a method similar to the method represented by Equation 10.

In Table 9, the filter applied according to the value allocated to each filter type is not limited to the examples mentioned above. That is, the filter applied according to the value allocated to each filter type may be changed according to an implementation and/or as needed. In addition, whether or not the filter is applied may be set differently from the setting in the examples mentioned above.

Hereinafter, an example of a process of performing the filtering on a prediction pixel according to the exemplary embodiment of the present invention will be described in detail.

In an example to be described below, an input is IntraPredMode, nS, p[x,y] (x,y = -1..nS), and predSamples[x,y] (x,y = 0..nS-1), and an output is predSamplesF[x,y] (x,y = 0..nS-1). Here, IntraPredMode indicates a prediction mode of a current block, nS indicates a horizontal size and a vertical size of a prediction block, and p[x,y] (x,y = -1..nS) indicates a pixel value of a reference pixel positioned around the current block. In addition, predSamples[x,y] (x,y = 0..nS-1) indicates a prediction value pixel, and predSamplesF[x,y] (x,y = 0..nS-1) indicates a filtered prediction value pixel.

Here, whether or not the filtering is performed, the filtering performing region, and the filter type according to the intra prediction mode may be determined by the following Table 10.

Table 10

In Table 10, intraPostFilterType indicates information on a filter type applied to a prediction block. Here, the information on the filter type may also include all the information on whether or not the filtering is performed, the filtering performing region, and the filter type according to the intra prediction mode. In addition, intraPostFilterType may be represented by intraPostFilterType[IntraPredMode], which means that a value allocated to intraPostFilterType is determined by IntraPredMode.

In the case where nS is less than 32, the encoder and the decoder may derive predSamplesF[x,y] (x,y = 0..nS-1) by the following process according to the value allocated to intraPostFilterType[IntraPredMode].

If the value allocated to intraPostFilterType[IntraPredMode] is 1, the encoder and the decoder may derive the predSamplesF[x,y] value by the following Equation 11.

Equation 11

predSamplesF[0,0] = (p[-1,0] + 2 * predSamples[0,0] + p[0,-1] + 2) >> 2
predSamplesF[x,0] = (p[x,-1] + 3 * predSamples[x,0] + 2) >> 2, (x = 1..nS-1)
predSamplesF[0,y] = (p[-1,y] + 3 * predSamples[0,y] + 2) >> 2, (y = 1..nS-1)
predSamplesF[x,y] = predSamples[x,y], (x, y = 1..nS-1)

If the value allocated to intraPostFilterType[IntraPredMode] is 2, the encoder and the decoder may derive the predSamplesF[x,y] value by the following Equation 12.

Equation 12

predSamplesF[0,y] = (p[-1,y+1] + 3 * predSamples[0,y] + 2) >> 2, (y = 0..nS-1)
predSamplesF[x,y] = predSamples[x,y], (x = 1..nS-1, y = 0..nS-1)

If the value allocated to intraPostFilterType[IntraPredMode] is 3, the encoder and the decoder may derive the predSamplesF[x,y] value by the following Equation 13.
Equation 13

predSamplesF[x,0] = (p[x+1,-1] + 3 * predSamples[x,0] + 2) >> 2, (x = 0..nS-1)
predSamplesF[x,y] = predSamples[x,y], (x = 0..nS-1, y = 1..nS-1)

If the value allocated to intraPostFilterType[IntraPredMode] is 0, the encoder and the decoder may derive the predSamplesF[x,y] value by the following Equation 14.

Equation 14

predSamplesF[x,y] = predSamples[x,y], (x, y = 0..nS-1)

Meanwhile, the encoder and the decoder may differently set whether or not all the methods (for example, the filtering performing methods) described above are applied according to the size and/or the depth of the current block (and/or the prediction block). For example, the application of the present invention may be differently set according to the size of the PU and/or the size of the TU, or be differently set according to the depth value of the CU.

In this case, the encoder and the decoder may use the block size value and/or the block depth value as a variable for determining the application of the present invention. Here, the block may correspond to the CU, the PU, and/or the TU. As an example, in the case where the block size value is used as the variable, the encoder and the decoder may apply the present invention only to a block having a size equal to or greater than the variable. As another example, the encoder and the decoder may apply the present invention only to a block having a size less than or equal to the variable. Alternatively, the encoder and the decoder may apply the present invention only to a block having a size corresponding to the value of the variable.

The following Table 11 shows an example of the application of the present invention in the case where the block size value used as the variable for determining the application of the present invention is 16x16. In Table 11, O indicates that the present invention is applied to a corresponding block size, and X indicates that the present invention is not applied to a corresponding block size.

Table 11

Block size   Method A   Method B   Method C
32x32        O          X          X
16x16        O          O          O
8x8          X          O          X
4x4          X          O          X

Referring to Table 11, in the case of the method A, the encoder and the decoder may apply the present invention only to a block having a size equal to or greater than the block size (16x16) used as the variable. In the case of the method B, the encoder and the decoder may apply the present invention only to a block having a size less than or equal to the block size (16x16) used as the variable. In addition, in the case of the method C, the encoder and the decoder may apply the present invention only to a block having a size equal to the block size (16x16) used as the variable.
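Gathering Equations 11 to 14, the predSamplesF derivation might be sketched as below; the array names follow the text, while the signature and the shifted indexing of the reference arrays are assumptions made for this example. Note that the nS < 32 gate from the text is applied up front, so larger blocks fall back to the Equation 14 copy.

    #include <string.h>
    #include <stdint.h>

    /* p_above[i] = p[i-1,-1] and p_left[i] = p[-1,i-1], i.e., the reference
     * arrays are index-shifted by 1 so that the range -1..nS fits in 0..nS+1. */
    void derive_pred_samples_f(int type, int nS,
                               const uint8_t *p_above, const uint8_t *p_left,
                               const uint8_t *predSamples, uint8_t *predSamplesF)
    {
        /* Equation 14 / default: copy the prediction block unchanged. */
        memcpy(predSamplesF, predSamples, (size_t)nS * (size_t)nS);
        if (nS >= 32 || type == 0)
            return;

        if (type == 1) {            /* Equation 11: DC-mode style filtering */
            predSamplesF[0] = (uint8_t)((p_left[1] + 2 * predSamples[0]
                                         + p_above[1] + 2) >> 2);
            for (int x = 1; x < nS; x++)   /* top row, vertical 2-tap */
                predSamplesF[x] =
                    (uint8_t)((p_above[x + 1] + 3 * predSamples[x] + 2) >> 2);
            for (int y = 1; y < nS; y++)   /* left column, horizontal 2-tap */
                predSamplesF[y * nS] =
                    (uint8_t)((p_left[y + 1] + 3 * predSamples[y * nS] + 2) >> 2);
        } else if (type == 2) {     /* Equation 12: diagonal, left column */
            for (int y = 0; y < nS; y++)
                predSamplesF[y * nS] =
                    (uint8_t)((p_left[y + 2] + 3 * predSamples[y * nS] + 2) >> 2);
        } else if (type == 3) {     /* Equation 13: diagonal, top row */
            for (int x = 0; x < nS; x++)
                predSamplesF[x] =
                    (uint8_t)((p_above[x + 2] + 3 * predSamples[x] + 2) >> 2);
        }
    }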
Meanwhile, as an example, the value of the variable (the block size value and/or the block depth value) for determining the application of the present invention may be a predetermined fixed value. In this case, the value of the variable may be pre-stored in the encoder and the decoder, and the encoder and the decoder may determine the application of the present invention based on the value of the variable stored therein.

As another example, the value of the variable for determining the application of the present invention may also be changed according to a profile or a level. In the case where the value of the variable is determined based on the profile, the value of the variable corresponding to each profile may be a predetermined fixed value, and in the case where the value of the variable is determined based on the level, the value of the variable corresponding to each level may be a predetermined fixed value.

As still another example, the value of the variable (the block size value and/or the block depth value) for determining the application of the present invention may be determined by the encoder. In this case, the encoder may encode information on the value of the variable to transmit the encoded information to the decoder through a bitstream. The information on the value of the variable transmitted through the bitstream may be included in a sequence parameter set (SPS), a picture parameter set (PPS), a slice header, and the like. The decoder may derive the value of the variable from the received bitstream and determine the application of the present invention based on the derived value of the variable.

As an indicator used to indicate the information on the value of the variable, there may be several types of indicators. As an example, in the case where the method A is used in Table 11 and the value of the variable for determining the application of the present invention corresponds to the block size value, the indicator used to indicate the information on the value of the variable may be log2_intra_prediction_filtering_enable_max_size_minus2. For example, in the case where the value of the variable is 32x32, a value allocated to the indicator may be 3, and in the case where the value of the variable is 4x4, a value allocated to the indicator may be 0. As another example, in the case where the method A is used in Table 11 and the value of the variable for determining the application of the present invention corresponds to the depth value of the CU, the indicator used to indicate the information on the value of the variable may be intra_prediction_filtering_enable_max_cu_depth. In this case, for example, when a value allocated to the indicator is 0, the present invention may be applied to a block having a size equal to or greater than 64x64, when a value allocated to the indicator is 1, the present invention may be applied to a block having a size equal to or greater than 32x32, and when a value allocated to the indicator is 4, the present invention may be applied to a block having a size equal to or greater than 4x4.

However, the encoder may also determine that the present invention is not applied to all block sizes. In this case, the encoder may use a predetermined indicator to transmit determined information to the decoder. As an example, the encoder may allow an indicator such as intra_prediction_filtering_enable_flag to be included in the SPS, the PPS, the slice header, or the like, and then transmit the SPS, the PPS, the slice header, or the like, to the decoder. Here, intra_prediction_filtering_enable_flag may correspond to an indicator indicating whether or not the present invention is applied to all blocks in a sequence, a picture, and/or a slice. As another example, the encoder may also transmit information indicating that the present invention is not applied to all block sizes using an indicator (for example, intra_prediction_filtering_enable_max_cu_depth) indicating the information on the value of the variable described above.
In this case, as an example, the encoder may allocate a value (for example, 5) indicating an invalid (and/or disallowed) block size (for example, a size of 2x2) to the indicator, thereby making it possible to indicate that the present invention is not applied to all block sizes.

According to the examples of the present invention described above, the prediction error generated at the time of the intra prediction is reduced and the discontinuity between the blocks is minimized, thereby making it possible to improve the prediction efficiency and the coding efficiency.

In the exemplary embodiments mentioned above, although the methods have been described based on flowcharts as a series of steps or blocks, the present invention is not limited to a sequence of steps, and any step may be performed in a sequence different from, or simultaneously with, other steps as described above. In addition, it may be understood by those skilled in the art that the steps shown in a flowchart are non-exclusive, and that other steps may be included or one or more steps of a flowchart may be deleted without affecting the scope of the present invention.

The embodiments mentioned above include examples of various aspects. Although all possible combinations showing the various aspects are not described, it may be understood by those skilled in the art that other combinations may be made. Therefore, the present invention should be construed as including all other substitutions, changes, and modifications that fall within the scope of the following claims.
Claims (18)

1. Video decoding equipment, characterized in that it comprises: a prediction block generation unit configured to generate a prediction block by performing intra prediction on a current block in a DC mode and to perform filtering on filtering target pixels in the prediction block, wherein the filtering target pixels comprise at least one pixel located on a vertical pixel line at a leftmost side of the prediction block and at least one pixel located on a horizontal pixel line at an uppermost side of the prediction block; and a reconstructed block generation unit configured to generate a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block.

2. Video decoding equipment according to claim 1, characterized in that, when the filtering target pixel is located at an upper left side of the prediction block, the filtering target pixel is filtered by applying a 3-tap filter based on a value derived in the DC mode, an upper reference pixel adjacent to an upper part of the filtering target pixel, and a left reference pixel adjacent to a left part of the filtering target pixel, the upper reference pixel and the left reference pixel being reconstructed reference pixels, each adjacent to the current block, and, in the 3-tap filter, 2/4 being a filter coefficient assigned to a filter tap corresponding to the value derived in the DC mode, 1/4 being a filter coefficient assigned to a filter tap corresponding to the upper reference pixel, and 1/4 being a filter coefficient assigned to a filter tap corresponding to the left reference pixel.

3. Video decoding equipment according to claim 2, characterized in that the value derived in the DC mode is an average of reference pixels that have been decoded.

4. Video decoding equipment according to claim 1, characterized in that the filtering target pixels are filtered when the current block is a luminance component block, and the filtering target pixels are not filtered when the current block is a chrominance component block.

5. Video decoding equipment according to claim 1, characterized in that the filtering target pixels are filtered when the current block is smaller than 32x32.

6. Video decoding equipment according to claim 1, characterized in that the filtering target pixels are filtered based on a predetermined fixed filter shape, a filter tap, and a plurality of filter coefficients, regardless of the size of the current block.

7. Video decoding equipment according to claim 1, characterized in that the prediction block is generated using reference pixels that have been decoded, and the filtering target pixels are filtered using reference pixels that are adjacent to the vertical pixel line and the horizontal pixel line.

8. A method for decoding video, characterized in that it comprises: generating a prediction block by performing intra prediction on a current block in a DC mode and performing filtering on filtering target pixels in the prediction block, wherein the filtering target pixels include at least one pixel
located on a vertical pixel line at a leftmost side of the prediction block and at least one pixel located on a horizontal pixel line at an uppermost side of the prediction block; and generating a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block.

9. Video encoding equipment, characterized in that it comprises: a prediction block generation unit configured to generate a prediction block by performing intra prediction on a current block in a DC mode and to perform filtering on filtering target pixels in the prediction block, wherein the filtering target pixels include at least one pixel located on a vertical pixel line at a leftmost side of the prediction block and at least one pixel located on a horizontal pixel line at an uppermost side of the prediction block; and a reconstructed block generation unit configured to generate a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block.

10. A method for encoding video, characterized in that it comprises: generating a prediction block by performing intra prediction on a current block in a DC mode and performing filtering on filtering target pixels in the prediction block, wherein the filtering target pixels include at least one pixel located on a vertical pixel line at a leftmost side of the prediction block and at least one pixel located on a horizontal pixel line at an uppermost side of the prediction block; and generating a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block.

11. Video decoding equipment, characterized in that it comprises: a prediction block generation unit configured to generate a prediction block by performing prediction on a prediction target pixel in a current block based on an intra prediction mode of the current block; and a reconstructed block generation unit configured to generate a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block, wherein the prediction block generation unit performs the prediction on the prediction target pixel based on a first offset when the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on a left vertical pixel line, and performs the prediction on the prediction target pixel based on a second offset when the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on an upper horizontal pixel line, the left vertical pixel line being a vertical pixel line positioned at a leftmost side of the current block, and the upper horizontal pixel line being a horizontal pixel line positioned at an uppermost side of the current block.

12. Video decoding equipment according to claim 11, characterized in that the first offset is determined based on a difference value of pixel values of two reference pixels, and the second offset is determined based on a difference value of pixel values of two reference pixels.

13.
Video decoding equipment according to claim 11, characterized in that the prediction block generation unit derives a prediction value of the prediction target pixel by adding a value of the first offset to a pixel value of a first reference pixel present on the same vertical line as a vertical line on which the prediction target pixel is present, among reconstructed reference pixels adjacent to an upper part of the current block, when the intra prediction mode of the current block is the vertical mode and the prediction target pixel is the pixel on the left vertical pixel line, the value of the first offset being determined based on a difference value between a pixel value of a second reference pixel adjacent to a left part of the prediction target pixel and a pixel value of a third reference pixel adjacent to a left part of the first reference pixel.

14. Video decoding equipment according to claim 13, characterized in that the prediction block generation unit determines the pixel value of the first reference pixel as the prediction value of the prediction target pixel when the current block is a chrominance component block.

15. Video decoding equipment according to claim 11, characterized in that the prediction block generation unit derives a prediction value of the prediction target pixel by adding a value of the second offset to a pixel value of a first reference pixel present on the same horizontal line as a horizontal line on which the prediction target pixel is present, among reconstructed reference pixels adjacent to a left part of the current block, when the intra prediction mode of the current block is the horizontal mode and the prediction target pixel is the pixel on the upper horizontal pixel line, the value of the second offset being determined based on a difference value between a pixel value of a second reference pixel adjacent to an upper part of the prediction target pixel and a pixel value of a third reference pixel adjacent to an upper part of the first reference pixel.

16. Video decoding equipment according to claim 11, characterized in that the prediction block generation unit performs the prediction on the prediction target pixel based on the first offset and the second offset when the current block has a size smaller than 32x32.

17. A method for decoding video, characterized in that it comprises: generating a prediction block by performing prediction on a prediction target pixel in a current block based on an intra prediction mode of the current block; and generating a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block, wherein the prediction on the prediction target pixel is performed based on a first offset when the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on a left vertical pixel line, and the prediction on the prediction target pixel is performed based on a second offset when the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on an upper horizontal pixel line, the left vertical pixel line being a vertical pixel line positioned at a leftmost side of the current block, and the upper horizontal pixel line being a horizontal pixel line positioned at an uppermost side of the current block.
18. A method for encoding video, characterized in that it comprises: generating a prediction block by performing prediction on a prediction target pixel in a current block based on an intra prediction mode of the current block; and generating a reconstructed block based on the prediction block and a reconstructed residual block corresponding to the current block, wherein the prediction on the prediction target pixel is performed based on a first offset when the intra prediction mode of the current block is a vertical mode and the prediction target pixel is a pixel on a left vertical pixel line, and the prediction on the prediction target pixel is performed based on a second offset when the intra prediction mode of the current block is a horizontal mode and the prediction target pixel is a pixel on an upper horizontal pixel line, the left vertical pixel line being a vertical pixel line positioned at a leftmost side of the current block, and the upper horizontal pixel line being a horizontal pixel line positioned at an uppermost side of the current block.
类似技术:
公开号 | 公开日 | 专利标题 BR112013021229A2|2019-08-13|video encoding and decoding equipment and method TW201309026A|2013-02-16|Filtering blockiness artifacts for video coding