Video decoding and encoding methods and devices, computer software, non-transitory storage medium, data signal
Patent abstract:
VIDEO DECODING AND ENCODING METHODS AND APPARATUS, RECORDING MEDIA, DATA SIGNAL, AND VIDEO CAPTURE, DISPLAY, TRANSMISSION, RECEPTION AND/OR STORAGE EQUIPMENT. A method of encoding or decoding video using inter-image prediction to encode input video data in which each chrominance component has 1/Mth of the horizontal resolution of the luminance component and 1/Nth of the vertical resolution of the luminance component, where M and N are integers equal to 1 or more, comprises: storing one or more images preceding a current image; interpolating a higher-resolution version of prediction units of the stored images, so that the luminance component of an interpolated prediction unit has a horizontal resolution P times that of the corresponding portion of the stored image and a vertical resolution Q times that of the corresponding portion of the stored image, where P and Q are integers greater than 1; detecting inter-image motion between a current image and one or more interpolated stored images so as to generate motion vectors between a prediction unit of the current image and areas of the one or more preceding images; and generating a motion-compensated prediction of the prediction unit of the current image with respect to an area of an interpolated stored image pointed to by a respective motion vector; in which the interpolating step comprises: applying an xR horizontal and xS vertical interpolation filter to the chrominance components of a stored image to generate an interpolated chrominance prediction unit, where R is equal to (U x M x P) and S is equal to (V x N x Q), U and V being integers equal to 1 or more; and subsampling the interpolated chrominance prediction unit, so that its horizontal resolution is divided by a factor of U and its vertical resolution is divided by a factor of V, thereby resulting in a block of MP x NQ samples.
Publication number: BR112014026035A2
Application number: R112014026035-4
Filing date: 2013-04-26
Publication date: 2020-06-30
Inventors: James Alexander GAMEI; Nicholas Ian Saunders; Karl James SHARMAN; Paul James Silcock
Applicant: Sony Corporation
Primary IPC:
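As a worked illustration of the interpolation arithmetic in the abstract above, the following sketch computes R, S and the resulting block scale from M, N, P, Q, U and V (Python; the function name is hypothetical and the chosen parameter values are illustrative, with the 4:2:0 and 4:2:2 figures anticipating the x4 luma / x8 chroma interpolation described later in this document):

```python
def chroma_interp_factors(M, N, P, Q, U=1, V=1):
    """Return the chroma interpolation factors (R, S) from the abstract,
    and the block scale left after subsampling by U and V."""
    R = U * M * P          # horizontal interpolation factor
    S = V * N * Q          # vertical interpolation factor
    # After xR/xS interpolation, subsampling by U (horizontal) and V
    # (vertical) leaves a chroma block scaled by MP x NQ.
    return R, S, (M * P, N * Q)

# 4:2:0 (M=N=2) with quarter-sample luma interpolation (P=Q=4):
print(chroma_interp_factors(M=2, N=2, P=4, Q=4))            # (8, 8, (8, 8))

# 4:2:2 (M=2, N=1): the same x8 horizontal / x8 vertical filter can be
# reused by setting V=2 and then subsampling vertically by 2:
print(chroma_interp_factors(M=2, N=1, P=4, Q=4, U=1, V=2))  # (8, 8, (8, 4))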
Patent description:
[0001] The present application claims the benefit of the earlier filing dates of GB1211072.2, GB1211073.0 and GB1207459.7, filed at the UK Intellectual Property Office on 22 June 2012, 22 June 2012 and 26 April 2012 respectively, the entire contents of which applications are incorporated herein by reference. [0002] This description relates to data encoding and decoding. Description of the Related Art [0003] The "background" description provided here is for the purpose of generally presenting the context of the description. Work of the presently named inventors, to the extent that it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, is neither expressly nor implicitly admitted as prior art against the present description. [0004] There are several systems for encoding and decoding video data that involve transforming video data into a frequency domain representation, quantizing the frequency domain coefficients and then applying some form of entropy coding to the quantized coefficients. This can achieve compression of the video data. A corresponding decoding or decompression technique is applied to recover a reconstructed version of the original video data. [0005] Current video codecs (encoder-decoders) such as those used in H.264/MPEG-4 Advanced Video Coding (AVC) achieve data compression mainly by encoding only the differences between successive video frames. These codecs use a regular arrangement of so-called macroblocks, each of which is used as a region of comparison with a corresponding macroblock in a previous video frame, and the image region within the macroblock is then encoded according to the degree of motion found between the current macroblock and the corresponding preceding macroblock in the video sequence, or between neighbouring macroblocks within a single frame of the video sequence. [0006] High Efficiency Video Coding (HEVC), also known as H.265 or MPEG-H Part 2, is a proposed successor to H.264/MPEG-4 AVC. HEVC is intended to improve video quality and double the data compression ratio compared to H.264, and to be scalable from 128x96 to 7680x4320 pixels resolution, roughly equivalent to bit rates ranging from 128 kbit/s to 800 Mbit/s. [0007] In HEVC, a so-called 4:2:0 block structure is proposed for consumer equipment, in which the amount of data used in each chroma channel is one quarter of that in the luma channel. This is because subjectively people are more sensitive to variations in brightness than to variations in color, and so it is possible to use greater compression and/or less information in the color channels without a subjective loss of quality. [0008] HEVC replaces the macroblocks found in existing H.264 and MPEG standards with a more flexible scheme based on coding units (CUs), which are structures of variable size. [0009] Consequently, when encoding the image data in video frames, the CU sizes can be selected responsive to the apparent image complexity or to detected levels of motion, rather than using evenly distributed macroblocks. [00011] In addition, PU and TU blocks are provided for each of three channels: luma (Y), being a luminance or brightness channel, which may be thought of as a gray scale channel, and two color difference or chrominance (chroma) channels, Cb and Cr. These channels provide the color for the gray scale image of the luma channel.
The terms Y, luminance and luma are used interchangeably in this description, and similarly the terms Cb and Cr, chrominance and chroma, are used interchangeably as appropriate, noting that chrominance or chroma can be used generically for "one or both of Cr and Cb", while when a specific chrominance channel is being discussed it will be identified by the term Cb or Cr. [00012] PUs are generally considered to be channel independent, except that a PU has a luma part and a chroma part. This generally means that the samples forming part of the PU for each channel represent the same region of the image, so that there is a fixed relationship between the PUs across the three channels. For example, for 4:2:0 video, an 8x8 PU for luma always has a corresponding 4x4 PU for chroma, with the chroma parts of the PU representing the same area as the luma part, but containing a smaller number of pixels because of the subsampled nature of the 4:2:0 chroma data compared to the luma data in 4:2:0 video. The two chroma channels share intra-prediction information; and the three channels share inter-prediction information. [00013] However, for professional broadcasting and digital cinema equipment, it is desirable to have less compression (or more information) in the chroma channels, and this can affect how current and proposed HEVC processing operates. SUMMARY [00014] The present description addresses or reduces problems arising from this processing. [00015] Aspects and respective features of the present description are defined in the appended claims. [00016] It is to be understood that both the foregoing general description and the following detailed description are exemplary, but are not restrictive of, the present technology. BRIEF DESCRIPTION OF THE DRAWINGS [00017] A more complete appreciation of the description and many of the advantages arising from it will be readily obtained as it becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, in which: Figure 1 schematically illustrates an audio/video (A/V) data transmission and reception system using video data compression and decompression; Figure 2 schematically illustrates a video display system using video data decompression; Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression; Figure 4 schematically illustrates a video camera using video data compression; [00018] Figure 14 schematically illustrates a set of possible intra-prediction directions; Figure 15 schematically illustrates a set of prediction modes; Figure 16 schematically illustrates an up-right diagonal scan; Figure 17 schematically illustrates a video compression apparatus; Figures 18a and 18b schematically illustrate possible block sizes; Figure 19 schematically illustrates the use of colocated information from chroma and luma blocks; Figure 20 schematically illustrates a situation in which colocated information from one chroma channel is used in relation to another chroma channel; Figure 21 schematically illustrates pixels used for an LM-CHROMA mode;
Figure 22 schematically illustrates a set of luma prediction directions; Figure 23 schematically illustrates the directions of Figure 22, as applied to a horizontally sparse chroma channel; Figure 24 schematically illustrates the directions of Figure 22 mapped to a rectangular chroma pixel array; Figures 25-28 schematically illustrate luma and chroma pixel interpolation; Figures 29a and 29b schematically illustrate quantization parameter tables for 4:2:0 and 4:2:2; and Figures 30 and 31 schematically illustrate quantization variation tables. DESCRIPTION OF THE PREFERRED EMBODIMENTS [00019] Referring now to the drawings, Figures 1-4 are provided to give schematic illustrations of apparatus or systems using the compression and/or decompression apparatus to be described below with respect to the embodiments of the present technology. [00020] All of the data compression and/or decompression apparatus to be described below can be implemented in hardware, in software running on a general-purpose data processing device such as a general-purpose computer, in programmable hardware such as an application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA), or as combinations of these. In cases where the embodiments are implemented by software and/or firmware, it will be appreciated that such software and/or firmware, and the non-transitory data storage media by which such software and/or firmware are stored or otherwise provided, are considered as embodiments of the present technology. [00022] An incoming audio/video signal 10 is provided to a video data compression apparatus 20 which compresses at least the video component of the audio/video signal 10 for transmission along a transmission route 30 such as a cable, an optical fiber, a wireless link, or the like. The compressed signal is processed by a decompression apparatus 40 to provide an output audio/video signal 50. For the return path, a compression apparatus 60 compresses an audio/video signal for transmission along the transmission route 30 to a decompression apparatus 70. [00023] The compression apparatus 20 and decompression apparatus 70 can therefore form one node of a transmission link. The decompression apparatus 40 and compression apparatus 60 can form another node of the transmission link. Of course, in cases where the transmission link is unidirectional, one of the nodes would require only a compression apparatus and the other node would require only a decompression apparatus. [00024] Figure 2 schematically illustrates a video display system using video data decompression. In particular, a compressed audio/video signal 100 is processed by a decompression apparatus 110 to provide a decompressed signal that can be displayed on a display. [00025] Figure 3 schematically illustrates an audio/video storage system using video data compression and decompression. An input audio/video signal 130 is provided to a compression apparatus 140 that generates a compressed signal for storage by a storage device 150 such as a magnetic disk device, an optical disk device, a magnetic tape device, a solid-state storage device such as a semiconductor memory, or another storage device. For playback, compressed data are read from the storage device 150 and passed to a decompression apparatus 160 for decompression to provide an output audio/video signal 170.
[00026] It will be appreciated that the compressed or encoded signal, and a storage medium storing that signal, are considered to be embodiments of the present description. [00027] Figure 4 schematically illustrates a video camera using video data compression. In Figure 4, an image capture device 180, such as a charge-coupled device (CCD) image sensor and associated control and read-out electronics, generates a video signal that is passed to a compression apparatus 190. A microphone (or plural microphones) 200 generates an audio signal to be passed to the compression apparatus 190. The compression apparatus 190 generates a compressed audio/video signal 210 to be stored and/or transmitted (shown generically as a schematic stage 220). [00028] The techniques to be described below relate mainly to video data compression and decompression. It will be appreciated that many existing techniques can be used for audio data compression alongside the video data compression techniques to be described, to generate a compressed audio/video signal. Therefore, a separate discussion of audio data compression will not be provided. It will also be appreciated that the data rate associated with video data, in particular broadcast-quality video data, is generally much higher than the data rate associated with audio data (whether compressed or uncompressed). It will therefore be appreciated that uncompressed audio data could accompany compressed video data to form a compressed audio/video signal. It will be further appreciated that although the present examples (shown in Figures 1-4) relate to audio/video data, the techniques to be described below may find use in a system that simply handles (that is, compresses, decompresses, stores, displays and/or transmits) video data. That is, the embodiments can apply to video data compression without necessarily having any associated audio data handling at all. [00029] Figure 5 provides a schematic overview of a video data compression and decompression apparatus. [00030] A controller 343 controls the overall operation of the apparatus and, in particular when referring to a compression mode, controls the trial encoding processes (to be described below), acting as a selector to select various modes of operation such as CU, PU and TU block sizes and whether the video data are to be encoded losslessly or otherwise. Successive images of an input video signal 300 are provided to an adder 310 and to an image predictor 320. The image predictor 320 will be described in more detail below with reference to Figure 6. The adder 310 in fact performs a subtraction (negative addition) operation, in that it receives the input video signal 300 at a "+" input and the output of the image predictor 320 at a "-" input, so that the predicted image is subtracted from the input image. The result is to generate a so-called residual image signal 330 representing the difference between the actual and predicted images. [00031] One reason why a residual image signal is generated is as follows. The data encoding techniques to be described, that is, the techniques that will be applied to the residual image signal, tend to work more efficiently when there is less "energy" in the image to be encoded. Here, the term "efficiently" refers to the generation of a small amount of encoded data; for a particular image quality level, it is desirable (and considered "efficient") to generate as little data as is practicably possible.
The reference to "energy" in the residual image relates to the amount of information contained in the residual image. If the predicted image were to be identical to the real image, the difference between the two (that is, the residual image) would contain zero information (zero energy) and would be very easy to encode into a small amount of encoded data. In general, if the prediction process can be made to work reasonably well, the expectation is that the residual image data will contain less information (less energy) than the input image and thus it will be easier to encode into a small amount of encoded data. . [00032] [00032] The rest of the device acting as an encoder (to encode the residual or difference image) will now be described. Residual image data 330 is provided to a transform unit 340, which generates a discrete cosine transform (DCT) representation of the residual image data. The DCT technique itself is well known and will not be described in detail here. However, there are aspects of the techniques used in the present apparatus that will be described in more detail below, in particular concerning the selection of different blocks of data to which the DCT operation is applied. These will be discussed with reference to Figures 7-12 below. [00034] [00034] A data scanning process is applied by a 360 scanning unit. The purpose of the scanning process is to rearrange the quantized transformed data so as to join as much as possible of the non-zero quantized transformed coefficients together, and certainly therefore to join as much as possible of the coefficients of zero value together. These characteristics can allow so-called running length coding techniques or the like to be applied effectively. Thus, the scanning process involves selecting coefficients from the quantized transformed data, and in particular from a block of coefficients corresponding to a block of image data that have been transformed and quantized, according to a "scan order", so that ( a) all coefficients are selected once as part of the scan, and (b) the scan tends to provide the desired reordering. An example scan order that may tend to give useful results is called an up-right diagonal scan order. [00035] [00035] The scanned coefficients are then passed to an entropy encoder (EE) 370. Again, several types of entropy coding. can be used. Two examples are variants of the so-called. CABAC (Context-Adaptive Binary Arithmetic Coding) and variants of the so-called Context-Adaptive Variable Length Coding system. In general terms, CABAC is considered to provide better efficiency, and in some studies it has been shown to provide a 10-20% reduction in the amount of output data encoded for comparable image quality compared to CAVLC. However, CAVLC is considered to represent a much lower level of complexity (in terms of its implementation) than CABAC. Note that the scanning process and the entropy coding process are shown as separate processes, but in reality they can be combined or treated together. That is, the data reading in the entropy encoder can happen in the scan order. Corresponding considerations apply to the respective reverse processes to be described below. Note that the current HEVC documents under consideration at the time of filing no longer include the possibility of a CAVLC coefficient encoder. 
[00036] The output of the entropy encoder 370, along with additional data (mentioned above and/or discussed below), for example defining the manner in which the predictor 320 generated the predicted image, provides a compressed output video signal 380. [00037] However, a return path is also provided because the operation of the predictor 320 itself depends upon a decompressed version of the compressed output data. [00038] The reason for this feature is as follows. At the appropriate stage in the decompression process (to be described below), a decompressed version of the residual data is generated. This decompressed residual data has to be added to a predicted image to generate an output image (because the original residual data was the difference between the input image and a predicted image). In order that this process is comparable, as between the compression side and the decompression side, the predicted images generated by the predictor 320 should be the same during the compression process and during the decompression process. Of course, at decompression, the apparatus does not have access to the original input images, but only to the decompressed images. Therefore, at compression, the predictor 320 bases its prediction (at least for inter-image encoding) on decompressed versions of the compressed images. [00039] The entropy encoding process carried out by the entropy encoder 370 is considered to be "lossless", that is to say, it can be reversed to arrive at exactly the same data which was first supplied to the entropy encoder 370. So, the feedback path can be implemented before the entropy encoding stage. Indeed, the scanning process carried out by the scan unit 360 is also considered lossless, but in the present embodiment the return path 390 is from the output of the quantizer 350 to the input of a complementary inverse quantizer 420. [00040] In general terms, an entropy decoder 410, the inverse scan unit 400, an inverse quantizer 420 and an inverse transform unit 430 provide the respective inverse functions of the entropy encoder 370, the scan unit 360, the quantizer 350 and the transform unit 340. For now, the discussion will continue through the compression process; the process to decompress an input compressed video signal will be discussed separately below. [00041] In the compression process, the scanned coefficients are passed via the return path 390 from the quantizer 350 to the inverse quantizer 420, which carries out the inverse operation of the scan unit 360. An inverse quantization and inverse transformation process is carried out by the units 420, 430 to generate a compressed-decompressed residual image signal 440. [00043] Turning now to the process applied to decompress a received compressed video signal 470, the signal is supplied to the entropy decoder 410 and from there to the chain of the inverse scan unit 400, the inverse quantizer 420 and the inverse transform unit 430 before being added to the output of the image predictor 320 by the adder 450. In straightforward terms, the output 460 of the adder 450 forms the decompressed output video signal 480. In practice, further filtering may be applied before the signal is output. [00044] So, the apparatus of Figures 5 and 6 can act as a compression apparatus or as a decompression apparatus. The functions of the two types of apparatus overlap very heavily.
The scan unit 360 and entropy encoder 370 are not used in a decompression mode, and the operation of the predictor 320 (which will be described in detail below) and of the other units follows mode and parameter information contained in, or otherwise associated with, the received compressed bitstream rather than generating such information itself. [00045] Figure 6 schematically illustrates the generation of predicted images, and in particular the operation of the image predictor 320. [00046] There are two basic modes of prediction carried out by the image predictor 320: so-called intra-image prediction and so-called inter-image prediction, or motion-compensated (MC) prediction. At the encoder side, each involves detecting a prediction direction in respect of a current block to be predicted, and generating a predicted block of samples according to other samples (in the same (intra) or another (inter) image). By virtue of the units 310 or 450, the difference between the predicted block and the actual block is encoded or applied so as to encode or decode the block respectively. [00048] Motion-compensated prediction is an example of inter-image prediction, and makes use of motion information which attempts to define the source, in another adjacent or nearby image, of the image detail to be encoded in the current image. Accordingly, in an ideal example, the contents of a block of image data in the predicted image can be encoded very simply as a reference (a motion vector) pointing to a corresponding block at the same or a slightly different position in an adjacent image. [00049] Returning to Figure 6, two image prediction arrangements (corresponding to intra- and inter-image prediction) are shown, the results of which are selected by a multiplexer 500 under the control of a mode signal 510 so as to provide blocks of the predicted image for supply to the adders 310 and 450. The choice is made in dependence upon which selection gives the lowest "energy" (which, as discussed above, may be considered as information content requiring encoding), and the choice is signaled to the decoder within the encoded output data stream. Image energy, in this context, can be detected, for example, [00050] The actual prediction, in the intra-encoding system, is made on the basis of image blocks received as part of the signal 460, that is to say, the prediction is based upon encoded-decoded image blocks in order that exactly the same prediction can be made at a decompression apparatus. However, data can be derived from the input video signal 300 by an intra-mode selector 520 to control the operation of the intra-image predictor 530. [00051] For inter-image prediction, a motion-compensated (MC) predictor 540 uses motion information such as motion vectors derived by a motion estimator 550 from the input video signal 300. Those motion vectors are applied to a processed version of the reconstructed image 460 by the motion-compensated predictor 540 to generate blocks of the inter-image prediction. [00052] Accordingly, the units 530 and 540 (operating with the estimator 550) each act as a detector to detect a prediction direction in respect of a current block to be predicted, and as a generator to generate a predicted block of samples (forming part of the prediction passed to the units 310 and 450) according to other samples defined by the prediction direction. [00053] The processing applied to the signal 460 will now be described. Firstly, the signal is filtered by a filter unit 560, which will be described in greater detail below.
This involves applying a "deblocking" filter to remove, or at least tend to reduce, the effects of the block-based processing carried out by the transform unit. [00054] Adaptive filtering represents in-loop filtering for image restoration. An LCU can be filtered by up to 16 filters, with a choice of filter and an ALF on/off state being derived in respect of each CU within the LCU. Currently the control is at the LCU level, not the CU level. [00055] The filtered output from the filter unit 560 in fact forms the output video signal 480 when the apparatus is operating as a compression apparatus. It is also buffered in one or more image or frame stores 570; the storage of successive images is a requirement of motion-compensated prediction processing, and in particular the generation of motion vectors. To save on storage requirements, the stored images in the image stores 570 may be held in a compressed form and then decompressed for use in generating motion vectors. For this particular purpose, any known compression/decompression system may be used. The stored images are passed to an interpolation filter 580 which generates a higher-resolution version of the stored images; in this example, intermediate samples (sub-samples) are generated such that the resolution of the interpolated image output by the interpolation filter 580 is 4 times (in each dimension) that of the images stored in the image stores 570 for the luminance channel of 4:2:0, and 8 times (in each dimension) that of the images stored in the image stores 570 for the chrominance channels of 4:2:0. The interpolated images are passed as an input to the motion estimator 550 and also to the motion-compensated predictor 540. [00056] In embodiments of the description, a further optional stage is provided, which is to multiply the data values of the input video signal by a factor of four using a multiplier 600 (effectively just shifting the data values left by two bits), and to apply a corresponding divide operation (shift right by two bits) at the output of the apparatus using a divider or right-shifter 610. So the shifting left and shifting right changes the data purely for the internal operation of the apparatus. This measure can provide for higher calculation accuracy within the apparatus, as the effect of any data rounding errors is reduced. [00057] The way in which an image is partitioned for compression processing will now be described. At a basic level, an image to be compressed is considered as an array of blocks of samples. For the purposes of the present discussion, the largest such block under consideration is the so-called largest coding unit (LCU) 700, which represents a square array of typically 64x64 samples (the LCU size is configurable by the encoder, up to a maximum size as defined by the HEVC documents). Here, the discussion relates to luminance samples. Depending on the chrominance mode, such as 4:4:4, 4:2:2, 4:2:0 or 4:4:4:4 (GBR plus key data), there will be differing numbers of corresponding chrominance samples corresponding to the luminance block. [00058] Three basic types of blocks will be described: coding units, prediction units and transform units. In general terms, the recursive subdivision of the LCUs allows an input picture to be partitioned in such a way that both the block sizes and the block coding parameters (such as prediction or residual coding modes) can be set according to the specific characteristics of the image to be encoded.
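As an illustration of the dependence on the chrominance mode just mentioned, the following sketch (Python; the function name is hypothetical, and the 4:4:4:4 key channel is omitted) computes the number of samples per chroma channel corresponding to a given luminance block:

```python
def chroma_samples_for_luma_block(width, height, mode):
    """Samples per chroma channel corresponding to a luma block, for the
    chrominance modes discussed above."""
    horiz_div, vert_div = {
        "4:4:4": (1, 1),   # chroma at full resolution
        "4:2:2": (2, 1),   # chroma subsampled horizontally only
        "4:2:0": (2, 2),   # chroma subsampled horizontally and vertically
    }[mode]
    return (width // horiz_div) * (height // vert_div)

# For a 64x64-sample LCU:
for mode in ("4:4:4", "4:2:2", "4:2:0"):
    print(mode, chroma_samples_for_luma_block(64, 64, mode))
# 4:4:4 4096, 4:2:2 2048, 4:2:0 1024 samples per chroma channel
```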
[00059] The LCU can be subdivided into so-called coding units (CUs). Coding units are always square and have a size between 8x8 samples and the full size of the LCU 700. The coding units can be arranged as a kind of tree structure, so that a first subdivision may take place as shown in Figure 8, giving coding units 710 of 32x32 samples; subsequent subdivisions may then take place on a selective basis so as to give some coding units 720 of 16x16 samples (Figure 9) and potentially some coding units 730 of 8x8 samples (Figure 10). Overall, this process can provide a content-adapting coding tree structure of CU blocks, each of which may be as large as the LCU or as small as 8x8 samples. Encoding of the output video data takes place on the basis of the coding unit structure, that is to say, one LCU is encoded, and then the process moves to the next LCU, and so on. [00060] Figure 11 schematically illustrates an arrangement of prediction units (PUs). A prediction unit is a basic unit for carrying information relating to the image prediction processes, or in other words the additional data added to the entropy-encoded residual image data to form the output video signal from the apparatus of Figure 5. In general, prediction units are not restricted to being square in shape. They can take other shapes, in particular rectangular shapes forming half of one of the square coding units (for example, 8x8 CUs can have 8x4 or 4x8 PUs). Employing PUs which align with image features is not a compulsory part of the HEVC system, but the general aim would be to allow a good encoder to align the boundary of adjacent prediction units so as to match (as closely as possible) the boundary of real objects in the picture, so that different prediction parameters can be applied to different real objects. Each coding unit may contain one or more prediction units. [00061] Figure 12 schematically illustrates an arrangement of transform units (TUs). A transform unit is a basic unit of the transform and quantization process. Transform units may or may not be square and can take a size from 4x4 up to 32x32 samples. Each coding unit may contain one or more transform units. The acronym SDIP-P in Figure 12 signifies a so-called short-distance intra-prediction partition. In this arrangement, only one-dimensional transforms are used, so a 4xN block is passed through N transforms, with the input data for the transforms being based upon the previously decoded neighbouring blocks and the previously decoded neighbouring lines within the current SDIP-P. SDIP-P is not currently included in HEVC at the time of filing the present application. [00062] As mentioned above, coding is carried out as one LCU, then a next LCU, and so on. Within an LCU, coding is carried out CU by CU. Within a CU, coding is carried out for one TU, then a next TU, and so on. [00063] The intra-prediction process will now be discussed. In general terms, intra-prediction involves generating a prediction of a current block (a prediction unit) of samples from previously encoded and decoded samples in the same image. Figure 13 schematically illustrates a partially encoded image 800. Here, the image is being encoded from top-left to bottom-right on an LCU basis. An example LCU encoded partway through the handling of the whole image is shown as a block 810. A shaded region 820 above and to the left of the block 810 has already been encoded. The intra-image prediction of the contents of the block 810 can make use of any of the shaded area 820, but cannot make use of the non-shaded area below it.
Note, however, that for an individual TU within the current LCU, the hierarchical order of encoding (CU by CU then TU by TU) discussed above means that there may be previously encoded samples in the current LCU, available to the coding of that TU, which are, for example, above-right or below-left of that TU. [00064] The block 810 represents an LCU; as discussed above, for the purposes of intra-image prediction processing, this may be subdivided into a set of smaller prediction units and transform units. An example of a current TU 830 is shown within the LCU 810. [00065] The intra-image prediction takes into account samples encoded before the current TU is considered, such as those above and/or to the left of the current TU. The source samples, from which the required samples are predicted, may be located at different positions or directions relative to the current TU. To decide which direction is appropriate for a current prediction unit, the mode selector 520 of an example encoder may test all combinations of available TU structures for each candidate direction and select the PU direction and TU structure with the best compression efficiency. [00066] The image may also be encoded on a "slice" basis. In one example, a slice is a horizontally adjacent group of LCUs. But in more general terms, the entire residual image could form a slice, or a slice could be a single LCU, or a slice could be a row of LCUs, and so on. Slices can give some resilience to errors as they are encoded as independent units. The encoder and decoder states are completely reset at a slice boundary. For example, intra-prediction is not carried out across slice boundaries; slice boundaries are treated as image boundaries for this purpose. [00068] In general terms, after detecting a prediction direction in respect of each prediction unit, the systems are operable to generate a predicted block of samples according to other samples defined by the prediction direction. [00069] Figure 16 schematically illustrates a so-called up-right diagonal scan, being an example scan pattern which may be applied by the scan unit 360. In Figure 16, the pattern is shown for an example block of 8x8 DCT coefficients, with the DC coefficient being positioned at the top-left position 840 of the block, and increasing horizontal and vertical spatial frequencies being represented by coefficients at increasing distances downwards and to the right of the top-left position 840. Other alternative scan orders may be used instead. [00070] Variations of the block arrangements and of the CU, PU and TU structures will be discussed below. [00072] The deblocking filter 1000 attempts to reduce distortion and to improve visual quality and prediction performance by smoothing the sharp edges which can form between CU, PU and TU boundaries when block coding techniques are used. [00073] The SAO (sample adaptive offset) filter 1010 classifies reconstructed pixels into different categories and then attempts to reduce distortion by simply adding an offset for each category of pixels. The pixel intensity and edge properties are used for pixel classification. To further improve the coding efficiency, a picture can be divided into regions for localization of the offset parameters. [00074] The ALF 1020 attempts to restore the compressed picture such that the difference between the reconstructed and source frames is minimized. The coefficients of the ALF are calculated and transmitted on a frame basis. The ALF can be applied to the entire frame or to local areas.
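The category-plus-offset idea of the SAO filter described above can be sketched for its band-offset variant as follows (Python with NumPy; a simplified illustration only — the 32-band classification and the four consecutive signalled offsets follow the HEVC design, but the function name and all sample values are invented):

```python
import numpy as np

def sao_band_offset(recon, offsets, start_band, bit_depth=8):
    """Illustrative SAO band offset: classify each reconstructed pixel
    into one of 32 equal intensity bands and add the offset signalled
    for its band (offsets cover four consecutive bands)."""
    band = recon >> (bit_depth - 5)        # top 5 bits select one of 32 bands
    out = recon.astype(int)
    for i, off in enumerate(offsets):
        out[band == (start_band + i)] += off
    return np.clip(out, 0, (1 << bit_depth) - 1)

recon = np.array([12, 60, 61, 200, 255])
print(sao_band_offset(recon, offsets=[2, -1, 0, 3], start_band=1))
# [ 14  60  61 200 255] -- only pixels in band 1 (values 8..15) are adjusted
```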
[00075] As noted above, the proposed HEVC documents use a particular chroma sampling scheme known as the 4:2:0 scheme. The 4:2:0 scheme can be used for domestic/consumer equipment. However, several other schemes are possible. [00076] In particular, a so-called 4:4:4 scheme would be suitable for professional broadcasting, mastering and digital cinema, and in principle would have the highest quality and data rate. [00079] In addition, other schemes include the 4:0:0 monochrome scheme. [00080] In the 4:4:4 scheme, each of the three channels Y, Cb and Cr has the same sample rate. In principle, therefore, in this scheme there would be twice as much chroma data as luma data. [00081] Consequently in HEVC, in this scheme, each of the three channels Y, Cb and Cr would have corresponding TU and PU blocks which are the same size; for example, an 8x8 luma block would have corresponding 8x8 chroma blocks for each of the two chroma channels. [00082] Consequently, in this scheme there would generally be a direct 1:1 relationship between block sizes in each channel. [00083] In the 4:2:2 scheme, the two chroma components are sampled at half the sample rate of luma (for example, using vertical or horizontal subsampling, but for the purposes of the present description, horizontal subsampling is assumed). In principle, therefore, in this scheme there would be as much chroma data as luma data, though the chroma data would be split between the two chroma channels. [00084] Consequently in HEVC, in this scheme, the Cb and Cr channels would have PU and TU blocks of a different size from those of the luma channel; for example, an 8x8 luma block could have corresponding chroma blocks 4 samples wide x 8 samples high for each chroma channel. [00085] Notably, therefore, in this scheme the chroma blocks could be non-square, even though they correspond to square luma blocks. [00086] In the currently proposed HEVC 4:2:0 scheme, the two chroma components are sampled at a quarter of the sample rate of luma (for example, using both vertical and horizontal subsampling). In principle, therefore, in this scheme there is half as much chroma data as luma data, the chroma data being split between the two chroma channels. [00087] Consequently in HEVC, in this scheme, the Cb and Cr channels again have PU and TU blocks of a different size from those of the luma channel. For example, an 8x8 luma block would have corresponding 4x4 chroma blocks for each chroma channel. [00088] The above schemes are known colloquially in the art as "channel ratios", as in "a 4:2:0 channel ratio"; however, it will be appreciated from the above description that in fact this does not always mean that the Y, Cb and Cr channels are compressed or otherwise provided in that ratio. Accordingly, although referred to as a channel ratio, this should not be assumed to be literal. In fact, the correct ratios for the 4:2:0 scheme are 4:1:1 (the ratios for the 4:2:2 scheme and the 4:4:4 scheme are in fact correct). [00089] Before discussing particular arrangements with reference to Figures 18a and 18b, some general terminology will be summarized or revisited. [00090] A Largest Coding Unit (LCU) is a root image object.
Typically, it covers the area equivalent to 64x64 luma pixels. [00091] The CUs at the ends of the tree hierarchy, that is, the smallest CUs resulting from the recursive splitting process (which may be referred to as leaf CUs), are then split into Prediction Units (PUs). The three channels (luma and the two chroma channels) have the same PU structure, except when the corresponding PU for a chroma channel would have too few samples, in which case only one PU for that channel is available. This is configurable, but generally the minimum dimension of an intra PU is 4 samples, and the minimum size of an inter PU is 4 luma samples (or 2 chroma samples for 4:2:0). The restriction on the minimum CU size means that it is always large enough for at least one PU for any channel. [00092] The leaf CUs are also split into Transform Units (TUs). The TUs can be - and, when they are too large (for example, larger than 32x32 samples), must be - split into further TUs. A limit applies so that TUs can be split down to a maximum tree depth, currently configured as 2 levels, that is to say, there can be no more than 16 TUs for each CU. An illustrative smallest allowable TU size is 4x4 samples, and the largest allowable TU size is 32x32 samples. Again, the three channels have the same TU structure wherever possible, but if a TU cannot be split to a particular depth for a given channel due to the size restriction, it remains at the larger size. The so-called non-square quad-tree transform (NSQT) arrangement is similar, but the splitting into four TUs need not be 2x2; it can be 4x1 or 1x4. [00093] Referring to Figures 18a and 18b, the different possible block sizes are summarized for CU, PU and TU blocks, with Y referring to luma blocks and C referring in a generic sense to a representative one of the chroma blocks, and the numbers referring to pixels. [00097] Note that 64x64 is currently a maximum CU size, but this restriction could change. [00098] Within each row 1100 ... 1130, different PU options are shown as applicable to that CU size. The TU options applicable to those PU configurations are shown horizontally aligned with the respective PU options. [00099] Note that in several cases, multiple PU options are provided. As discussed above, the aim of the apparatus in selecting a PU configuration is to match (as closely as possible) the boundaries of real objects in the picture, so that different prediction parameters can be applied to different real objects. [000100] The block sizes and shapes of the PUs are an encoder-based decision, under the control of the controller 343. The current method involves conducting trials of many TU tree structures in many directions, obtaining the best "cost" at each level. Here, the cost may be expressed as a measure of the distortion, or noise, or errors, or bit rate resulting from each block structure. So, the encoder may try two or more (or even all available) permutations of block sizes and shapes within those permitted under the tree structures and hierarchies discussed above, before selecting the one of the trials which gives the lowest bit rate for a certain required quality measure, or the lowest distortion (or errors, or noise, or combinations of these measures) for a required bit rate, or a combination of these measures. [000101] Given the selection of a particular PU configuration, various levels of splitting may be applied to generate the corresponding TUs.
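Before working row by row through Figures 18a and 18b, the level-based TU splitting just described can be summarized in a short sketch (Python; the function name is hypothetical, and the limits are the ones given above: a largest TU of 32x32, a smallest of 4x4 and a maximum tree depth of 2 levels):

```python
def allowed_tu_sizes(cu_size, max_levels=2, min_tu=4, max_tu=32):
    """TU sizes reachable from a CU by 0..max_levels of quad-splitting,
    clipped to the smallest/largest allowed TU sizes."""
    base = min(cu_size, max_tu)   # a 64x64 CU must split at least once
    sizes = []
    for level in range(max_levels + 1):
        s = base >> level         # each split level halves the dimensions
        if s >= min_tu:
            sizes.append(s)
    return sizes

print(allowed_tu_sizes(64))   # [32, 16, 8] -- the mandatory first split, then 2 levels
print(allowed_tu_sizes(16))   # [16, 8, 4]
```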
Referring to row 1100, in the case of a 64x64 PU, this block size is too large for use as a TU, and so a first level of splitting (from "level 0" (no split) to "level 1") is compulsory, resulting in an array of four 32x32 luma TUs, each of which may be subjected to further splitting in a tree hierarchy (from "level 1" to "level 2") as required, with the splitting being carried out before the transform or quantization of that TU is performed. The maximum number of levels in the TU tree is limited by (for example) the HEVC documents. [000102] Other options are provided for PU sizes and shapes in the case of a 64x64 luma pixel CU. These are restricted to use only with inter-coded pictures and, in some cases, with the so-called AMP option enabled. AMP refers to Asymmetric Motion Partitioning and allows PUs to be partitioned asymmetrically. [000103] Similarly, in some cases options are provided for TU sizes and shapes. If NSQT (non-square quad-tree transform, which basically allows a non-square TU) is enabled, then splitting at level 1 and/or level 2 can be carried out as shown, whereas if NSQT is not enabled, the TU sizes follow the splitting pattern of the respective largest TU for that CU size. [000104] Similar options are provided for other CU sizes. [000105] In addition to the graphical representation shown in Figures 18a and 18b, the numerical part of the same information is provided in the following table, though the presentation in Figures 18a and 18b is considered definitive. "n/a" indicates a mode which is not allowed. The horizontal pixel size is recited first. If a third figure is given, it relates to the number of instances of that block size, as in (horizontal) x (vertical) x (number of instances) blocks. N is an integer.
CU 64x64:
  PU options: 64x32x2 (horizontal configuration); 64x16 + 64x48 (horizontal configuration 2); 32x64x2 (vertical configuration); 16x64 + 48x64 (vertical configuration 2)
  TU options: 32x32x4; 32x8x4; 8x32x4
CU 32x32:
  PU options: 32x16x2 (horizontal configuration); 32x8 + 32x24 (horizontal configuration 2); 16x32x2 (vertical configuration); 8x32 + 24x32 (vertical configuration 2)
  TU options: 32x8x4; 8x32x4; 16x4x4 (luma) + 4x4x4 or 8x4x4 (chroma) (4:2:0 or 4:2:2); 4x16x4 (luma) + 4x4x4 or 4x8x4 (chroma) (4:2:0 or 4:2:2)
CU 16x16:
  PU options: 16x8x2 (horizontal configuration); 16x4 + 16x12 (horizontal configuration 2); 8x16x2 (vertical configuration); 4x16 + 12x16 (vertical configuration 2)
  TU options: 16x4x4 (luma) + 4x8x4 (chroma) (4:2:0 or 4:2:2); 16x4x4 (luma) + 8x4x4 (chroma) (4:2:2); 4x4x4 (luma) + 4x8x1 (chroma) (4:2:0 or 4:2:2); 4x4x4 (luma) + 8x4x1 (chroma) (4:2:2)
CU 8x8:
  PU options: 8x8; 8x4x2 (horizontal configuration); 4x8x2 (vertical configuration)
  TU options: 4x4x4 (luma) + 4x4x4 (chroma); 4x4x4 (luma) + 4x8x1 (chroma); n/a
Block structure variants 4:2:0, 4:2:2 and 4:4:4 [000106] It has been appreciated that both the 4:2:0 and 4:4:4 schemes have square PU blocks for intra-prediction encoding. Moreover, currently the 4:2:0 scheme permits 4x4 pixel PU and TU blocks. [000107] In embodiments, it is therefore proposed that for the 4:4:4 scheme, the recursion for CU blocks is permitted down to 4x4 pixels rather than 8x8 pixels, since, as noted above, in the 4:4:4 mode the luma and chroma blocks will be the same size (that is, the chroma data is not subsampled), and so for a 4x4 CU no PU or TU need be less than the minimum allowable 4x4 pixels.
This is therefore an example of selecting, in respect of a particular coding unit, a size and shape of one or more prediction units each comprising luminance and chrominance samples of at least a subset of that coding unit, the selected prediction unit size and shape being the same for the luminance samples and for the chrominance samples. [000108] Similarly, in the 4:4:4 scheme, in one embodiment of the present description, each of the Y, Cr and Cb channels, or the Y channel and the two Cr, Cb channels together, could have respective CU tree hierarchies. A flag may then be used to signal which hierarchy or arrangement of hierarchies is to be used. This approach could also be used for a 4:4:4 RGB color space scheme. In an alternative, however, the tree hierarchies for chroma and luma may instead be independent. [000109] In the example of an 8x8 CU in the 4:2:0 scheme, this results in four 4x4 luma PUs and one 4x4 chroma PU. Consequently, in the 4:2:2 scheme, having twice as much chroma data, one option in this case is to have two 4x4 chroma PUs, where (for example) the bottom chroma block would correspond in position to the bottom-left luma block. However, it has been appreciated that using one non-square 4x8 chroma PU in this case would be more consistent with the arrangements for the 4:2:0 chroma format. [000110] In the 4:2:0 scheme, there are in principle some non-square TU blocks permitted for certain classes of inter-prediction encoding, but not for intra-prediction encoding. But in inter-prediction encoding, when non-square quad-tree (NSQT) transforms are disabled (which is the current default for the 4:2:0 scheme), all TUs are square. Consequently, in effect the 4:2:0 scheme currently enforces square TUs. For example, a 16x16 4:2:0 luma TU would correspond with respective 8x8 4:2:0 Cb and Cr chroma TUs. [000112] For example, whereas a 16x16 4:2:2 luma TU could correspond with two respective 8x8 4:2:2 Cb and Cr TUs, in this embodiment it could instead correspond with respective 8x16 4:2:2 Cb and Cr chroma TUs. [000113] Similarly, four 4x4 4:2:2 luma TUs could correspond with two respective 4x4 4:2:2 Cb + Cr TUs, or in this embodiment could instead correspond with respective 4x8 4:2:2 Cb and Cr TUs. [000114] Having non-square chroma TUs, and therefore fewer TUs, can be more efficient, as they are likely to contain less information. However, this may affect the transform and scanning processes of such TUs, as will be described later. [000115] Finally, for the 4:4:4 scheme, it may be preferable to have the TU structure channel-independent, and selectable at the sequence, picture, slice or finer level. [000116] As noted above, NSQT is currently disabled in the 4:2:0 scheme of HEVC. However, if, for inter-image prediction, NSQT is enabled and asymmetric motion partitioning (AMP) is permitted, this allows PUs to be partitioned asymmetrically; thus, for example, a 16x16 CU may have a 4x16 PU and a 12x16 PU. In these circumstances, further considerations of block structure are important for each of the 4:2:0 and 4:2:2 schemes. [000117] For the 4:2:0 scheme, in NSQT the minimum width/height of a TU may be restricted to 4 luma/chroma samples. [000118] Consequently, in a non-limiting example, a 16x4/16x12 luma PU structure has four 16x4 luma TUs and four 4x4 chroma TUs, where the luma TUs are in a 1x4 vertical block arrangement and the chroma TUs are in a 2x2 block arrangement.
[000119] In a similar arrangement in which the partitioning was vertical rather than horizontal, a 4x16/12x16 luma PU structure has four 4x16 luma TUs and four 4x4 chroma TUs, where the luma TUs are in a 4x1 horizontal block arrangement and the chroma TUs are in a 2x2 block arrangement. [000120] For the 4:2:2 scheme, in NSQT, as a non-limiting example, a 4x16/12x16 luma PU structure has four 4x16 luma TUs and four 4x8 chroma TUs, where the luma TUs are in a 4x1 horizontal block arrangement and the chroma TUs are in a 2x2 block arrangement. [000121] However, it has been appreciated that a different structure can be considered for some cases. Consequently, in one embodiment of the present description, in NSQT, as a non-limiting example, a 16x4/16x12 luma PU structure has four 16x4 luma TUs and four 8x4 chroma TUs, but now the luma and chroma TUs are in a 1x4 vertical block arrangement, aligned with the PU layout (rather than the 4:2:0-style arrangement of four 4x8 chroma TUs in a 2x2 block arrangement). [000122] Similarly, a 32x8 PU can have four 16x4 luma TUs and four 8x4 chroma TUs, but now the luma and chroma TUs are in a 2x2 block arrangement. [000123] Consequently, more generally, for the 4:2:2 scheme, in NSQT the TU block sizes are selected to align with the asymmetric PU block layout. Consequently, NSQT usefully allows TU boundaries to align with PU boundaries, which reduces high-frequency artifacts that might otherwise occur. [000124] In general terms, embodiments of the description can relate to a video encoding method, apparatus or program operable in respect of images of a 4:2:2 format video signal. An image to be encoded is partitioned into coding units, prediction units and transform units for encoding, a coding unit being a square array of luminance samples and the corresponding chrominance samples, there being one or more prediction units in a coding unit, and there being one or more transform units in a coding unit; in which a prediction unit is an elementary unit of prediction, so that all samples within a single prediction unit are predicted using a common prediction technique, and a transform unit is a basic unit of transformation and quantization. [000125] A non-square transform mode (such as an NSQT mode) is enabled so as to allow non-square prediction units. Optionally, asymmetric motion partitioning is enabled so as to allow asymmetry between two or more prediction units corresponding to a single coding unit. [000126] The controller 343 controls the selection of transform unit block sizes to align with the prediction unit block layout, for example by detecting image features in the portion of the image corresponding to a PU and selecting TU block sizes in respect of that PU so as to align TU boundaries with edges of image features in that portion of the image. [000127] The rules discussed above dictate which combinations of block sizes are available. The encoder may simply try different combinations. As discussed above, a trial may include two or more, up to all, of the available options. The trial encoding processes can be carried out according to a cost function metric, and a result selected according to an assessment of the cost function. [000129] A further possibility is that some encoders may use a fixed choice of block configuration, or may allow a limited subset of the combinations set out in the discussions above.
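The alignment rule described above rests on how a luma TU maps to a chroma TU under each sampling scheme. A minimal sketch (Python; the function name is hypothetical, and horizontal-only 4:2:2 subsampling is assumed, as throughout this description):

```python
def chroma_tu_size(luma_w, luma_h, scheme):
    """Chroma TU dimensions corresponding to a luma TU."""
    if scheme == "4:2:0":
        return luma_w // 2, luma_h // 2
    if scheme == "4:2:2":
        return luma_w // 2, luma_h       # width halved, height preserved
    if scheme == "4:4:4":
        return luma_w, luma_h
    raise ValueError(scheme)

# The 16x4 luma TUs of the asymmetric 16x4/16x12 example above map, in
# 4:2:2, to the 8x4 chroma TUs that share the same 1x4 vertical layout:
print(chroma_tu_size(16, 4, "4:2:2"))   # (8, 4)
```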
Intraprediction
4:2:0 intraprediction
[000130] Turning now to Figure 22, for intraprediction, HEVC allows angular chroma prediction. [000131] By way of introduction, Figure 22 illustrates 35 prediction modes applicable to luma blocks, 33 of which specify directions to reference samples for a current predicted sample position 110. The remaining two modes are mode 0 (planar) and mode 1 (DC). [000132] HEVC allows chroma to have the DC, Vertical, Horizontal, Planar, DM CHROMA and LM CHROMA modes. [000133] DM CHROMA indicates that the prediction mode to be used is the same as that of the colocated luma PU (that is, one of the 35 shown in Figure 22). [000134] LM CHROMA (linear mode chroma) indicates that colocated luma samples (subsampled as appropriate to the channel ratio) are used to derive the predicted chroma samples. In this case, if the luma PU from which the DM CHROMA prediction mode would be taken selected DC, Vertical, Horizontal or Planar, that entry in the chroma prediction list is replaced using mode 34. In the LM CHROMA mode, the luma pixels from which the chroma pixels are predicted are scaled (and have an offset applied, if appropriate) according to a linear relationship between luma and chroma. This linear relationship is derived from the surrounding pixels, and the derivation can be carried out on a block-by-block basis, with the decoder finishing the decoding of one block before moving on to the next. [000135] It is notable that the prediction modes 2-34 sample an angular range from 45 degrees to 225 degrees, that is to say, the diagonal half of a square. This is useful in the case of the 4:2:0 scheme, which, as noted above, uses only square chroma PUs for intra-picture prediction.
4:2:2 intraprediction variants
[000136] However, also as noted above, the 4:2:2 scheme could have rectangular (non-square) chroma PUs even when the luma PUs are square. Or indeed, the opposite could be true: a rectangular luma PU could correspond to a square chroma PU. The reason for the discrepancy is that in 4:2:2, the chroma is subsampled horizontally (relative to the luma) but not vertically. So the aspect ratio of a luma block and of a corresponding chroma block would be expected to be different. [000137] Consequently, in one embodiment, for chroma PUs having a different aspect ratio to the corresponding luma block, [000138] Consequently, more generally, for non-square PUs, a different mapping between the direction of the reference sample and the selected intraprediction mode can be provided compared with that for square PUs. [000139] More generally still, any of the modes, including the non-directional modes, may also be remapped based upon empirical evidence. [000140] It is possible that such a mapping will result in a many-to-one relationship, making the specification of the full set of modes redundant for 4:2:2 chroma PUs. In this case, for example, it may be that only 17 modes (corresponding to half the angular resolution) are needed. Alternatively or in addition, the modes may be distributed angularly in a non-uniform manner. [000141] Similarly, the smoothing filter used on the reference sample when predicting the pixel at the sample position may be used differently; in the 4:2:0 scheme it is only used to smooth luma pixels, but not chroma ones. However, in the 4:2:2 and 4:4:4 schemes, this filter may also be used for the chroma PUs.
In the 4:2:2 scheme, the filter can again be modified in response to the different aspect ratio of the PU, for example only being used for a subset of near-horizontal modes. An example subset of modes is preferably 2-18 and 34, or more preferably 7-14. In 4:2:2, smoothing of only the left column of reference samples can be performed in embodiments.
[000142] These arrangements are discussed in more detail later.
[000143] In the 4:4:4 scheme, the chroma and luma PUs are the same size, so the intraprediction mode for a chroma PU can either be the same as that of the co-located luma PU (thereby saving some data in the bit stream by not having to encode a separate mode), or, alternatively, it can be selected independently.
[000145] In a first example, the Y, Cb and Cr PUs can all use the same intraprediction mode.
[000146] In a second example, the Y PU can use one intraprediction mode, and the Cb and Cr PUs both use another, independently selected, intraprediction mode.
[000147] In a third example, the Y, Cb and Cr PUs each use a respective, independently selected, intraprediction mode.
[000148] It will be appreciated that having independent prediction modes for the chroma channels (or for each chroma channel) will improve the colour prediction accuracy. But this comes at the cost of additional side data to communicate the independent prediction modes as part of the encoded data.
[000149] To alleviate this, the selection of the number of modes could be indicated in the high-level syntax (for example at the sequence, frame or slice level). Alternatively, the number of independent modes could be derived from the video format; for example, GBR could have up to 3, while YCbCr could be restricted to up to 2.
[000150] In addition to selecting the modes independently, the available modes may be allowed to differ from those of the 4:2:0 scheme in the 4:4:4 scheme.
[000151] For example, as the luma and chroma PUs are the same size in 4:4:4, the chroma PU can benefit from access to all 35 prediction modes available for luma.
[000152] Where luma prediction modes are signalled by deriving a list of most probable modes and sending an index into that list, then if the chroma prediction modes are independent, it may be necessary to derive independent lists of most probable modes for each channel.
[000153] Finally, in a manner similar to that noted for the 4:2:2 case above, in the 4:4:4 scheme the smoothing filter used on the reference samples when predicting the pixel at the sample position can be used for chroma PUs in a similar way to luma PUs. Currently, a [1,2,1] low-pass filter can be applied to the reference samples before intraprediction. This is only used for luma TUs when certain prediction modes are in use.
[000154] One of the intraprediction modes available to chroma TUs is to base the predicted samples on co-located luma samples. Such an arrangement is illustrated schematically in Figure 19, which shows an array of TUs 1200 (from a region of a source image) represented by small squares in the Cb, Cr and Y channels, showing the spatial alignment between image features (shown schematically by light and dark boxes 1200) in the Cb and Y channels and in the Cr and Y channels. In this example, it is beneficial to force the chroma TUs to base their predicted samples on co-located luma samples. However, it is not always the case that image features correspond between the three channels.
[000155] Indeed, in embodiments of the description, for Cr TUs, LM Chroma may optionally be based on co-located samples from the Cb channel (or, in other embodiments, the dependency could be the other way around). Such an arrangement is shown schematically in Figure 20. Here, spatially aligned TUs are illustrated for the Cr, Cb and Y channels. A further set of TUs labelled "source" is a schematic representation of the colour picture as seen as a whole. The image features (a top-left triangle and a bottom-right triangle) seen in the source image do not in fact represent changes in luminance, but only changes in chrominance between the two triangular regions. In this case, basing LM Chroma for Cr on the luminance samples would produce a poor prediction, but basing it on the Cb samples could give a better prediction.
[000156] The decision as to which LM Chroma mode to use can be made by the controller 343 and/or the mode controller 520, based on trial encoding of the different options (including the option of basing LM Chroma on co-located luma samples or on co-located chroma samples), with the decision as to which mode to select being made by evaluating a cost function, similar to that described above, in respect of the different trial encodings. Examples of the cost function are noise, distortion, error rate and bit rate. A mode, from among those subjected to trial encoding, that gives the lowest value of any one or more of these cost functions is selected.
[000157] Figure 21 schematically illustrates a method used to obtain reference samples for intraprediction in embodiments of the description. In viewing Figure 21, it should be borne in mind that encoding is carried out according to a scanning pattern, such that, in general terms, encoded versions of the blocks above and to the left of a current block to be encoded are available to the encoding process. Sometimes lower-left or upper-right samples are used, if they have previously been encoded as part of other TUs already encoded within the current LCU. Reference is made to Figure 13 as described above, for example.
[000159] In 4:2:0 and 4:2:2, the column of pixels immediately to the left of the current TU does not contain co-located luminance and chrominance samples, because of the horizontal subsampling. In other words, this is because the 4:2:0 and 4:2:2 formats have half as many chrominance pixels as luminance pixels (in the horizontal direction), so not every luminance sample position has a co-located chrominance sample. Therefore, although luminance samples may be present in the column of pixels immediately to the left of the TU, chrominance samples are not. Therefore, in embodiments of the description, the column located two samples to the left of the current TU is used to provide reference samples for LM Chroma. Note that the situation is different in 4:4:4, since there the column immediately to the left of the current TU does contain co-located luma and chroma samples. This column could therefore be used to provide the reference samples.
[000160] The reference samples are used as follows.
[000161] In LM Chroma mode, predicted chroma samples are derived from reconstructed luma samples according to a linear relationship. Thus, in general terms, it can be said that the predicted chrominance values within the TU are given by:

    Pc = a + b.PL

where Pc is a chrominance sample value, PL is the reconstructed luminance sample value at that sample position, and a and b are constants.
The constants are derived, for a particular block, by detecting the relationship between reconstructed luma and chroma samples in the row just above that block and in the column just to the left of that block, these being sample positions that have already been encoded (see above).
[000162] In embodiments of the description, the constants a and b are derived as follows:

    a = R(PL', PC') / R(PL', PL')

where R represents a linear regression (least squares) function, and PL' and PC' are luminance and chrominance samples respectively from the adjacent row and column as discussed above, and:

    b = mean(PC') - a.mean(PL')

[000163] For 4:4:4, the PL' and PC' values are taken from the column immediately to the left of the current TU, and the row immediately above the current TU. For 4:2:2, the PL' and PC' values are taken from the row immediately above the current TU and from the column in the adjacent block which is two sample positions away from the left edge of the current TU. For 4:2:0 (which is subsampled both vertically and horizontally) the PL' and PC' values would ideally be taken from a row that is two rows above the current TU, but in fact are taken from a row in the adjacent block that is one sample position above the current TU, and from the column in the adjacent block that is two sample positions away from the left edge of the current TU. The reason is to avoid having to maintain an entire additional row of data in memory. So in this regard, 4:2:2 and 4:2:0 are treated in a similar way.
[000164] Therefore, these techniques apply to video encoding methods having a chrominance prediction mode in which a current block of chrominance samples representing a region of the image is encoded by deriving and encoding a relationship of the chrominance samples with respect to a co-located block of luminance samples (such as reconstructed luminance samples) representing the same region of the image. The relationship (such as the linear relationship) is derived by comparing co-located (otherwise expressed as correspondingly located) luminance and chrominance samples from adjacent, already-encoded blocks. The chrominance samples are derived from the luminance samples according to the relationship; and the difference between the predicted chrominance samples and the actual chrominance samples is encoded as residual data.
[000165] In relation to a first sampling resolution (such as 4:4:4), where the chrominance samples have the same sampling rate as the luminance samples, the co-located samples are samples at sample positions adjacent to the current block.
[000166] In relation to a second sampling resolution (such as 4:2:2 or 4:2:0), where the chrominance samples have a lower sampling rate than that of the luminance samples, a nearest column or row of co-located luminance and chrominance samples from the adjacent, already-encoded block is used to provide the co-located samples. Or, in the case of the second sampling resolution being a 4:2:0 sampling resolution, the correspondingly located samples may be a row of samples adjacent to the current block and a nearest column or row of correspondingly located luminance and chrominance samples from adjacent, already-encoded blocks.
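A minimal numeric sketch of the derivation in paragraph [000162] follows, interpreting R(.,.) as the sample covariance so that a is covariance over variance. It assumes the neighbouring reconstructed luma samples (PL') and chroma samples (PC') have already been gathered into two equal-length lists; the sample gathering, bit-depth handling and fixed-point arithmetic of a real codec are omitted.

    def lm_chroma_params(pl, pc):
        # Least-squares fit of pc ~ a*pl + b over neighbouring
        # reconstructed samples (pl = luma, pc = chroma).
        n = len(pl)
        mean_l = sum(pl) / n
        mean_c = sum(pc) / n
        # R(PL', PC') / R(PL', PL'): covariance over variance
        cov = sum((l - mean_l) * (c - mean_c) for l, c in zip(pl, pc))
        var = sum((l - mean_l) ** 2 for l in pl)
        a = cov / var if var else 0.0
        b = mean_c - a * mean_l
        return a, b

    def predict_chroma(recon_luma_block, a, b):
        # Pc = a*PL + b applied to each co-located reconstructed luma sample
        return [[a * l + b for l in row] for row in recon_luma_block]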
[000167] Figure 22 schematically illustrates the prediction angles available for luma samples. The current pixel being predicted is shown at the centre of the diagram as a pixel 1220. The smaller dots 1230 represent adjacent pixels. Those located on the top and left sides of the current pixel are available as reference samples for generating a prediction, because they have previously been encoded. Other pixels are currently unknown (at the time of predicting the pixel 1220) and will in due course be predicted themselves.
[000168] Each numbered prediction direction points to the reference samples 1230, from within a group of candidate reference samples at the top or left edges of the current block, that are used to generate the current predicted pixel. In the case of smaller blocks, where the prediction directions point to locations between reference samples, a linear interpolation between adjacent reference samples (on either side of the sample position pointed to by the direction indicated by the current prediction mode) is used.
[000170] However, for chroma samples in 4:2:2, it can be considered counter-intuitive to use the same prediction algorithm and direction as for luma when DM CHROMA is selected, since the chroma blocks now have a different aspect ratio to that of the luma blocks. For example, a 45 degree line for a square array of luma samples should still map to a 45 degree line for the chroma samples, albeit with an array of rectangular samples. Overlaying the rectangular grid onto a square grid indicates that the 45 degree line would then in fact map to a 26.6 degree line.
[000171] Figure 23 schematically illustrates the luma intraprediction directions as applied to chroma pixels in 4:2:2, in respect of a current pixel 1220 to be predicted. Note that there are half as many pixels horizontally as there are vertically, because 4:2:2 has half the horizontal sample rate in the chroma channel compared to the luma channel.
[000172] Figure 24 schematically illustrates the transformation or mapping of the 4:2:2 chroma pixels onto a square grid, and subsequently how this transformation changes the prediction directions.
[000173] The luma prediction directions are shown as broken lines 1240. The chroma pixels 1250 are remapped onto a square grid, giving a rectangular array 1260 half the width of the corresponding luma array (such as the one shown in Figure 22). The prediction directions shown in Figure 23 have been remapped onto the rectangular array. It can be seen that for some pairs of directions (one pair being a luma direction and a chroma direction) there is either an overlap or a close relationship. For example, direction 2 in the luma array substantially overlies direction 6 in the chroma array. However, it will also be noted that some luma directions, approximately half of them, have no corresponding chroma direction. An example is the luma direction numbered 3. Also, some chroma directions (2-5) have no equivalent in the luma array, and some luma directions (31-34) have no equivalent in the chroma array. But in general, the overlap as shown in Figure 24 demonstrates that it would be inappropriate to use the same angle for both the luma and chroma channels.
[000174] Therefore, in order to derive the appropriate chroma prediction angle when (a) DM CHROMA is selected and (b) the DM CHROMA mode currently in use indicates that the chroma prediction direction should be that of the co-located luma block, the following procedure is applied:
(i) derive the intraprediction angle step and its inverse according to the luma direction, according to the usual HEVC rules;
(ii) if the luma direction is predominantly vertical (that is, for example, a mode numbered from 18 to 34 inclusive), then the intraprediction angle step is halved and its inverse is doubled;
(iii) otherwise, if the luma direction is predominantly horizontal (that is, for example, a mode numbered from 2 to 17 inclusive), then the intraprediction angle step is doubled and its inverse is halved.
[000175] Consequently, these embodiments relate to video encoding and decoding methods, apparatus or programs in which luminance and chrominance samples are predicted from other respective reference samples according to a prediction direction associated with a sample to be predicted. In modes such as 4:2:2, the chrominance samples have a lower horizontal and/or vertical sampling rate than the luminance samples, so that the ratio of horizontal luminance resolution to horizontal chrominance resolution differs from the ratio of vertical luminance resolution to vertical chrominance resolution. In short, this means that a block of luminance samples has a different aspect ratio to that of a corresponding block of chrominance samples.
[000176] The intra-frame predictor 530, for example, is operable to detect a first prediction direction defined in relation to a grid of a first aspect ratio in respect of a set of current samples to be predicted; and to apply a direction mapping to the prediction direction so as to generate a second prediction direction defined in relation to a grid of samples of a different aspect ratio of the same set of current samples to be predicted.
[000177] In embodiments, the first prediction direction is defined with respect to one of the luminance and chrominance samples, and the second prediction direction is defined with respect to the other of the luminance and chrominance samples. In the particular examples discussed in the present description, the luminance prediction direction can be modified to provide the chrominance prediction direction. But the other way round could be used.
[000178] The technique is particularly applicable to intraprediction, so that the reference samples are samples from the same respective image as the samples to be predicted.
[000179] In at least some arrangements, the first prediction direction is defined with respect to a square block of luminance samples including the current luminance sample; and the second prediction direction is defined with respect to a rectangular block of chrominance samples including the current chrominance sample.
[000181] The video data can be in a 4:2:2 format or a 4:4:4 format, for example.
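By way of illustration of the direction mapping discussed above (the angle-step rule of paragraph [000174]), a minimal sketch follows. The function name and the representation of the angle parameters are assumptions made for the example; the derivation of the HEVC angle step and its inverse is taken as given (passed in), and only the halving/doubling rule is shown.

    def remap_angle_for_422_chroma(luma_mode, angle_step, inv_angle):
        # Adjust the intraprediction angle step (and its inverse) when a
        # 4:2:2 chroma PU inherits its direction from the co-located luma PU.
        if 18 <= luma_mode <= 34:        # predominantly vertical direction
            return angle_step // 2, inv_angle * 2
        elif 2 <= luma_mode <= 17:       # predominantly horizontal direction
            return angle_step * 2, inv_angle // 2
        return angle_step, inv_angle     # modes 0 and 1 are not angular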
[000182] In general terms, embodiments of the description can provide independent prediction modes for the chrominance components (for example, for each of the luminance and chrominance components separately). These embodiments relate to video encoding methods in which luminance and chrominance samples of an image are predicted from other respective reference samples of the same image according to a prediction direction associated with a sample to be predicted, the chrominance samples having a lower horizontal and/or vertical sampling rate than the luminance samples, so that the ratio of horizontal luminance resolution to horizontal chrominance resolution differs from the ratio of vertical luminance resolution to vertical chrominance resolution, such that a block of luminance samples has a different aspect ratio to that of a corresponding block of chrominance samples, and the chrominance samples representing first and second chrominance components.
[000183] The intra-frame mode selector 520 selects a prediction mode defining a selection of one or more reference samples for predicting a current chrominance sample of the first chrominance component (such as Cb). It also selects a different prediction mode defining a different selection of one or more reference samples for predicting a current chrominance sample of the second chrominance component (such as Cr), co-located with the current chrominance sample of the first chrominance component.
[000184] A reference sample filter can optionally be applied to the horizontal samples or to the vertical samples (or both). As discussed above, the filter can be a normalized 3-tap "1 2 1" filter, currently applied to all luma reference samples except the bottom-left and top-right (the samples of an NxN block are gathered together to form a single 1D array of size 2N+1, and then optionally filtered). In embodiments of the description, the filter is applied to only the first (left edge) or last (top edge) N+1 chroma samples for 4:2:2, noting however that the bottom-left, top-right and top-left samples would then not be adjusted; or to all chroma samples (as for luma), for 4:2:2 and 4:4:4.
[000185] Embodiments can also provide video encoding or decoding methods, apparatus or programs in which samples of luminance and of first and second chrominance components are predicted from other respective reference samples according to a prediction direction associated with a sample to be predicted, involving predicting samples of the second chrominance component from samples of the first chrominance component.
[000186] Embodiments can also provide video encoding or decoding methods, apparatus or programs in which samples of luminance and of first and second chrominance components are predicted from other respective reference samples according to a prediction direction associated with a sample to be predicted, involving filtering the reference samples.
[000188] Note that modes 0 and 1 are not angular prediction modes and so are not included in this procedure. The effect of the procedure shown above is to map the chroma prediction directions onto the luma prediction directions in Figure 24.
[000189] For 4:2:0, when either a purely horizontal prediction mode (luma mode 10) or a purely vertical prediction mode (luma mode 26) is selected, the top or left edges of the predicted TU are subject to filtering, for the luma channel only. For the horizontal prediction mode, the top row is filtered in the vertical direction. For the vertical prediction mode, the left column is filtered in the horizontal direction.
[000190] Filtering a column of samples in the horizontal direction can be understood as applying a horizontally oriented filter to each sample in turn of the column of samples. Thus, for an individual sample, its value will be modified by the action of the filter, based on a filtered value generated from the current value of that sample and of one or more other samples at sample positions displaced from that sample in the horizontal direction (that is, one or more other samples to the left and/or right of the sample in question).
[000191] Filtering a row of samples in the vertical direction can be understood as applying a vertically oriented filter to each sample in turn of the row of samples. Thus, for an individual sample, its value will be modified by the action of the filter, based on a filtered value generated from the current value of that sample and of one or more other samples at sample positions displaced from that sample in the vertical direction (that is, one or more other samples above and/or below the sample in question).
[000192] One purpose of the edge pixel filtering process described above is to aim to reduce block-based edge effects in the prediction, thereby aiming to reduce the energy in the residual image data.
[000193] In embodiments, a corresponding filtering process is also provided for chroma TUs in 4:4:4 and 4:2:2. Taking the horizontal subsampling into account, one proposal is to filter only the top row of the chroma TU in 4:2:2, but to filter both the top row and the left column (as appropriate, according to the selected mode) in 4:4:4. It is considered appropriate to filter only in these regions so as to avoid filtering out too much useful detail, which (if filtered out) would lead to increased energy in the residual data.
[000194] For 4:2:0, when DC mode is selected, one or both of the top and/or left edges of the predicted TU are subject to filtering, for the luma channel only. This is an example of a case in which, the luminance samples representing a luminance component and the respective chrominance samples representing two chrominance components, the filtering step is applied to a subset of the three components. The subset can consist of the luminance component. The filtering can involve filtering one or both of the left column of samples in the predicted block of samples and the top row of samples in the predicted block of samples.
[000195] The filtering can be such that in DC mode, the filter carries out an averaging operation of (1 x neighbouring outer sample + 3 x edge sample) / 4 for all samples on both edges. However, for the top-left sample, the filter function is (2 x current sample + 1 x upper sample + 1 x left sample) / 4. This is an example of an operation in which, in a DC mode in which a predicted sample is generated as a simple arithmetic mean of surrounding samples, the filtering step comprises filtering the left column of samples in the predicted block of samples and filtering the top row of samples in the predicted block of samples.
[000196] The H/V filter is an average between the neighbouring outer sample and the edge sample.
[000197] In some embodiments, this filtering process is also provided for chroma TUs in 4:4:4 and 4:2:2. Again, taking the horizontal subsampling into account, in some embodiments only the top row of chroma samples is filtered for 4:2:2, but the top row and left column of the chroma TU are filtered for 4:4:4.
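The DC-mode edge filtering of paragraphs [000195] and [000196] can be sketched as below. This is an illustration using the stated weights only; the integer rounding convention and the layout of the reference-sample arrays are assumptions made for the example.

    def filter_dc_block_edges(pred, left_refs, top_refs):
        # Smooth the top row and left column of a DC-predicted block.
        # pred: 2D list of predicted samples; left_refs/top_refs: the
        # neighbouring reconstructed reference samples outside the block.
        h = len(pred)
        w = len(pred[0])
        # Top-left corner: (2*current + 1*above + 1*left) / 4
        pred[0][0] = (2 * pred[0][0] + top_refs[0] + left_refs[0] + 2) >> 2
        for x in range(1, w):   # top row: (1*outer neighbour + 3*edge) / 4
            pred[0][x] = (top_refs[x] + 3 * pred[0][x] + 2) >> 2
        for y in range(1, h):   # left column: the same averaging, vertically
            pred[y][0] = (left_refs[y] + 3 * pred[y][0] + 2) >> 2
        return pred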
[000198] Therefore, this technique can be applied in relation to a video encoding or decoding method, apparatus or program in which luminance and chrominance samples in (for example) a 4:4:4 format or a 4:2:2 format are predicted from other respective samples according to a prediction direction associated with blocks of samples to be predicted.
[000199] In embodiments of the technique, a prediction direction is detected in relation to a current block to be predicted. A predicted block of chrominance samples is generated according to other chrominance samples defined by the prediction direction. If the detected prediction direction is substantially vertical (for example, being within +/- n angular modes of the exactly vertical mode, where n is (for example) 2), the left column of samples is filtered (for example, in the horizontal direction, using a horizontally oriented filter) in the predicted block of chrominance samples. Or, if the detected prediction direction is substantially horizontal (for example, being within +/- n angular modes of the exactly horizontal mode, where n is (for example) 2), the top row of samples is filtered (for example, in the vertical direction, using a vertically oriented filter) in the predicted block of chrominance samples. In each case, the operation may apply to only the left column or the top row, respectively. Then, the difference between the filtered predicted chrominance block and the actual chrominance block is encoded, for example as residual data. Alternatively, the test could be for an exactly vertical or horizontal mode rather than a substantially vertical or horizontal mode. The tolerance of +/- n could be applied to one of the tests (vertical or horizontal) but not to the other. In embodiments of the description, only the left column or the top row of the predicted block may be filtered, and the filtering may be carried out by a horizontally oriented filter or a vertically oriented filter, respectively.
[000200] The filtering can be carried out by the respective predictors 520, 530, acting as a filter in this regard.
[000201] After the filtering process, embodiments of the technique either encode a difference between the filtered predicted chrominance block and the actual chrominance block (at an encoder), or apply a decoded difference to the filtered predicted chrominance block so as to decode the block (at a decoder).
Inter-prediction
[000202] It is noted that HEVC inter-prediction already allows rectangular PUs, so the 4:2:2 and 4:4:4 modes are already compatible with the PU inter-prediction processing.
[000203] Each frame of a video image is a discrete sampling of a real scene, and as a result each pixel is an approximation of a real-world gradient in colour and brightness.
[000204] In recognition of this, when predicting the Y, Cb or Cr value of a pixel in a new video frame from a value in a previous video frame, the pixels of that previous video frame are interpolated to create a higher-resolution version, so that motion can be represented to sub-pixel accuracy.
[000206] For example, for an 8x8 4:2:0 luma PU, the interpolation is to quarter (1/4) pixel accuracy, so an 8-tap x4 filter is applied horizontally first, and then the same 8-tap x4 filter is applied vertically, so that the luma PU is effectively stretched 4 times in each direction, forming an interpolated array 1320 as shown in the relevant figure.
[000207] A similar arrangement for 4:2:2 will now be described with reference to Figures 27 and 28, which illustrate a luma PU 1350 and a pair of corresponding chroma PUs 1360.
[000208] Referring to Figure 28, as previously noted, in the 4:2:2 scheme the chroma PU 1360 can be non-square, and in the case of an 8x8 4:2:2 luma PU, it will typically be a 4-wide by 8-high 4:2:2 chroma PU for each of the Cb and Cr channels. Note that the chroma PU is drawn, for the purposes of Figure 28, as a square-shaped array of non-square pixels, but in general terms it is noted that the PUs 1360 are arrays of 4 (horizontal) by 8 (vertical) pixels.
[000210] Consequently, Figure 27 shows the 8x8 4:2:2 luma PU 1350 interpolated as before with an 8-tap x4 filter, and the 4x8 4:2:2 chroma PU 1360 interpolated with the existing 4-tap x8 chroma filter in the horizontal and vertical directions, but with only the even fractional results used to form the interpolated image in the vertical direction.
[000211] These techniques are applicable to video encoding or decoding methods, apparatus or programs using inter-image prediction to encode input video data in which each chrominance component has 1/M-th of the horizontal resolution of the luminance component and 1/N-th of the vertical resolution of the luminance component, where M and N are integers equal to 1 or more. For example, for 4:2:2, M = 2, N = 1. For 4:2:0, M = 2, N = 2.
[000212] The frame store 570 is operable to store one or more images preceding a current image.
[000213] The interpolation filter 580 is operable to interpolate a higher-resolution version of prediction units of the stored images, so that the luminance component of an interpolated prediction unit has a horizontal resolution P times that of the corresponding portion of the stored image and a vertical resolution Q times that of the corresponding portion of the stored image, where P and Q are integers greater than 1. In the current examples, P = Q = 4, so that the interpolation filter 580 is operable to generate an interpolated image at 1/4-sample resolution.
[000215] The motion compensated predictor 540 is operable to generate a motion compensated prediction of the prediction unit of the current image in respect of an area of an interpolated stored image pointed to by a respective motion vector.
[000216] Returning to a discussion of the operation of the interpolation filter 580, embodiments of this filter are operable to apply an xR horizontal and xS vertical interpolation filter to the chrominance components of a stored image to generate an interpolated chrominance prediction unit, where R is equal to (U x M x P) and S is equal to (V x N x Q), U and V being integers equal to 1 or more; and to subsample the interpolated chrominance prediction unit, such that its horizontal resolution is divided by a factor of U and its vertical resolution is divided by a factor of V, thereby resulting in a block of MP x NQ samples.
[000217] Thus, in the case of 4:2:2, the interpolation filter 580 applies an x8 interpolation in the horizontal and vertical directions, but then subsamples vertically by a factor of 2, for example by using every 2nd sample in the interpolated output. One way to achieve this is to double an index value into the sample array. So, consider a direction (such as the vertical direction in this example) across the sample array which is 8 samples long, indexed as 0 to 7. A sample required in the subsampled range is indexed in the range 0 to 3. Doubling this index gives the values 0, 2, 4 and 6, which can then be used to access samples in the original array such that every alternate sample is used. This is an example of selecting a subset of samples of the interpolated chrominance prediction unit.
[000219] In embodiments, as discussed, the interpolated chrominance prediction unit has a height in samples twice that of a 4:2:0 format prediction unit interpolated using the same xR and xS interpolation filters.
[000220] The need to provide different filters can be avoided or alleviated using these techniques, and in particular by using the same xR horizontal and xS vertical interpolation filters in respect of 4:2:0 input video data and 4:2:2 input video data.
[000221] As discussed, the subsampling of the interpolated chrominance prediction unit can comprise using every V-th sample of the interpolated chrominance prediction unit in the vertical direction, and/or using every U-th sample of the interpolated chrominance prediction unit in the horizontal direction. More generally, the subsampling can comprise selecting a subset of samples of the interpolated chrominance prediction unit.
[000222] Embodiments can involve deriving a luminance motion vector for a prediction unit; and independently deriving one or more chrominance motion vectors for that prediction unit.
[000223] In some embodiments, at least one of R and S is equal to 2 or more; in some embodiments, the xR horizontal and xS vertical interpolation filters are also applied to the luminance components of the stored image.
4:4:4 inter-prediction variants
[000224] By extension, the same principle of using only the even fractional results for the existing 4-tap x8 chroma filter can be applied both vertically and horizontally for the 8x8 4:4:4 chroma PUs.
[000225] Further to these examples, the x8 chroma filter can be used for all interpolation, including luma.
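By way of a worked illustration of paragraphs [000216] and [000217], a sketch under assumed parameters follows: for 4:2:2 chroma, M = 2, N = 1, P = Q = 4 and U = 1, V = 2 give R = S = 8, after which every 2nd row is kept. A trivial nearest-value upsampler stands in for the real 4-tap and 8-tap interpolation filters.

    def interpolate(block, R, S):
        # Placeholder xR (horizontal) by xS (vertical) upsampler: nearest-value
        # repetition stands in for the real interpolation filters.
        h, w = len(block), len(block[0])
        return [[block[y // S][x // R] for x in range(w * R)]
                for y in range(h * S)]

    def subsample(block, U, V):
        # Keep every U-th column and V-th row; V=2 corresponds to the
        # index doubling described above (indices 0..3 -> rows 0, 2, 4, 6).
        return [row[::U] for row in block[::V]]

    # 4:2:2 chroma: M=2, N=1, P=Q=4, U=1, V=2 => R = U*M*P = 8, S = V*N*Q = 8
    chroma_pu = [[10, 20], [30, 40]]            # toy 2x2 chroma block
    interp = interpolate(chroma_pu, R=8, S=8)   # 16 wide x 16 high
    result = subsample(interp, U=1, V=2)        # 16 wide x 8 high; each input
    # sample now spans MP x NQ = 8 x 4 output samples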
Further inter-prediction variants
[000226] In one implementation of motion vector (MV) derivation, one vector is produced for a PU in a P-slice (and two vectors for a PU in a B-slice (where a P-slice takes predictions from a preceding frame, and a B-slice takes predictions from a preceding and a following frame, in a similar way to MPEG P and B frames)). Notably, in this implementation in the 4:2:0 scheme, the vectors are common to all the channels and, moreover, the chroma data need not be used to calculate the motion vectors. In other words, all the channels use a motion vector based on the luma data.
[000227] In one embodiment, in the 4:2:2 scheme, the chroma vector could be derived so as to be independent of luma (that is, a single vector for the Cb and Cr channels could be derived separately), and in the 4:4:4 scheme, the chroma vectors could further be independent for each of the Cb and Cr channels.
Transforms
[000228] In HEVC, most images are encoded as motion vectors with respect to previously encoded/decoded frames, the motion vectors telling the decoder where, in those other decoded frames, to copy good approximations of the current image from. The result is an approximate version of the current image. HEVC then encodes the so-called residual, which is the error between that approximate version and the correct image. This residual requires much less information than specifying the actual image directly. However, it is still generally preferable to compress this residual information so as to reduce the overall bit rate further.
[000230] The spatial frequency transforms used in HEVC are conventionally ones that generate coefficients in powers of 4 (for example 64 frequency coefficients), as this is particularly amenable to common quantization/compression methods. The square TUs in the 4:2:0 scheme are all powers of 4, and therefore this is straightforward to achieve.
[000231] If the NSQT options are enabled, some non-square transforms are available for non-square TUs, such as 4x16, but again, notably, these result in 64 coefficients, that is, again a power of 4.
[000232] The 4:2:2 scheme can result in non-square TUs that are not powers of 4; for example, a 4x8 TU has 32 pixels, and 32 is not a power of 4.
[000233] In one embodiment, therefore, a non-square transform for a non-power-of-4 number of coefficients can be used, recognising that modifications may be required in the subsequent quantization process.
[000234] Alternatively, in one embodiment, non-square TUs are divided into square blocks having a power-of-4 area for transformation, and then the resulting coefficients can be interleaved.
[000235] For example, for 4x8 blocks, the odd/even vertical samples can be divided into two square blocks. Alternatively, for 4x8 blocks, the top 4x4 pixels and the bottom 4x4 pixels could form two square blocks. Alternatively again, for 4x8 blocks, a Haar wavelet decomposition can be used to form a lower-frequency and a higher-frequency 4x4 block.
[000236] Any of these options can be made available, and the selection of a particular alternative can be signalled to, or derived by, the decoder.
Other transform modes
[000237] In the 4:2:0 scheme, there is a proposed flag (the so-called 'transquant bypass' flag) allowing residual data to be included in the bit stream losslessly (that is, without being transformed, quantized or further filtered). In the 4:2:0 scheme, the flag applies to all channels.
[000238] Consequently, such embodiments represent a video encoding or decoding method, apparatus or program in which luminance and chrominance samples are predicted and the difference between the samples and the respective predicted samples is encoded, making use of an indicator configured to indicate whether luminance difference data is to be included in an output bit stream losslessly; and independently to indicate whether chrominance difference data is to be included in the bit stream losslessly. Such a flag or flags (or indicator or indicators, respectively) can be inserted by the controller 343, for example.
[000239] In one embodiment, it is proposed that the flag for the luma channel be separate from those for the chroma channels. Consequently, for the 4:2:2 scheme, such flags should be provided separately for the luma channel and for the chroma channels, and for the 4:4:4 scheme, such flags should be provided either separately for the luma and chroma channels, or one flag is provided for each of the three channels. This recognises the increased chroma data rates associated with the 4:2:2 and 4:4:4 schemes, and enables, for example, lossless luma data together with compressed chroma data.
[000241] In video encoding methods, the various embodiments can be arranged so as to indicate whether luminance difference data is to be included in an output bit stream losslessly; and independently to indicate whether chrominance difference data is to be included in the bit stream losslessly, and to encode or include the relevant data in the manner defined by such indications.
Quantization
[000242] In the 4:2:0 scheme, the quantization calculation is the same for chrominance as for luminance. Only the quantization parameters (QPs) differ.
[000243] QPs for chrominance are calculated from the luminance QPs as follows:

    Qp_Cb = scalingTable[Qp_luminance + chroma_qp_index_offset]
    Qp_Cr = scalingTable[Qp_luminance + second_chroma_qp_index_offset]

[000244] Note that "chroma_qp_index_offset" and "second_chroma_qp_index_offset" may instead be referred to as cb_qp_offset and cr_qp_offset, respectively.
[000245] Chrominance channels typically contain less information than luminance and therefore have coefficients of smaller magnitude; this limitation on the chrominance QP can prevent all of the chrominance detail being lost at heavy quantization levels.
[000246] The QP-to-divisor relationship in 4:2:0 is logarithmic, such that an increase of 6 in QP is equivalent to a doubling of the divisor (the quantization step size discussed elsewhere in this description, noting however that it may be further modified by Q matrices before use). Consequently, the largest difference in the scaling table, of 51 - 39 = 12, represents a factor-of-4 change in the divisor.
[000247] However, in one embodiment of the present description, for the 4:2:2 scheme, which contains potentially twice as much chroma information as the 4:2:0 scheme, the maximum chrominance QP value in the scaling table can be raised to 45 (that is, halving the divisor). Similarly, for the 4:4:4 scheme, the maximum chrominance QP value in the scaling table can be raised to 51 (that is, the same divisor). In this case, the scaling table is in effect redundant, but it can be retained simply for operational efficiency (that is, so that the system works by reference to a table in the same way for each scheme).
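A minimal sketch of this chroma QP derivation follows. The table is modelled as a simple identity mapping with a clamp, which is a stand-in for the actual HEVC scaling table rather than a reproduction of it; the offset names follow paragraph [000244], the per-format maxima follow paragraph [000247], and the 4:2:0 maximum of 39 is inferred from the 51 - 39 = 12 difference noted in paragraph [000246].

    def chroma_qp(luma_qp, qp_offset, chroma_format):
        # Hypothetical stand-in for the chroma QP scaling table:
        # identity mapping clamped at a per-format maximum.
        max_qp = {'4:2:0': 39, '4:2:2': 45, '4:4:4': 51}[chroma_format]
        index = luma_qp + qp_offset       # e.g. cb_qp_offset or cr_qp_offset
        return min(index, max_qp)

    qp_cb = chroma_qp(luma_qp=40, qp_offset=1, chroma_format='4:2:2')  # 41
    qp_cr = chroma_qp(luma_qp=50, qp_offset=0, chroma_format='4:2:2')  # 45 (clamped)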
Consequently, more generally, in one embodiment of the present description, the chroma QP divisor is modified in response to the amount of information in the coding scheme relative to the 4:2:0 scheme.
[000248] Consequently, embodiments apply to a video encoding or decoding method operable to quantize blocks of frequency-transformed luminance and chrominance component video data in a 4:4:4 or 4:2:2 format according to a selected quantization parameter which defines a quantization step size. A quantization parameter association (such as, for example, the appropriate table in Figure 29a or 29b) is defined between the luminance and chrominance quantization parameters, where the association is such that a maximum chrominance quantization step size is less than a maximum luminance quantization step size for the 4:2:2 format (for example, 45), but equal to the maximum luminance quantization step size for the 4:4:4 format (for example, 51). The quantization process operates in that each component of the frequency-transformed data is divided by a respective value derived from the respective quantization step size, and the result is rounded to an integer, to generate a corresponding block of quantized spatial frequency data.
[000249] It will be appreciated that the dividing and rounding steps are indicative examples of a generic quantizing stage, according to the respective quantization step size (or data derived from it, for example by the application of Q matrices).
[000250] Embodiments include the step of selecting a quantization parameter or index (QP for luminance) for quantizing the spatial frequency coefficients, the quantization parameter acting as a reference into a respective one of a set of quantization step sizes, according to the QP tables applicable to luminance data.
[000252] In embodiments, as discussed above, a maximum luminance quantization parameter is 51; a maximum chrominance quantization parameter is 45 for the 4:2:2 format; and a maximum chrominance quantization parameter is 51 for the 4:4:4 format.
[000253] In embodiments, the first and second offsets can be communicated in association with the encoded video data.
[000254] In 4:2:0, the transform matrices A are initially created (by the transform unit 340) from those of a true, normalized NxN DCT A', using:

    A_ij = int(64 x sqrt(N) x A'_ij)

where i and j indicate a position within the matrix. This scaling with respect to a normalized transform matrix provides an increase in precision, avoids the need for fractional calculations and increases the internal precision.
[000255] Ignoring differences due to the rounding of A_ij, since X is multiplied by both A and A^T (the transposition of the matrix A), the resulting coefficients differ from those of a true, normalized MxN DCT (M = height; N = width) by a common scaling factor of:

    (64 x sqrt(N)) x (64 x sqrt(M)) = 4096 x sqrt(N.M)

[000256] Note that the common scaling factor could be different to this example. Note also that X is matrix-multiplied by both A and the transposition of A.
[000258] To reduce the requirement for internal bit precision, the coefficients are shifted to the right (using positive rounding) twice during the transform process:

    shift1 = log2(N) + bitDepth - 9
    shift2 = log2(M) + 6

[000259] As a result, the coefficients, as they leave the forward transform process and enter the quantizer, are effectively shifted to the left by:

    resultingShift = (12 + 0.5 log2(N.M)) - (shift1 + shift2)
                   = (12 + 0.5 log2(N) + 0.5 log2(M)) - (log2(N) + bitDepth - 9 + log2(M) + 6)
                   = 15 - (0.5 log2(N) + 0.5 log2(M) + bitDepth)
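Purely as a worked check of these formulae, with the block dimensions and bit depth as example inputs:

    import math

    def resulting_shift(N, M, bit_depth):
        # Effective left shift of coefficients entering the quantizer, per the
        # derivation above: 15 - (0.5*log2(N) + 0.5*log2(M) + bitDepth).
        shift1 = int(math.log2(N)) + bit_depth - 9   # first right shift
        shift2 = int(math.log2(M)) + 6               # second right shift
        return (12 + 0.5 * math.log2(N * M)) - (shift1 + shift2)

    print(resulting_shift(8, 8, 8))   # square 8x8, bitDepth 8 -> 4.0 (an integer)
    print(resulting_shift(4, 8, 8))   # 4x8 (a 4:2:2 chroma TU) -> 4.5 (not an integer)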
[000260] In 4:2:0, the frequency-separated (for example, DCT) coefficients generated by the frequency transform are a factor of 2^resultingShift larger than those that a normalized DCT would produce.
[000261] In some embodiments, the blocks are either square or rectangular with a 2:1 aspect ratio. Therefore, for a block size of N x M, either: N = M, in which case the resulting shift is an integer and S = N = M = sqrt(N.M); or 0.5N = 2M or 2N = 0.5M, in which case S = sqrt(N.M) and the resulting shift is again an integer.
[000264] This bit shift operation is possible because the resulting shift is an integer.
[000265] Note also that the QP-to-divisor relationship (quantization parameter or index to divisor) follows a base-2 power curve, as mentioned above, in which an increase in QP of 6 has the effect of doubling the divisor, while an increase in QP of 3 has the effect of increasing the divisor by a factor of sqrt(2) (the square root of 2).
[000266] Due to the chroma format in 4:2:2, there are more width:height (N:M) ratios of TU:
- N = M (from before), where S = N = M = sqrt(N.M) (resulting shift is an integer);
- 0.5N = 2M and 2N = 0.5M (from before), where S = sqrt(N.M) (resulting shift is an integer);
- N = 2M, where S = sqrt(N.M);
- 2N = M, where S = sqrt(N.M);
- 4N = 0.5M, where S = sqrt(N.M);
with resultingShift = 15 - (log2(S) + bitDepth).
[000267] In these three latter situations, the resulting shift is not an integer. For example, this can apply where at least some of the blocks of video data samples comprise M x N samples, where the square root of N/M is not equal to an integer power of 2. Such block sizes can occur in respect of chroma samples in some of the present embodiments.
[000268] Therefore, in such cases, the following techniques are pertinent; that is, in video encoding or decoding methods, apparatus or programs operable to generate blocks of quantized spatial frequency data by performing a frequency transform on blocks of video data samples using a transform matrix comprising an array of integer values each scaled with respect to respective values of a normalized transform matrix by an amount dependent on a dimension of the transform matrix, and to quantize the spatial frequency data according to a selected quantization step size, the step of frequency-transforming a block of video data samples by matrix-multiplying the block by the transform matrix and by the transposition of the transform matrix generates a block of scaled spatial frequency coefficients which are each larger, by a common scaling factor (for example, 2^resultingShift), than the spatial frequency coefficients that would result from a normalized frequency transform of that block of video data samples.
[000269] Therefore, in the quantization stage, an appropriate bit shift operation cannot be used to cancel the operation in a simple manner.
[000270] A solution to this is proposed as follows:
[000271] At the quantizer stage, apply a right shift of:

    quantTransformRightShift = 15 - log2(S') - bitDepth

[000272] where the value S' is derived such that:

    resultingShift - quantTransformRightShift = +1/2

[000273] The difference of 1/2 between the shifts is equivalent to multiplication by sqrt(2); that is, at this point the coefficients are sqrt(2) times larger than they should be, making the bit shift an integer bit shift.
[000275] Therefore, these steps can be summarized (in the context of a video encoding or decoding method (or corresponding apparatus or program) operable to generate blocks of quantized spatial frequency data by performing a frequency transform on blocks of video data samples using a transform matrix comprising an array of integer values each scaled with respect to respective values of a normalized transform matrix, and to quantize the spatial frequency data according to a selected quantization step size, involving frequency-transforming a block of video data samples by matrix-multiplying the block by the transform matrix and by the transposition of the transform matrix to generate a block of scaled spatial frequency coefficients which are each larger, by a common scaling factor, than the spatial frequency coefficients that would result from a normalized frequency transform of that block of video data samples) as follows: select a quantization step size for quantizing the spatial frequency coefficients; apply an n-bit shift (for example, quantTransformRightShift) to divide each of the scaled spatial frequency coefficients by a factor of 2^n, where n is an integer; and detect a residual scaling factor (for example, resultingShift - quantTransformRightShift), being the common scaling factor divided by 2^n. For example, in the situation discussed above, the quantization step size is then modified according to the residual scaling factor to generate a modified quantization step size; and each of the scaled spatial frequency coefficients in the block is divided by a value dependent upon the modified quantization step size, and the result rounded to an integer, to generate the block of quantized spatial frequency data. As discussed, the modification of the quantization step size can be carried out simply by adding an offset to QP so as to select a different quantization step size when QP is mapped into the table of quantization step sizes.
[000276] The coefficients are now of the correct magnitude for the original QP.
[000277] The transform matrix can comprise an array of integer values each scaled with respect to respective values of a normalized transform matrix by an amount dependent on a dimension of the transform matrix.
[000278] It follows that the required value of S' can always be derived as follows:

    S' = sqrt(2.M.N)

[000279] As an alternative proposal, S' could be derived such that:

    resultingShift - quantTransformRightShift = -1/2

[000280] In this case, S' = sqrt(0.5.M.N), and the applied quantization parameter is (QP - 3).
[000281] In either of these cases (adding 3 to QP or subtracting 3 from QP), the step of selecting the quantization step size comprises selecting a quantization index (for example, QP), the quantization index defining a respective entry in a table of quantization step sizes, and the modifying step comprises changing the quantization index so as to select a different quantization step size, such that the ratio of the different quantization step size to the originally selected quantization step size is substantially equal to the residual scaling factor.
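A small sketch tying paragraphs [000271] to [000281] together follows. It is an illustration only: the rounding details of a real quantizer are omitted, and only the S' = sqrt(2.M.N) variant (with QP raised by 3) is shown.

    import math

    def quant_shift_and_qp(N, M, bit_depth, qp):
        # Choose S' = sqrt(2*M*N) so that quantTransformRightShift is an
        # integer, and compensate the residual sqrt(2) factor by adding 3 to QP.
        s_prime = math.sqrt(2 * M * N)
        right_shift = 15 - math.log2(s_prime) - bit_depth
        assert right_shift == int(right_shift)   # now an integer bit shift
        return int(right_shift), qp + 3          # residual factor folded into QP

    # 4x8 chroma TU at bitDepth 8: sqrt(N*M) is not a power of two, but...
    shift, adjusted_qp = quant_shift_and_qp(4, 8, 8, qp=30)
    print(shift, adjusted_qp)   # 4, 33: integer shift, with QP raised by 3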
[000282] This works particularly well where, as in the present embodiments, successive values of the quantization step sizes in the table are related logarithmically, so that a change in the quantization index (for example QP) of m (where m is an integer) represents a change in the quantization step size by a factor of p (where p is an integer greater than 1). In the present embodiments, m = 6 and p = 2, so that an increase of 6 in QP represents a doubling of the applied quantization step size, and a decrease of 6 in QP represents a halving of the resulting quantization step size.
[000283] As discussed above, the modification can be carried out by selecting a quantization index (for example, a base QP) in respect of the luminance samples; generating a quantization index offset, relative to the quantization index selected for the luminance samples, for samples of each, or both, of the chrominance components; changing the quantization index offset according to the residual scaling factor; and communicating the quantization index offset in association with the encoded video data. In HEVC embodiments, QP offsets for the two chroma channels are sent in the bit stream. These steps correspond to a system in which the QP offset (to account for the residual scaling factor) of +/- 3 could be incorporated into those offsets, or the offsets could be incremented/decremented when they are used to derive the chroma QP.
[000284] Note that the QP offset does not have to be +/- 3 if differently shaped blocks were used; it is just that +/- 3 represents an offset applicable to the block shapes and aspect ratios discussed above in relation to 4:2:2 video, for example.
[000285] In some embodiments, n (the bit shift as applied) is selected so that 2^n is greater than or equal to the common scaling factor. In other embodiments, n is selected so that 2^n is less than or equal to the common scaling factor. In embodiments of the description (using either of these arrangements), the bit shift n can be selected so that 2^n is the next nearest (in either direction) to the common scaling factor, so that the residual scaling factor represents a factor having a magnitude of less than 2.
[000286] In other embodiments, the modification of the quantization step size can be carried out simply by multiplying the quantization step size by a factor dependent on the residual scaling factor. That is, the modification need not involve modifying the index QP.
[000287] Note also that the quantization step size as discussed is not necessarily the actual quantization step size by which a transformed sample is divided. The quantization step size derived in this way can be further modified. For example, in some arrangements, the quantization step size is further modified by respective entries in a matrix of values (Q matrix), so that different final quantization step sizes are used at different coefficient positions in a quantized block of coefficients.
[000288] It is also notable that in the 4:2:0 scheme, the largest chroma TU is 16x16, whereas for the 4:2:2 scheme, 16x32 TUs are possible, and for the 4:4:4 scheme, 32x32 chroma TUs are possible. Consequently, in one embodiment of the present description, quantization matrices (Q matrices) for 32x32 chroma TUs are proposed. Similarly, Q matrices should be defined for non-square TUs such as the 16x32 TU, one embodiment being a subsampling of a larger square Q matrix.
[000289] Q matrices could be defined by any one of the following:
- values in a grid (as for the 4x4 and 8x8 Q matrices);
- interpolated spatially from smaller or larger matrices;
- in HEVC, larger Q matrices can be derived from respective groups of coefficients of smaller reference ones, or smaller matrices can be subsampled from larger matrices. Note that this interpolation or subsampling can be carried out within a channel ratio - for example, a larger matrix for a channel ratio can be interpolated from a smaller one for that channel ratio.
[000290] Taking a small example for illustrative purposes only, a particular matrix for one channel ratio could be defined, such as a 4x4 matrix in respect of 4:2:0:
    (a b)
    (c d)
where a, b, c and d are respective coefficients. This acts as a reference matrix.
[000291] Embodiments of the description could then define a set of difference values for a similarly sized matrix in respect of another channel ratio:
[000293] As a function of another Q matrix - for example, a scaling ratio relative to another matrix (so that each of a, b, c and d in the previous example is multiplied by the same factor, or has the same difference added to it). This reduces the data requirements for transmitting the difference or factor data.
[000294] Note that Q matrices can be referred to as Scaling Lists within the HEVC environment. In embodiments in which the quantization is applied after the scanning process, the scanned data may be a linear stream of successive data samples. In such cases, the concept of a Q matrix still applies, but the matrix (or Scaling List) may be considered as a 1xN matrix, such that the order of the N data values within the 1xN matrix corresponds to the order of the scanned samples to which the respective Q matrix value is to be applied. In other words, there is a 1:1 relationship between the data order in the scanned data, the spatial frequency according to the scan pattern, and the data order in the 1xN Q matrix.
[000295] Note that it is possible, in some implementations, to bypass or omit the DCT (frequency separation) stage, but to retain the quantization stage.
[000296] Other useful information includes an optional indicator of which other matrix the values are related to, that is, the previous channel or the first (primary) channel; for example, the matrix for Cr could be a scaled factor of a matrix for Y, or for Cb, as indicated.
[000297] Accordingly, embodiments of the description can provide a video encoding or decoding method (and a corresponding apparatus or computer program) operable to generate blocks of quantized spatial frequency data by (optionally) performing a frequency transform on blocks of video data samples and quantizing the video data (such as the spatial frequency data) according to a selected quantization step size and a matrix of data modifying the quantization step size for use at different respective block positions within an ordered block of samples (such as an ordered block of frequency-transformed samples), the method being operable in respect of at least two different chrominance subsampling formats.
[000298] For at least one of the chrominance subsampling formats, one or more quantization matrices are defined as one or more predetermined modifications with respect to one or more reference quantization matrices defined for a reference one of the chrominance subsampling formats.
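As an illustration of the difference-value and scaling-ratio ideas of paragraphs [000290] to [000293] (a sketch; the reference coefficient values and the chosen factor are arbitrary, not values from this description):

    # Reference Q matrix for one channel ratio (schematic 2x2 of coefficients a..d)
    reference = [[16, 18],
                 [20, 24]]

    # Variant defined as a matrix of differences relative to the reference
    diffs = [[0, 2],
             [2, 4]]
    derived_by_difference = [[r + d for r, d in zip(rr, dr)]
                             for rr, dr in zip(reference, diffs)]

    # Variant defined as a scaling ratio relative to the reference
    factor = 2
    derived_by_scaling = [[r * factor for r in row] for row in reference]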
[000299] In embodiments of the description, the defining step comprises defining one or more quantization matrices as a matrix of values each interpolated from a respective plurality of values of a reference quantization matrix. In other embodiments, the defining step comprises defining one or more quantization matrices as a matrix of values each subsampled from values of a reference quantization matrix. [000300] In embodiments of the description, the defining step comprises defining one or more quantization matrices as a matrix of differences with respect to corresponding values of a reference quantization matrix. [000301] In embodiments of the description, the defining step comprises defining one or more quantization matrices as a predetermined function of values of a reference quantization matrix. In such cases, the predetermined function may be a polynomial function. [000302] In embodiments of the description, one or both of the following is provided, for example as part of or in association with the encoded video data: (i) reference indicator data to indicate, with respect to the encoded video data, the reference quantization matrix; and (ii) modification indicator data to indicate, with respect to the encoded data values, the one or more predetermined modifications. [000303] These techniques are particularly applicable where two of the chrominance subsampling formats are the 4:4:4 and 4:2:2 formats. [000304] The number of Q matrices in HEVC 4:2:0 is currently 6 for each transform size: 3 for the corresponding channels, with one set for intra and one for inter. In the case of a 4:4:4 GBR scheme, it will be appreciated that either one set of quantization matrices could be used for all channels, or three respective sets of quantization matrices could be used. [000305] In embodiments of the description, at least one of the matrices is a 1xN matrix. This would be the case where (as described here) one or more of the matrices is in fact a Scaling List or the like, being an ordered 1xN linear array of coefficients. [000306] The proposed solutions involve increasing or decreasing the applied QP. However, this could be achieved in several ways: [000307] In HEVC, QP offsets for the two chroma channels are sent in the bit stream. The +/-3 could be incorporated into these offsets, or the offsets could be increased/decreased when they are used to derive the chroma QP. [000308] As discussed above, in HEVC (luma QP + chroma offset) is used as an index into a table in order to derive the chroma QP. This table could be modified to incorporate the +/-3 (that is, by increasing/decreasing the values of the original table by 3). [000309] Alternatively, after the chroma QP has been derived as in the normal HEVC process, the result could then be increased (or decreased) by 3. [000310] As an alternative to modifying the QP, a factor of sqrt(2) or 1/sqrt(2) can be used to modify the quantization coefficients. [000311] For forward/inverse quantization, the division/multiplication processes are implemented using (QP % 6) as an index into a table to obtain a quantization coefficient or quantization step size, inverseQStep/scaledQStep. (Here, QP % 6 means QP modulo 6.) Note that, as discussed above, this may not represent the final quantization step size that is applied to the transformed data; it may be further modified by Q matrices before use.
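A sketch of that table-driven derivation, with assumed table contents — the six entries below merely follow the shape of the scheme (a length-6 table covering one octave, extended by powers of two) and are not asserted to be the specification's values:

```python
STEP_TABLE = [40, 45, 51, 57, 64, 72]   # indexed by QP % 6; dummy values

def quant_step(qp):
    # Since an increase of 6 in QP doubles the step size, a 6-entry table
    # covering one octave is extended by an appropriate power of two.
    return STEP_TABLE[qp % 6] * (2 ** (qp // 6))

assert quant_step(16) == 2 * quant_step(10)   # +6 in QP doubles the step
```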
[000312] The predefined HEVC tables are of length 6, covering one octave (a doubling) of values. This is simply a means of reducing storage requirements; the tables are extended for actual use by selecting an entry in the table according to QP modulo 6, and then multiplying or dividing by an appropriate power of 2, depending on the difference between (QP - QP modulo 6) and a predetermined base value. [000313] This arrangement could be varied to allow an offset of +/-3 in the QP value. The offset can be applied in the table lookup process, or the modulo process discussed above could instead be carried out using the modified QP. Assuming the offset is applied in the table lookup, however, additional entries in the table can be provided as follows: [000314] One alternative is to extend the tables by 3 entries, where the new entries are as follows (for index values of 6-8). [000315] The example table shown in Figure 30 would be indexed by [(QP % 6) + 3] (a "QP increment method"), where the notation QP % 6 means "QP modulo 6". [000316] The example table shown in Figure 31 would be indexed by [(QP % 6) - 3] (a "QP decrement method"), with extra entries for the index values -1 to -3. [000317] Basic entropy coding comprises assigning code words to input data symbols, where the shortest available code words are assigned to the most probable symbols in the input data. [000318] This basic scheme can be further improved by recognizing that the symbol probability is often conditional on recent previous data, and consequently making the assignment process context adaptive. [000320] To extend entropy coding to the 4:2:2 scheme, which for example will use 4x8 chroma TUs rather than 4x4 TUs for an 8x8 luma TU, the context variables can optionally be provided by simply vertically repeating the equivalent CV selections. [000321] However, in one embodiment of the present description, the CV selections are not repeated for the top-left coefficients (that is, the high-energy, DC and/or low-spatial-frequency coefficients); instead, new CVs are derived. In this case, for example, a mapping may be derived from the luma map. This approach may also be used for the 4:4:4 scheme. [000322] During coding in the 4:2:0 scheme, a so-called zig-zag scan processes the coefficients in order from high to low frequencies. However, it is again noted that the chroma TUs in the 4:2:2 scheme can be non-square, and so in one embodiment of the present description a different chroma scan is proposed, with the scan angle being tilted to make it more horizontal or, more generally, responsive to the aspect ratio of the TU. [000323] Similarly, the neighborhood for significance-map CV selection, and the c1/c2 system for greater-than-one and greater-than-two CV selection, can be adapted accordingly. [000324] Likewise, in one embodiment of the present description, the last significant coefficient position (which becomes the starting point during decoding) could also be adjusted for the 4:4:4 scheme, with the last-significant positions for chroma TUs being coded differentially from the last-significant position in the co-located luma TU. [000325] The coefficient scanning can also be made prediction-mode dependent for certain TU sizes. Consequently, a different scan order can be used for some TU sizes, depending on the intra-prediction mode.
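To make the tilted-scan idea concrete, here is a minimal sketch (an illustrative assumption; a real codec would use fixed scan tables rather than sorting at run time): it generates a diagonal scan whose slope follows the TU aspect ratio, so a wide TU is swept more horizontally.

```python
def tilted_diagonal_scan(width, height):
    # Shear the usual anti-diagonal ordering by the aspect ratio, so an
    # 8x4 TU (slope 2) is scanned more horizontally than a square TU.
    slope = width / height
    positions = [(x, y) for y in range(height) for x in range(width)]
    positions.sort(key=lambda pos: (pos[0] + slope * pos[1], pos[1]))
    return positions

scan_8x4 = tilted_diagonal_scan(8, 4)   # more horizontal sweep
scan_4x8 = tilted_diagonal_scan(4, 8)   # more vertical sweep
```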
[000326] In the 4:2:0 scheme, mode dependent coefficient scanning (MDCS) is only applied to 4x4/8x8 luma TUs and 4x4 chroma TUs for intra-prediction. MDCS is used depending on the intra-prediction mode, with angles up to +/-4 from the horizontal and vertical being considered. [000327] In one embodiment of the present description, it is proposed that in the 4:2:2 scheme MDCS be applied to 4x8 and 8x4 chroma TUs for intra-prediction. Similarly, it is proposed that in the 4:4:4 scheme MDCS be applied to 8x8 and 4x4 chroma TUs. MDCS for 4:2:2 may only be carried out in the horizontal or vertical directions, and the angle ranges may differ for 4:4:4 chroma vs 4:4:4 luma vs 4:2:2 chroma vs 4:2:2 luma vs 4:2:0 luma. Deblocking loop filter [000328] Deblocking is applied to all CU, PU and TU boundaries, and the CU/PU/TU shape is not taken into account. The filter strength and size depend on local statistics, and deblocking has a granularity of 8x8 luma pixels. [000329] Consequently, it is anticipated that the current deblocking applied to the 4:2:0 scheme should also be applicable to the 4:2:2 and 4:4:4 schemes. Sample adaptive offset [000330] In sample adaptive offset (SAO), each channel is completely independent. SAO splits the image data for each channel using a quad-tree, and the resulting blocks are at least one LCU in size. The leaf blocks are aligned to LCU boundaries, and each leaf can run in one of three modes, as determined by the encoder ("central band offset", "side band offset" or "edge offset"). Each leaf categorizes its pixels, and the encoder derives an offset value for each of the 16 categories by comparing the SAO input data to the source data. These offsets are sent to the decoder. The offset for a decoded pixel's category is added to its value to minimize its deviation from the source. [000331] In addition, SAO is enabled or disabled at picture level; if enabled for luma, it can also be enabled separately for each chroma channel. SAO will therefore be applied to chroma only if it is applied to luma. [000332] Consequently, the process is largely transparent to the underlying block scheme, and it is anticipated that the current SAO applied to the 4:2:0 scheme should also be applicable to the 4:2:2 and 4:4:4 schemes. Adaptive loop filtering [000333] In the 4:2:0 scheme, adaptive loop filtering (ALF) is disabled by default. However, in principle (that is, if allowed) ALF would be applied to the entire frame for chroma. [000334] In ALF, luma samples may be sorted into one of several categories, as determined by the HEVC documents; each category uses a different Wiener-based filter. [000335] By contrast, in 4:2:0 chroma samples are not categorized — there is just one Wiener-based filter for Cb and one for Cr. [000336] Consequently, in an embodiment of the present description, and based on the increased chroma information in the 4:2:2 and 4:4:4 schemes, it is proposed that the chroma samples be categorized; for example, with K categories for 4:2:2 and J categories for 4:4:4. [000337] While in the 4:2:0 scheme ALF can be disabled for luma on a per-CU basis using an ALF control flag (down to the CU level specified by the ALF control depth), it can only be disabled for chroma on a per-frame basis. Note that in HEVC this depth is currently limited to the LCU level only.
[000338] Consequently, in an embodiment of the present description, the 4:2:2 and 4:4:4 schemes are provided with one or two channel-specific ALF control flags for chroma. [000339] In HEVC, syntax is already present to indicate the 4:2:0, 4:2:2 or 4:4:4 schemes, and it is indicated at the sequence level. However, in an embodiment of the present description, it is also proposed to indicate 4:4:4 GBR coding at this level. [000340] It will be appreciated that data signals generated by the variants of coding apparatus discussed above, and storage or transmission media carrying such signals, are considered to represent embodiments of the present description. [000341] Respective features of embodiments are defined by the following numbered clauses: [000342] Additional respective embodiments are defined by the following numbered clauses: [000343] Note that the detecting step may not be required in a decoding arrangement, as the decoder can use the motion vectors provided to it (for example, as part of the video data stream) by the encoder. 2. A method according to clause 1, where M = 2 and N = 1. 3. A method according to clause 2, in which the input video data is in a 4:2:2 format. 4. A method according to clause 3, in which the interpolated chrominance prediction unit has a height in samples twice that of a 4:2:0 format prediction unit interpolated using the same xR and xS interpolation filters. 5. A method according to clause 2, the method being separately operable, using the same horizontal xR and vertical xS interpolation filters, in relation to 4:2:0 input video data and 4:2:2 input video data. 6. A method according to any one of the preceding clauses, where P = 4. 7. A method according to any one of the preceding clauses, where Q = 4. 8. A method according to any one of the preceding clauses, in which the step of subsampling the interpolated chrominance prediction unit comprises using every V-th sample of the interpolated chrominance prediction unit in the vertical direction. 9. A method according to any one of the preceding clauses, in which the step of subsampling the interpolated chrominance prediction unit comprises using every U-th sample of the interpolated chrominance prediction unit in the horizontal direction. 10. A method according to any one of the preceding clauses, comprising: deriving a luminance motion vector for a prediction unit; and deriving one or more chrominance motion vectors independently for that prediction unit. 11. A method according to any one of the preceding clauses, in which at least one of R and S is equal to 2 or more. 12. A method according to any one of the preceding clauses, comprising applying the horizontal xR and vertical xS interpolation filter to the luminance components of the stored image. 13. A method according to any one of the preceding clauses, in which the subsampling comprises selecting a subset of samples of the interpolated chrominance prediction unit. 14.
A video encoding method using interimage prediction to encode video data in which each chrominance component has 1/M-th of the horizontal resolution of the luminance component and 1/N-th of the vertical resolution of the luminance component, where M and N are integers equal to 1 or more, the method comprising: storing one or more images preceding a current image; interpolating a higher resolution version of prediction units of the stored images, so that the luminance component of an interpolated prediction unit has a horizontal resolution P times that of the corresponding portion of the stored image and a vertical resolution Q times that of the corresponding portion of the stored image, where P and Q are integers greater than 1; detecting interimage motion between a current image and one or more interpolated stored images so as to generate motion vectors between a prediction unit of the current image and areas of the one or more previous images; and generating a motion compensated prediction of the prediction unit of the current image with respect to an area of an interpolated stored image pointed to by a respective motion vector; in which the interpolating step comprises: applying a horizontal xR and vertical xS interpolation filter to the chrominance components of a stored image to generate an interpolated chrominance prediction unit, where R is equal to (U x M x P) and S is equal to (V x N x Q), U and V being integers equal to 1 or more; and subsampling the interpolated chrominance prediction unit, such that its horizontal resolution is divided by a factor of U and its vertical resolution is divided by a factor of V, thereby resulting in a block of MP x NQ samples. 15. Computer software which, when executed by a computer, causes the computer to carry out a method according to any one of the preceding clauses. 16. A machine-readable, non-transitory storage medium storing software according to clause 15. 17. A data signal comprising encoded data generated according to the method of any one of clauses 1 to 14. 18. A video decoding apparatus using interimage prediction to decode input video data in which each chrominance component has 1/M-th of the horizontal resolution of the luminance component and 1/N-th of the vertical resolution of the luminance component, where M and N are integers equal to 1 or more, the apparatus comprising: an image store configured to store one or more images preceding a current image; an interpolator configured to interpolate a higher resolution version of prediction units of the stored images, so that the luminance component of an interpolated prediction unit has a horizontal resolution P times that of the corresponding portion of the stored image and a vertical resolution Q times that of the corresponding portion of the stored image, where P and Q are integers greater than 1; a detector configured to detect interimage motion
between a current image and one or more interpolated stored images, so as to generate motion vectors between a prediction unit of the current image and areas of the one or more previous images; and a generator configured to generate a motion compensated prediction of the prediction unit of the current image with respect to an area of an interpolated stored image pointed to by a respective motion vector; in which the interpolator is configured to: apply a horizontal xR and vertical xS interpolation filter to the chrominance components of a stored image to generate an interpolated chrominance prediction unit, where R is equal to (U x M x P) and S is equal to (V x N x Q), U and V being integers equal to 1 or more; and subsample the interpolated chrominance prediction unit, such that its horizontal resolution is divided by a factor of U and its vertical resolution is divided by a factor of V, thereby resulting in a block of MP x NQ samples. 19. A video encoding apparatus using interimage prediction to encode input video data in which each chrominance component has 1/M-th of the horizontal resolution of the luminance component and 1/N-th of the vertical resolution of the luminance component, where M and N are integers equal to 1 or more, the apparatus comprising: an image store configured to store one or more images preceding a current image; an interpolator configured to interpolate a higher resolution version of prediction units of the stored images, so that the luminance component of an interpolated prediction unit has a horizontal resolution P times that of the corresponding portion of the stored image and a vertical resolution Q times that of the corresponding portion of the stored image, where P and Q are integers greater than 1; a detector configured to detect interimage motion between a current image and one or more interpolated stored images, so as to generate motion vectors between a prediction unit of the current image and areas of the one or more previous images; and a generator configured to generate a motion compensated prediction of the prediction unit of the current image with respect to an area of an interpolated stored image pointed to by a respective motion vector; in which the interpolator is configured to: apply a horizontal xR and vertical xS interpolation filter to the chrominance components of a stored image to generate an interpolated chrominance prediction unit, where R is equal to (U x M x P) and S is equal to (V x N x Q), U and V being integers equal to 1 or more; and subsample the interpolated chrominance prediction unit, such that its horizontal resolution is divided by a factor of U and its vertical resolution is divided by a factor of V, thereby resulting in a block of MP x NQ samples. 20. Video capture, display, transmission, reception and/or storage apparatus comprising apparatus according to clause 18 or clause 19.
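The R, S, U, V arithmetic of the clauses above can be seen in a toy sketch (nearest-neighbour repetition stands in for the real interpolation filter, and all names are illustrative): for 4:2:2 with M=2, N=1, P=Q=4 and U=1, V=2, the same x8 filter is applied in both directions (R = S = 8), and alternate rows are then kept vertically.

```python
def interpolate(block, R, S):
    # xR horizontal / xS vertical interpolation; nearest-neighbour
    # repetition stands in for a real interpolation filter here.
    return [[block[y // S][x // R] for x in range(len(block[0]) * R)]
            for y in range(len(block) * S)]

def subsample(block, U, V):
    # Divide horizontal resolution by U and vertical resolution by V.
    return [row[::U] for row in block[::V]]

M, N, P, Q, U, V = 2, 1, 4, 4, 1, 2      # a 4:2:2 example
R, S = U * M * P, V * N * Q              # both equal 8 here
chroma = [[10, 20], [30, 40]]
result = subsample(interpolate(chroma, R, S), U, V)
# Each source sample now covers an MP x NQ (8 wide x 4 tall) region.
```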
Claims (11) [1] 1. Video decoding method for decoding a video signal, the method being operable in relation to 4:2:0 and 4:2:2 video format signals, the method characterized by the fact that it comprises: generating prediction units of a current image from one or more stored images preceding the current image, the generating step comprising: in relation to a 4:2:0 video format signal, deriving samples of an interpolated prediction unit by applying an 8-tap x4 luminance filter vertically and horizontally to a luminance prediction unit, and by applying a 4-tap x8 chrominance filter vertically and horizontally to a chrominance prediction unit; and in relation to a 4:2:2 video format signal, deriving samples of a chrominance prediction unit by applying the same 4-tap x8 chrominance filter horizontally and vertically to a chrominance prediction unit, but using only alternate fractional-position results as interpolated samples in the vertical direction. [2] 2. Method according to claim 1, characterized by the fact that the generating step comprises: accessing samples of the interpolated prediction unit pointed to by a motion vector, so as to generate a motion compensated prediction of a prediction unit of the current image with respect to an area represented by the samples of the interpolated prediction unit. [3] 3. Method according to claim 1, the method characterized by the fact that it is also operable in relation to 4:4:4 video data, the same 4-tap x8 chrominance filter being applied horizontally and vertically to generate samples of an interpolated chrominance prediction unit, but with only alternate fractional-position results being used as interpolated samples in the horizontal and vertical directions. [4] 4. Method according to claim 2, characterized by the fact that the motion vector is a sub-pixel precision motion vector. [5] 5. Method according to claim 1, characterized by the fact that the alternate fractional-position results are themselves fractional-position results. [6] 6. Video encoding method for encoding a video signal, the method being operable in relation to 4:2:0 and 4:2:2 video format signals, the method characterized by the fact that it comprises: generating motion vectors for the prediction of prediction units of a current image from one or more stored images preceding the current image, the generating step comprising: in relation to a 4:2:0 video format signal, deriving samples of an interpolated prediction unit by applying an 8-tap x4 luminance filter vertically and horizontally to a stored luminance prediction unit, and by applying a 4-tap x8 chrominance filter vertically and horizontally to a chrominance prediction unit; and in relation to a 4:2:2 video format signal, deriving samples of an interpolated chrominance prediction unit by applying the same 4-tap x8 chrominance filter horizontally and vertically, but using only alternate fractional-position results as interpolated samples in the vertical direction. [7] 7. Recording medium, characterized by the fact that it comprises instructions which, when executed by a computer, carry out a method as defined in any one of claims 1 to 5. [8] 8. Data signal, characterized by the fact that it comprises encoded data generated by the method as defined in claim 6. [9] 9.
Video decoding apparatus for decoding a video signal, the apparatus being operable in relation to 4:2:0 and 4:2:2 video format signals, the apparatus characterized by the fact that it comprises: an interpolation filter configured to generate prediction units of a current image from one or more stored images preceding the current image, by: in relation to a 4:2:0 video format signal, deriving samples of an interpolated prediction unit by applying an 8-tap x4 luminance filter vertically and horizontally to a stored luminance prediction unit, and by applying a 4-tap x8 chrominance filter vertically and horizontally to a chrominance prediction unit; and in relation to a 4:2:2 video format signal, deriving samples of an interpolated chrominance prediction unit by applying the same 4-tap x8 chrominance filter horizontally and vertically, but using only alternate fractional-position results as interpolated samples in the vertical direction. [10] 10. Video encoding apparatus for encoding a video signal, the apparatus being operable in relation to 4:2:0 and 4:2:2 video format signals, the apparatus characterized by the fact that it comprises: a motion estimator configured to generate motion vectors for the prediction of prediction units of a current image from one or more stored images preceding the current image, by: in relation to a 4:2:0 video format signal, deriving samples of an interpolated prediction unit by applying an 8-tap x4 luminance filter vertically and horizontally to a stored luminance prediction unit, and by applying a 4-tap x8 chrominance filter vertically and horizontally to a chrominance prediction unit; and in relation to a 4:2:2 video format signal, deriving samples of an interpolated chrominance prediction unit by applying the same 4-tap x8 chrominance filter horizontally and vertically, but using only alternate fractional-position results as interpolated samples in the vertical direction. [11] 11. Video capture, display, transmission, reception and/or storage apparatus comprising an apparatus as defined in claim 10.
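A schematic sketch of the 4:2:2 behaviour in the claims (illustrative code, not the codec's actual filter arithmetic): applying the same x8 vertical interpolation but keeping only alternate fractional positions leaves an effective x4 vertical resolution, so no second chroma filter design is needed.

```python
def x8_positions(n_samples):
    # All x8 fractional positions for a column of n integer samples.
    return [i / 8 for i in range(n_samples * 8)]

def chroma_422_vertical(n_samples):
    # Same x8 filter positions, but only alternate results are used as
    # interpolated samples, giving an effective x4 vertical resolution.
    return x8_positions(n_samples)[::2]

print(chroma_422_vertical(2))   # [0.0, 0.25, ..., 1.75] - x4 spacing
```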
components of video data| US11197001B2|2020-02-05|2021-12-07|Tencent America LLC|Method and apparatus for interactions between decoder-side intra mode derivation and adaptive intra prediction modes|
Legal status:
2020-07-14 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-08-18 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2020-08-18 | B15K | Others concerning applications: alteration of classification | Free format text: THE PREVIOUS CLASSIFICATIONS WERE: H04N 7/26, H04N 7/36; IPC: H04N 19/176 (2014.01), H04N 19/186 (2014.01), H04N
2021-11-03 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
Application No. | Publication No. | Priority date | Filing date | Patent title
GB1207459.7A | GB2501535A | 2012-04-26 | 2012-04-26 | Chrominance Processing in High Efficiency Video Codecs
GB1207459.7 | 2012-04-26
GB1211072.2A | GB2501550A | 2012-04-26 | 2012-06-22 | Subsampling Interpolated Chrominance Prediction Units in Video Encoding and Decoding
GB1211073.0 | 2012-06-22
GB1211073.0A | GB2501551A | 2012-04-26 | 2012-06-22 | Partitioning Image Data into Coding, Prediction and Transform Units in 4:2:2 HEVC Video Data Encoding and Decoding
GB1211072.2 | 2012-06-22
PCT/GB2013/051076 | WO2013160699A1 | 2012-04-26 | 2013-04-26 | Generating subpixel values for different color sampling formats