INTER-PLANE PREDICTION
Patent abstract:
Inter-plane prediction. The invention achieves a better rate-distortion ratio by making interrelationships between the coding parameters of different planes available for exploitation with the aim of reducing redundancy, despite the additional overhead resulting from the need to signal the inter-plane prediction information to the decoder. In particular, the decision to use inter-plane prediction or not can be made for a plurality of planes individually. Additionally or alternatively, the decision can be made on a block basis, considering a secondary plane.
Publication number: BR112012026400B1
Application number: R112012026400-1
Filing date: 2010-04-13
Publication date: 2021-08-10
Inventors: Heiner Kirchhoffer; Martin Winken; Heiko Schwarz; Detlev Marpe; Thomas Wiegand
Applicant: Ge Video Compression, Ll
IPC main class:
Patent description:
DESCRIPTION
The present invention relates to coding schemes for different spatially sampled information components of an image of a scene, provided in planes, each plane comprising an array of information samples, as in videos or still images.

In image and video coding, particular images, or the sets of sample arrays for the images, are generally decomposed into blocks, which are associated with certain coding parameters. The images usually consist of multiple sample arrays. In addition, an image may also be associated with additional auxiliary sample arrays, which may, for example, specify transparency information or depth maps. The sample arrays of an image (including auxiliary sample arrays) can be grouped into one or more so-called plane groups, where each plane group consists of one or more sample arrays. The plane groups of an image can be coded independently or, if the image is associated with more than one plane group, with prediction from other plane groups of the same image. Each plane group is usually decomposed into blocks. The blocks (or the corresponding blocks of the sample arrays) are predicted by either inter-picture prediction or intra-picture prediction. Blocks can be of different sizes and can be either square or rectangular. The partitioning of an image into blocks can either be fixed by the syntax, or it can be (at least partially) signaled within the bitstream. Syntax elements are often transmitted that signal the subdivision for blocks of predefined sizes. Such syntax elements may specify whether and how a block is subdivided into smaller blocks and associated coding parameters, e.g. for the purpose of prediction. For all samples of a block (or the corresponding blocks of the sample arrays), the decoding of the associated coding parameters is specified in a certain way. In an example, all samples in a block are predicted using the same set of prediction parameters, such as reference indices (identifying a reference image in the set of already coded images), motion parameters (specifying a measure of the movement of a block between a reference image and the current image), parameters specifying the interpolation filter, intra-prediction modes, etc. The motion parameters can be represented by displacement vectors with a horizontal and a vertical component, or by means of higher-order motion parameters such as affine motion parameters consisting of six components. It is also possible that more than one set of particular prediction parameters (such as reference indices and motion parameters) is associated with a single block. In that case, for each set of these particular prediction parameters, a single intermediate prediction signal for the block (or the corresponding blocks of the sample arrays) is generated, and the final prediction signal is constructed by a combination including superimposing the intermediate prediction signals. The corresponding weighting parameters, and possibly also a constant offset (which is added to the weighted sum), can be fixed for an image, a reference image or a set of reference images, or they can be included in the set of prediction parameters for the corresponding block. The difference between the original blocks (or the corresponding blocks of the sample arrays) and their prediction signals, also referred to as the residual signal, is usually transformed and quantized. Often, a two-dimensional transform is applied to the residual signal (or the corresponding sample arrays of the residual block).
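Where more than one set of prediction parameters is associated with a block, the final prediction just described is a weighted superposition of the intermediate prediction signals plus an optional constant offset. The following is a minimal illustrative Python sketch of that combination; all names and values are our own and not taken from the patent.

```python
# Minimal sketch: combine several intermediate prediction signals into a
# final prediction via weights and an additive offset, as described above.
def combine_predictions(intermediate_preds, weights, offset=0.0):
    """intermediate_preds: equally sized 2D lists, one per parameter set;
    weights: one scalar per intermediate prediction; offset: added constant."""
    h, w = len(intermediate_preds[0]), len(intermediate_preds[0][0])
    final = [[offset] * w for _ in range(h)]
    for pred, wgt in zip(intermediate_preds, weights):
        for y in range(h):
            for x in range(w):
                final[y][x] += wgt * pred[y][x]
    return final

# Example: two hypotheses with equal weights, as in classic bi-prediction.
p0 = [[100, 102], [104, 106]]
p1 = [[110, 108], [102, 100]]
print(combine_predictions([p0, p1], weights=[0.5, 0.5]))
```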
For transform coding, the blocks (or the corresponding blocks of the sample arrays) for which a certain set of prediction parameters has been used can be further divided before applying the transform. The transform blocks can be equal to or smaller than the blocks that are used for prediction. It is also possible for a transform block to include more than one of the blocks that are used for prediction. Different transform blocks can have different sizes, and the transform blocks can represent square or rectangular blocks. After the transform, the resulting transform coefficients are quantized and so-called transform coefficient levels are obtained. The transform coefficient levels, as well as the prediction parameters and, if present, the subdivision information, are entropy-coded.

In image and video coding standards, the possibilities for subdividing an image (or a plane group) into blocks that are provided by the syntax are very limited. Usually, it can only be specified whether, and (potentially) how, a block of a predefined size may be subdivided into smaller blocks. As an example, the largest block size in H.264 is 16x16. The 16x16 blocks are also referred to as macroblocks, and each image is divided into macroblocks in a first step. For each 16x16 macroblock, it can be signaled whether it is coded as a 16x16 block, or as two 16x8 blocks, or as two 8x16 blocks, or as four 8x8 blocks. If a 16x16 block is subdivided into four 8x8 blocks, each of these 8x8 blocks can be coded either as one 8x8 block, or as two 8x4 blocks, or as two 4x8 blocks, or as four 4x4 blocks. The reduced set of possibilities for specifying the block partitioning in prior-art image and video coding standards has the advantage that the side information rate for signaling the subdivision can be kept small, but it has the disadvantage that the bit rate required for transmitting the prediction parameters for the blocks can become significant, as explained below. The side information rate for signaling the prediction information does often represent a significant share of the overall bit rate of a block, and coding efficiency could be increased if this side information were reduced, which, for example, could be achieved by using larger block sizes. Real images, or the images of a video sequence, consist of arbitrarily shaped objects with specific properties. As an example, objects or parts of objects are characterized by a unique texture or a unique movement, and usually the same set of prediction parameters can be applied to such an object or part of an object. But object boundaries generally do not coincide with the possible block boundaries of large prediction blocks (e.g., 16x16 macroblocks in H.264). An encoder usually determines the subdivision (among the limited set of possibilities) that results in the minimum of a particular rate/distortion cost measure. For arbitrarily shaped objects, this can result in a large number of small blocks. And since each of these small blocks is associated with a set of prediction parameters that needs to be transmitted, the side information rate can become a significant part of the overall bit rate. However, since many of the small blocks still represent areas of the same object or of a part of an object, the prediction parameters for a number of the obtained blocks are the same or very similar. That is, the subdivision or tiling of an image into smaller portions or tiles or blocks substantially influences the coding efficiency and coding complexity.
As described above, subdividing an image into a greater number of smaller blocks allows a spatially finer adjustment of the coding parameters, in that it allows a better adaptation of the coding parameters to the image/video material. On the other hand, setting the coding parameters at a finer granularity places a greater burden on the amount of side information needed in order to inform the decoder of the necessary settings. Furthermore, it should be noted that any freedom for the encoder to (further) subdivide the image/video into blocks spatially vastly increases the number of possible coding parameter settings and thereby generally makes the search for the coding parameter setting that leads to the best rate/distortion compromise even more difficult.

It is an objective to provide a coding scheme for coding different spatially sampled information components of an image of a scene, provided in planes, each plane comprising an array of information samples, which allows achieving a better rate-distortion ratio. This object is achieved by a decoder according to claim 1 or 11, an encoder according to claim 18 or 19, methods according to any one of claims 16, 17, 20 and 21, a computer program according to claim 20, and a data stream according to claim 24 as well as a data stream according to claim 22 or 23.

An idea underlying the present invention is that a better rate-distortion ratio can be obtained when the interrelationships between the coding parameters of different planes are made available for exploitation for the purpose of redundancy reduction, despite the overhead resulting from the need to signal the inter-plane prediction information to the decoder. In particular, the decision to use inter-plane prediction or not can be made for a plurality of planes individually. Additionally or alternatively, the decision can be made on a block basis, considering a secondary plane.

According to one embodiment, the array of information samples representing the spatially sampled information signal is first spatially divided into tree-root regions, and then, according to multi-tree subdivision information extracted from a data stream, at least a subset of the tree-root regions is subdivided into smaller simply connected regions of different sizes by recursively multi-partitioning the subset of tree-root regions. To allow finding a good compromise, in a rate-distortion sense, between a too fine and a too coarse subdivision at reasonable coding complexity, the maximum region size of the tree-root regions into which the array of information samples is spatially divided is included within the data stream and is extracted from the data stream on the decoding side. Accordingly, a decoder may comprise an extractor for extracting a maximum region size and multi-tree subdivision information from a data stream, a subdivider configured to spatially divide an array of information samples representing a spatially sampled information signal into tree-root regions of the maximum region size and to subdivide, according to the multi-tree subdivision information, at least a subset of the tree-root regions into smaller simply connected regions of different sizes by recursively multi-partitioning the subset of tree-root regions, and a reconstructor configured to reconstruct the array of information samples from the data stream using the subdivision into the smaller simply connected regions.
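As a concrete illustration of the first step just described, the following Python sketch (our own, with hypothetical names) divides a sample array into a regular grid of tree-root regions whose edge length stands in for the maximum region size extracted from the data stream:

```python
# Split a width x height sample array into tree-root regions of at most
# max_region_size x max_region_size samples, row by row.
def tree_root_regions(width, height, max_region_size):
    """Yield (x, y, w, h) for each tree-root region."""
    for y in range(0, height, max_region_size):
        for x in range(0, width, max_region_size):
            yield (x, y,
                   min(max_region_size, width - x),
                   min(max_region_size, height - y))

# Example: an 80x48 array with a signaled maximum region size of 32.
for region in tree_root_regions(width=80, height=48, max_region_size=32):
    print(region)
```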
According to an embodiment of the present invention, the data stream also contains the maximum hierarchy level up to which the subset of tree-root regions is subjected to recursive multi-partitioning. By this measure, signaling the multi-tree subdivision information becomes easier and needs fewer bits for coding.

In addition, the reconstructor may be configured to perform one or more of the following measures at a granularity that depends on the intermediate subdivision: deciding which prediction mode to use among at least an intra- and an inter-prediction mode; transforming from the spectral to the spatial domain; performing and/or setting the parameters for an inter-prediction; performing and/or setting the parameters for an intra-prediction.

Furthermore, the extractor may be configured to extract the syntax elements associated with the leaf regions of the partitioned tree blocks in a depth-first traversal order from the data stream. By this measure, the extractor is able to exploit the statistics of already decoded syntax elements of neighboring leaf regions with a higher probability than when using a breadth-first traversal order.

According to another embodiment, a further subdivider is used in order to subdivide, according to further multi-tree subdivision information, at least a subset of the smaller simply connected regions into even smaller simply connected regions. The first-stage subdivision may be used by the reconstructor for performing the prediction of the area of information samples, while the second-stage subdivision may be used by the reconstructor for performing the retransformation from the spectral to the spatial domain. Defining the residual subdivision to be subordinate relative to the prediction subdivision renders the coding of the overall subdivision less time-consuming and, on the other hand, the restriction of the freedom for the residual subdivision resulting from the subordination has merely minor negative effects on the coding efficiency since, mostly, image parts having similar motion-compensation parameters are larger than image parts having similar spectral properties.

According to yet another embodiment, a further maximum region size is contained in the data stream, the further maximum region size defining the size of the tree-root sub-regions into which the smaller simply connected regions are first divided before subdividing at least a subset of the tree-root sub-regions, according to the further multi-tree subdivision information, into even smaller simply connected regions. This, in turn, enables an independent setting of the maximum region sizes of the prediction subdivision on the one hand and of the residual subdivision on the other hand, and therefore allows finding a better rate/distortion compromise.

According to yet another embodiment of the present invention, the data stream comprises a first subset of syntax elements disjoint from a second subset of syntax elements forming the multi-tree subdivision information, wherein a merger on the decoding side is able to combine, according to the first subset of syntax elements, spatially neighboring smaller simply connected regions of the multi-tree subdivision to obtain an intermediate subdivision of the array of samples. The reconstructor may be configured to reconstruct the array of samples using the intermediate subdivision.
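The depth-first extraction order mentioned above can be pictured with a small sketch. The following Python fragment (our own, with a hypothetical node layout) walks the leaves of a partitioned tree block in depth-first order, which is the order in which the syntax elements of those leaf regions would be extracted:

```python
# Hypothetical node layout: a split node is a list of four children in raster
# order (top-left, top-right, bottom-left, bottom-right); a leaf is None.
def depth_first_leaves(node, label=""):
    """Yield one label per leaf region in depth-first traversal order."""
    if node is None:
        yield label or "root"
        return
    for quadrant, child in enumerate(node):
        yield from depth_first_leaves(child, label + str(quadrant))

# A tree block whose third (bottom-left) sub-block is split once more.
tree = [None, None, [None, None, None, None], None]
print(list(depth_first_leaves(tree)))
# -> ['0', '1', '20', '21', '22', '23', '3']
```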
By the merging measure just described, it is easier for the encoder to adapt the effective subdivision to the spatial distribution of the properties of the array of information samples and to find an optimal rate/distortion compromise. For example, if the maximum region size is large, the multi-tree subdivision information is likely to become more complex due to the tree-root regions becoming larger. On the other hand, if the maximum region size is small, it becomes more likely that neighboring tree-root regions pertain to information content with similar properties, so that these tree-root regions could also have been processed together. Merging fills this gap between the aforementioned extremes, thereby enabling a nearly optimum granularity of subdivision. From the encoder's point of view, the merging syntax elements allow for a more relaxed or computationally less complex encoding procedure, since if the encoder wrongly uses a too fine subdivision, this error can be compensated later by subsequently setting the merging syntax elements, with or without adapting a small part of the syntax elements having been set before the merging syntax elements were set.

According to yet another embodiment, the maximum region size and the multi-tree subdivision information are used for the residual subdivision rather than for the prediction subdivision.

A depth-first traversal order, rather than a breadth-first traversal order, is used according to an embodiment for treating the simply connected regions of a quadtree subdivision of an array of samples representing a spatially sampled information signal. By using the depth-first traversal order, each simply connected region has a higher probability of having neighboring simply connected regions which have already been traversed, so that information regarding these neighboring simply connected regions can be positively exploited when reconstructing the respective current simply connected region.

When the array of information samples is first divided into a regular arrangement of tree-root regions of hierarchy level zero, and at least a subset of the tree-root regions is then subdivided into smaller simply connected regions of different sizes, the reconstructor may use a zigzag scan in order to scan the tree-root regions where, for each tree-root region to be partitioned, the simply connected leaf regions are treated in depth-first traversal order before stepping further to the next tree-root region in the zigzag scan order. Moreover, in accordance with the depth-first traversal order, simply connected leaf regions of the same hierarchy level may also be traversed in a zigzag scan order. Thus, the increased probability of having already traversed neighboring simply connected leaf regions is maintained.

According to one embodiment, although the flags associated with the nodes of the multi-tree structure are sequentially arranged in a depth-first traversal order, the sequential coding of the flags uses probability estimation contexts that are the same for flags associated with nodes of the multi-tree structure lying within the same hierarchy level of the multi-tree structure, but different for nodes of the multi-tree structure lying within different hierarchy levels of the multi-tree structure, thereby allowing a good compromise between the number of contexts to be provided on the one hand and the adaptation to the actual symbol statistics of the flags on the other hand.
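The per-level sharing of probability estimation contexts just described can be sketched as follows. This is a toy counter-based estimator of our own; an actual codec would feed such an estimate into an arithmetic coder, which is omitted here:

```python
# One adaptive probability estimate per hierarchy level, shared by all
# subdivision flags coded at that level.
class LevelContexts:
    def __init__(self, num_levels):
        # Laplace-style counts [number of 0s, number of 1s] per level.
        self.counts = [[1, 1] for _ in range(num_levels)]

    def prob_of_one(self, level):
        zeros, ones = self.counts[level]
        return ones / (zeros + ones)

    def update(self, level, bit):
        self.counts[level][bit] += 1

ctx = LevelContexts(num_levels=4)
# (hierarchy level, split flag) pairs in depth-first coding order.
for level, bit in [(0, 1), (1, 0), (1, 0), (1, 1), (2, 0), (2, 1)]:
    print(f"level {level}: P(split)={ctx.prob_of_one(level):.2f}, flag={bit}")
    ctx.update(level, bit)
```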
According to one embodiment, the probability estimation contexts used for a predetermined flag additionally depend on flags preceding the predetermined flag according to the depth-first traversal order and corresponding to areas of the tree-root region having a predetermined relative location relationship to the area to which the predetermined flag corresponds. Similar to the idea underlying the preceding aspect, the use of the depth-first traversal order guarantees a high probability that the flags already coded also comprise flags corresponding to areas neighboring the area corresponding to the predetermined flag, so that this knowledge may be used to better adapt the context to be used for the predetermined flag. The flags that may be used for setting the context of a predetermined flag may be those corresponding to the areas lying to the top and/or to the left of the area to which the predetermined flag corresponds. Furthermore, the flags used for selecting the context may be restricted to flags belonging to the same hierarchy level as the node with which the predetermined flag is associated.

According to an embodiment, the coded signaling comprises an indication of the highest hierarchy level and a sequence of flags associated with nodes of the multi-tree structure unequal to the highest hierarchy level, each flag specifying whether the associated node is an intermediate node or a descendant node, and a sequential decoding, in a depth-first or breadth-first traversal order, of the sequence of flags from the data stream takes place with nodes of the highest hierarchy level being skipped and automatically appointed leaf nodes, thereby reducing the coding rate. According to a further embodiment, the coded signaling of the multi-tree structure may comprise the indication of the highest hierarchy level. By this measure, it is possible to restrict the existence of flags to hierarchy levels other than the highest hierarchy level, since a further partitioning of blocks of the highest hierarchy level is excluded anyway.

In the case of the spatial multi-tree subdivision being part of a secondary subdivision of the leaf nodes of the non-partitioned tree-root regions of a primary multi-tree subdivision, the contexts used for coding the flags of the secondary subdivision may be selected such that the contexts are the same for flags associated with areas of the same size.

According to other embodiments, a favorable merging or grouping of the simply connected regions into which the array of information samples is subdivided is coded with a reduced amount of data. To this end, for the simply connected regions, a predetermined relative location relationship is defined, enabling an identification, for a predetermined simply connected region, of simply connected regions within the plurality of simply connected regions which have the predetermined relative location relationship to the predetermined simply connected region. Namely, if the number is zero, a merge indicator for the predetermined simply connected region may be absent from the data stream. Furthermore, if the number of simply connected regions having the predetermined location relationship to the predetermined simply connected region is one, the coding parameters of that simply connected region may be adopted, or may be used for a prediction of the coding parameters, for the predetermined simply connected region without the need for any further syntax element. Otherwise, that is, if the number of simply connected regions having the predetermined location relationship to the predetermined simply connected region is greater than one, the introduction of a further syntax element may be suppressed even if the coding parameters associated with these identified simply connected regions are identical to each other. According to one embodiment, if the coding parameters of the neighboring simply connected regions are unequal to each other, a reference neighbor identifier may identify a proper subset of the number of simply connected regions having the predetermined location relationship to the predetermined simply connected region, and this proper subset is used when adopting the coding parameters, or predicting the coding parameters, of the predetermined simply connected region.
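The case distinction just described (zero, one, or several candidate regions) can be expressed compactly. The sketch below is our own illustration, with dictionaries standing in for coding-parameter sets and the element names being hypothetical:

```python
def merge_syntax(candidates):
    """Return the merge-related syntax elements that would need coding.

    candidates: coding-parameter dicts of the regions that have the
    predetermined location relationship (e.g. top and left neighbors).
    """
    if not candidates:
        return []                      # zero candidates: no merge indicator
    syntax = ["merge_flag"]            # one or more candidates: signal merging
    distinct = {tuple(sorted(c.items())) for c in candidates}
    if len(candidates) > 1 and len(distinct) > 1:
        # several candidates with unequal parameters: a reference neighbor
        # identifier selects the proper subset to adopt / predict from
        syntax.append("reference_neighbor_id")
    return syntax

print(merge_syntax([]))                                 # []
print(merge_syntax([{"mv": (1, 0)}]))                   # ['merge_flag']
print(merge_syntax([{"mv": (1, 0)}, {"mv": (1, 0)}]))   # ['merge_flag']
print(merge_syntax([{"mv": (1, 0)}, {"mv": (0, 2)}]))   # both elements
```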
According to still other embodiments, a spatial subdivision of an array of samples representing a spatial sampling of a two-dimensional information signal into a plurality of simply connected regions of different sizes by recursive multi-partitioning is performed according to a first subset of syntax elements contained in the data stream, followed by a combination of spatially neighboring simply connected regions, depending on a second subset of syntax elements within the data stream which is disjoint from the first subset, to obtain an intermediate subdivision of the array of samples into disjoint sets of simply connected regions, the union of which is the plurality of simply connected regions. The intermediate subdivision is used when reconstructing the array of samples from the data stream. This renders the optimization with respect to the subdivision less critical, owing to the fact that a too fine subdivision can be compensated for by the merging later on. Furthermore, the combination of subdivision and merging enables intermediate subdivisions which would not be achievable by recursive multi-partitioning alone, so that the concatenation of the subdivision and the merging, using disjoint sets of syntax elements, allows the effective intermediate subdivision to better adapt to the actual content of the two-dimensional information signal. Compared to these advantages, the additional overhead resulting from the additional subset of syntax elements for indicating the merging details is negligible.

Preferred embodiments of the present invention are described below with respect to the following figures, among which:
Fig. 1 shows a block diagram of an encoder according to an embodiment of the present application;
Fig. 2 shows a block diagram of a decoder according to an embodiment of the present application;
Figs. 3a-c schematically show an illustrative example of a quadtree subdivision, in which Fig. 3a shows a first hierarchy level, Fig. 3b shows a second hierarchy level and Fig. 3c shows a third hierarchy level;
Fig. 4 schematically shows the tree structure for the illustrative quadtree subdivision of Figs. 3a to 3c according to an embodiment;
Figs. 5a,b schematically illustrate the quadtree subdivision of Figs. 3a to 3c and the tree structure with indices indexing the individual leaf blocks;
Figs. 6a,b schematically show binary strings or flag sequences representing the tree structure of Fig. 4 and the quadtree subdivision of Figs. 3a to 3c, respectively, according to different embodiments;
Fig. 7 shows a flowchart of the steps performed by a data stream extractor according to an embodiment;
Fig. 8 shows a flowchart illustrating the functionality of a data stream extractor according to a further embodiment;
Figs. 9a,b show schematic diagrams illustrating quadtree subdivisions with neighboring candidate blocks for a predetermined block to be merged, according to an embodiment;
Fig. 10 shows a flowchart of the functionality of a data stream extractor according to a further embodiment;
Fig. 11 schematically shows the composition of an image out of planes and plane groups and illustrates a coding using inter-plane adaptation/prediction according to an embodiment;
Figs. 12a and 12b schematically illustrate a subtree structure and the corresponding subdivision in order to illustrate the inheritance scheme according to an embodiment;
Figs. 12c and 12d schematically illustrate a subtree structure in order to illustrate the inheritance scheme with adoption and prediction, respectively, according to embodiments;
Fig. 13 shows a flowchart of the steps performed by an encoder realizing an inheritance scheme according to an embodiment;
Figs. 14a and 14b show a primary subdivision and a subordinate subdivision in order to illustrate a possibility of implementing an inheritance scheme in connection with inter-prediction according to an embodiment;
Fig. 15 shows a block diagram illustrating a decoding process in connection with the inheritance scheme according to an embodiment;
Fig. 16 shows a schematic diagram illustrating the scan order among the sub-regions of a multi-tree subdivision according to an embodiment, with the sub-regions being subject to intra-prediction;
Fig. 17 shows a block diagram of a decoder according to an embodiment;
Figs. 18a-c show schematic diagrams illustrating different possibilities of subdivisions according to further embodiments.

In the following description of the figures, elements occurring in several of these figures are indicated by common reference numerals, and a repeated explanation of these elements is avoided. Rather, explanations with respect to an element presented within one figure shall apply equally to the other figures in which the respective element occurs, as long as the explanation presented with those other figures does not indicate deviations therefrom.

Furthermore, the following description starts with embodiments of an encoder and a decoder, which are explained with respect to Figs. 1 to 11. The embodiments described with respect to these figures combine many aspects of the present application which, however, would also be advantageous if implemented individually within a coding scheme. Accordingly, with respect to the subsequent figures, embodiments are briefly discussed which exploit the aspects just mentioned individually, with each of these embodiments representing an abstraction of the embodiments described with respect to Figs. 1 and 11 in a different sense.

Fig. 1 shows an encoder according to an embodiment of the present invention. The encoder 10 of Fig. 1 comprises a predictor 12, a residual precoder 14, a residual reconstructor 16, a data stream inserter 18 and a block divider 20. The encoder 10 is for encoding a spatiotemporally sampled information signal into a data stream 22. The spatiotemporally sampled information signal may be, for example, a video, i.e. a sequence of images. Each image represents an array of image samples. Other examples of spatiotemporal information signals comprise, for example, depth images captured by, for example, time-of-flight cameras.
Furthermore, it should be noted that a spatially sampled information signal may comprise more than one array per frame or time stamp, such as in the case of color video, which comprises, for example, an array of luma samples along with two arrays of chroma samples per image. It may also be possible that the temporal sampling rate for the different components of the information signal, i.e. luma and chroma, differs. The same applies to the spatial resolution. A video may also be accompanied by further spatially sampled information, such as depth or transparency information. The following description, however, will first focus on the processing of one of these arrays for the sake of a better understanding of the main aspects of the present application, turning then to the handling of more than one plane.

The encoder 10 of Fig. 1 is configured to create the data stream 22 such that the syntax elements of the data stream 22 describe the images at a granularity lying between whole images and individual image samples. To this end, the divider 20 is configured to subdivide each image 24 into simply connected regions 26 of different sizes. In the following, these regions will simply be called blocks or sub-regions 26. As will be described in more detail below, the divider 20 uses a multi-tree subdivision in order to subdivide the image 24 into the blocks 26 of different sizes. To be even more precise, the specific embodiments described below with respect to Figs. 1 to 11 mostly use a quadtree subdivision. As will also be explained in more detail below, the divider 20 may internally comprise a concatenation of a subdivider 28 for subdividing the images 24 into the aforementioned blocks 26, followed by a merger 30 which allows combining groups of these blocks 26 in order to obtain an effective subdivision or granularity lying between the non-subdivision of the images 24 and the subdivision defined by the subdivider 28.

As illustrated by the dashed lines in Fig. 1, the predictor 12, the residual precoder 14, the residual reconstructor 16 and the data stream inserter 18 operate on image subdivisions defined by the divider 20. For example, as will be described in more detail below, the predictor 12 uses a prediction subdivision defined by the divider 20 in order to determine, for the individual sub-regions of the prediction subdivision, whether the respective sub-region is to be subjected to intra-picture prediction or inter-picture prediction, with the setting of the corresponding prediction parameters for the respective sub-region in accordance with the chosen prediction mode. The residual precoder 14, in turn, may use a residual subdivision of the images 24 in order to encode the prediction residual of the images 24 provided by the predictor 12. As the residual reconstructor 16 reconstructs the residual from the syntax elements output by the residual precoder 14, the residual reconstructor 16 also operates on the residual subdivision just mentioned. The data stream inserter 18 may exploit the subdivisions just mentioned, i.e. the prediction and residual subdivisions, in order to determine the insertion order and the neighborhoods among the syntax elements for the insertion of the syntax elements output by the residual precoder 14 and the predictor 12 into the data stream 22 by means of, for example, entropy coding.

As shown in Fig. 1, the encoder 10 comprises an input 32 where the original information signal enters the encoder 10.
A subtractor 34, the residual precoder 14 and the data stream inserter 18 are connected in series, in the order mentioned, between the input 32 and the output of the data stream inserter 18, at which the coded data stream 22 is output. The subtractor 34 and the residual precoder 14 are part of a prediction loop which is closed by the residual reconstructor 16, an adder 36 and the predictor 12, which are connected in series in the order mentioned between the output of the residual precoder 14 and the inverting input of the subtractor 34. The output of the predictor 12 is also connected to the further input of the adder 36. In addition, the predictor 12 comprises an input directly connected to the input 32 and may comprise a further input also connected to the output of the adder 36 via an optional in-loop filter 38. Furthermore, the predictor 12 generates side information during operation, and therefore an output of the predictor 12 is also coupled to the data stream inserter 18. Likewise, the divider 20 comprises an output which is connected to another input of the data stream inserter 18.

Having described the structure of the encoder 10, its mode of operation is described in more detail below. As described above, the divider 20 decides for each image 24 how to subdivide it into sub-regions 26. In accordance with a subdivision of the image 24 to be used for prediction, the predictor 12 decides, for each sub-region corresponding to this subdivision, how to predict the respective sub-region. The predictor 12 outputs the prediction of the sub-region to the inverting input of the subtractor 34 and to the further input of the adder 36, and outputs prediction information, reflecting how the predictor 12 obtained this prediction from previously encoded portions of the video, to the data stream inserter 18.

At the output of the subtractor 34, the prediction residual is thus obtained, and the residual precoder 14 processes this prediction residual in accordance with a residual subdivision also prescribed by the divider 20. As described in more detail below with respect to Figs. 3 to 10, the residual subdivision of the image 24 used by the residual precoder 14 may be related to the prediction subdivision used by the predictor 12 such that each prediction sub-region is either adopted as a residual sub-region or further subdivided into smaller residual sub-regions. However, totally independent prediction and residual subdivisions would also be possible.

The residual precoder 14 subjects each residual sub-region to a transformation from the spatial to the spectral domain by a two-dimensional transform, followed by, or inherently involving, a quantization of the resulting transform coefficients of the resulting transform blocks, whereby distortion results from the quantization noise. The data stream inserter 18 may, for example, losslessly encode syntax elements describing the aforementioned transform coefficients into the data stream 22 by using, for example, entropy coding.

The residual reconstructor 16, in turn, reconverts, by using a requantization followed by a retransformation, the transform coefficients into a residual signal, whereby the residual signal is combined within the adder 36 with the prediction used by the subtractor 34 for obtaining the prediction residual, thereby obtaining a reconstructed portion or sub-region of the current image at the output of the adder 36.
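The closed prediction loop just described can be condensed into a few lines. The following Python sketch is a deliberately simplified stand-in of our own (a plain scalar rounder replaces the two-dimensional transform plus quantization); it shows why the predictor only ever works on samples the decoder can reproduce:

```python
# Simplified loop around subtractor 34, residual precoder 14, residual
# reconstructor 16 and adder 36; all names here are illustrative only.
def encode_block(original, prediction, qstep=8):
    residual = [o - p for o, p in zip(original, prediction)]      # subtractor 34
    levels = [round(r / qstep) for r in residual]                 # precoder 14
    recon_residual = [lv * qstep for lv in levels]                # reconstructor 16
    reconstruction = [p + r for p, r in zip(prediction, recon_residual)]  # adder 36
    return levels, reconstruction

levels, recon = encode_block([100, 104, 96, 90], [98, 98, 98, 98])
print(levels)  # coefficient levels handed to the data stream inserter
print(recon)   # reconstructed samples the predictor may use next
```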
The predictor 12 may use the reconstructed image sub-region directly for intra-prediction, that is, for predicting a certain prediction sub-region by extrapolation from previously reconstructed neighboring prediction sub-regions. However, an intra-prediction performed within the spectral domain, predicting the spectrum of the current sub-region from that of a neighboring one directly, would theoretically also be possible.

For inter-prediction, the predictor 12 may use previously encoded and reconstructed images in a version according to which they have been filtered by an optional in-loop filter 38. The in-loop filter 38 may, for example, comprise a deblocking filter or an adaptive filter having a transfer function adapted to advantageously shape the quantization noise mentioned before.

The predictor 12 chooses the prediction parameters revealing the manner of predicting a certain prediction sub-region by using a comparison with the original samples within the image 24. The prediction parameters may, as described in more detail below, comprise, for each prediction sub-region, an indication of the prediction mode, such as intra-picture prediction or inter-picture prediction. In the case of intra-picture prediction, the prediction parameters may also comprise an indication of the angle along which the edges within the sub-region to be intra-predicted mainly extend; in the case of inter-picture prediction, motion vectors, reference image indices and, possibly, higher-order motion transformation parameters; and, in the case of intra- and/or inter-picture prediction, optional filter information for filtering the reconstructed image samples based on which the current sub-region is predicted.

As will be described in more detail below, the aforementioned subdivisions defined by the divider 20 substantially influence the rate/distortion ratio maximally achievable by the residual precoder 14, the predictor 12 and the data stream inserter 18. In the case of a too fine subdivision, the prediction parameters 40 output by the predictor 12 to be inserted into the data stream 22 necessitate a too high coding rate, although the prediction obtained by the predictor 12 might be better and the residual signal to be coded by the residual precoder 14 might be smaller, so that it might be coded with fewer bits. In the case of a too coarse subdivision, the opposite applies. Furthermore, the thinking just mentioned also applies to the residual subdivision in a similar manner: a transformation of an image using a finer granularity of the individual transform blocks leads to a lower complexity for computing the transforms and an increased spatial resolution of the resulting transform. That is, smaller residual sub-regions enable the spectral distribution of the content within the individual residual sub-regions to be more consistent. However, the spectral resolution is reduced, and the ratio between significant and insignificant, i.e. quantized to zero, coefficients worsens. That is, the granularity of the transform should be adapted to the image content locally. Additionally, independent of the positive effect of a finer granularity, a finer granularity regularly increases the amount of side information necessary in order to indicate the subdivision chosen to the decoder.
As will be described in more detail below, the embodiments described below provide the encoder 10 with the ability to adapt the subdivisions very effectively to the content of the information signal to be encoded, and to signal the subdivisions to be used to the decoding side by instructing the data stream inserter 18 to insert the subdivision information into the coded data stream 22. Details are presented below.

However, before defining the subdivision of the divider 20 in more detail, a decoder according to an embodiment of the present application is described in more detail with respect to Fig. 2.

The decoder of Fig. 2 is indicated by reference sign 100 and comprises an extractor 102, a divider 104, a residual reconstructor 106, an adder 108, a predictor 110, an optional in-loop filter 112 and an optional post-filter 114. The extractor 102 receives the coded data stream at an input 116 of the decoder 100 and extracts therefrom subdivision information 118, prediction parameters 120 and residual data 122, which the extractor 102 outputs to the image divider 104, the predictor 110 and the residual reconstructor 106, respectively. The residual reconstructor 106 has an output connected to a first input of the adder 108. The other input of the adder 108 and its output are connected into a prediction loop into which the optional in-loop filter 112 and the predictor 110 are connected in series in the order mentioned, with a bypass path leading from the output of the adder 108 to the predictor 110 directly, similarly to the aforementioned connections between the adder 36 and the predictor 12 in Fig. 1, namely one for intra-picture prediction and the other one for inter-picture prediction. Either the output of the adder 108 or the output of the in-loop filter 112 may be connected to an output 124 of the decoder 100, where the reconstructed information signal is output to a reproduction device, for example. An optional post-filter 114 may be connected into the path leading to the output 124 in order to improve the visual impression of the reconstructed signal at the output 124.

Generally speaking, the residual reconstructor 106, the adder 108 and the predictor 110 act like the elements 16, 36 and 12 in Fig. 1. In other words, they emulate the operation of the aforementioned elements of Fig. 1. To this end, the residual reconstructor 106 and the predictor 110 are controlled by the prediction parameters 120 and the subdivision prescribed by the image divider 104 in accordance with the subdivision information 118 from the extractor 102, respectively, in order to predict the prediction sub-regions the same way as the predictor 12 did or decided to do, and in order to retransform the received transform coefficients at the same granularity as the residual precoder 14 did. The image divider 104, in turn, rebuilds the subdivisions chosen by the divider 20 of Fig. 1 in a synchronized manner, relying on the subdivision information 118. The extractor may, in turn, use the subdivision information in order to control the data extraction, e.g. in terms of context selection, neighborhood determination, probability estimation, parsing of the syntax of the data stream, etc.

Several deviations from the above embodiments may be made. Some are mentioned within the detailed description below with respect to the subdivision performed by the subdivider 28 and the merging performed by the merger 30, and others are described with respect to the subsequent Figs. 12 to 16.
In the absence of any obstacles, all of these deviations may be applied, individually or in subsets, to the above description of Fig. 1 and Fig. 2, respectively. For example, the dividers 20 and 104 may not determine merely a prediction subdivision and a residual subdivision per image. Rather, they may also determine a subdivision for the optional in-loop filters 38 and 112, respectively, either independently of, or dependent on, the other subdivisions for prediction or residual coding, respectively. Furthermore, the determination of the subdivision or subdivisions by these elements need not be performed on an image-by-image basis. Rather, a subdivision or subdivisions determined for a certain image may be reused or adopted for a certain number of following images, only then transferring a new subdivision.

In providing further details regarding the division of the images into sub-regions, the following description focuses first on the subdivision part for which the subdividers 28 and 104a assume responsibility. Then, the merging process, for which the merger 30 and the merger 104b assume responsibility, is described. Finally, inter-plane adaptation/prediction is described.

The manner in which the subdividers 28 and 104a divide the images is such that an image is divided into a number of blocks of possibly different sizes for the purpose of predictive and residual coding of the image or video data. As mentioned before, an image 24 may be available as one or more arrays of image sample values. In the case of the YUV/YCbCr color space, for example, the first array may represent the luma channel while the other two arrays represent the chroma channels. These arrays may have differing dimensions. All arrays may be grouped into one or more plane groups, with each plane group consisting of one or more consecutive planes, where each plane is contained in one and only one plane group. For each plane group, the following applies. The first array of a particular plane group may be called the primary array of this plane group. The possibly following arrays are subordinate arrays. The block division of the primary array may be done based on a quadtree approach as described below. The block division of the subordinate arrays may be derived based on the division of the primary array.

In accordance with the embodiments described below, the subdividers 28 and 104a are configured to divide the primary array into a number of equally sized square blocks, called tree blocks in the following. The edge length of the tree blocks is typically a power of two, such as 16, 32 or 64, when quadtrees are used. For completeness, however, it is noted that the use of other kinds of trees would be possible as well, such as binary trees or trees with any number of leaves. Moreover, the number of descendants of the tree may be varied depending on the level of the tree and depending on what signal the tree is representing. Besides this, as mentioned before, the sample array may also represent other information than video sequences, such as depth maps or light fields, respectively. For simplicity, the following description focuses on quadtrees as a representative example for multi-trees. Quadtrees are trees that have exactly four descendants at each internal node. Each of the tree blocks constitutes a primary quadtree together with subordinate quadtrees at each of the leaves of the primary quadtree. The primary quadtree determines the subdivision of a given tree block for prediction, while a subordinate quadtree determines the subdivision of a given prediction block for the purpose of residual coding.
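To make the primary-quadtree idea concrete, here is a small Python sketch of our own (not part of the patent text): a node either stays a leaf (a prediction block) or splits into four quadrants, and the split pattern below reproduces the example worked through in Figs. 3a to 3c under an assumed 64x64 tree block:

```python
# Quadtree node with (x, y, size) geometry; children in raster order
# (top-left, top-right, bottom-left, bottom-right); no children means leaf.
class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def split(self):
        half = self.size // 2
        self.children = [QuadNode(self.x,        self.y,        half),
                         QuadNode(self.x + half, self.y,        half),
                         QuadNode(self.x,        self.y + half, half),
                         QuadNode(self.x + half, self.y + half, half)]
        return self.children

def leaves(node):
    if not node.children:
        yield node
        return
    for child in node.children:
        yield from leaves(child)

# Figs. 3a-3c: split the root, then its bottom-left sub-block (152c), then
# that block's top-right sub-block (154b).
root = QuadNode(0, 0, 64)
subs = root.split()        # 152a-d
smalls = subs[2].split()   # 154a-d
smalls[1].split()          # 156a-d
print([(n.x, n.y, n.size) for n in leaves(root)])
```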
The root node of the primary quadtree corresponds to the complete tree block. For example, Fig. 3a shows a tree block 150. It should be recalled that each image is divided into a regular grid of rows and columns of such tree blocks 150 so that same, for example, cover the sample array without gaps. However, it should be noted that, for all the block subdivisions shown subsequently, the seamless subdivision without overlap is not critical. Rather, neighboring blocks may overlap each other as long as no leaf block is a proper sub-portion of a neighboring leaf block.

Along with the quadtree structure for a tree block 150, each node may be further divided into four descendant nodes, which, in the case of the primary quadtree, means that each tree block 150 may be split into four sub-blocks with half the width and half the height of the tree block 150. In Fig. 3a, these sub-blocks are indicated with the reference signs 152a to 152d. In the same way, each of these sub-blocks may be further divided into four smaller sub-blocks with half the width and half the height of the original sub-blocks. In Fig. 3b, this is shown exemplarily for the sub-block 152c, which is subdivided into four small sub-blocks 154a to 154d. So far, Figs. 3a to 3c show how the tree block 150 is first divided into its four sub-blocks 152a to 152d, then the bottom-left sub-block 152c is further divided into four small sub-blocks 154a to 154d and, finally, as shown in Fig. 3c, the top-right block 154b of these small sub-blocks is once more divided into four blocks having one eighth of the width and height of the original tree block 150, these even smaller blocks being denoted 156a to 156d.

Fig. 4 shows the underlying tree structure for the exemplary quadtree-based division as shown in Figs. 3a-3c. The numbers beside the tree nodes are the values of the so-called subdivision flag, which will be explained in detail later when discussing the signaling of the quadtree structure. The root node of the quadtree is depicted at the top of the figure (labeled "Level 0"). The four branches at level 1 of this root node correspond to the four sub-blocks shown in Fig. 3a. Since the third of these sub-blocks is further subdivided into its four sub-blocks in Fig. 3b, the third node at level 1 in Fig. 4 also has four branches. Again, corresponding to the subdivision of the second (top-right) descendant node in Fig. 3c, there are four sub-branches connected to the second node at level 2 of the quadtree hierarchy. The nodes at level 3 are not subdivided any further.

Each leaf of the primary quadtree corresponds to a variable-size block for which individual prediction parameters (i.e. intra- or inter-prediction mode, motion parameters, etc.) may be specified. In the following, these blocks are called prediction blocks. In particular, these leaf blocks are the blocks shown in Fig. 3c. Referring briefly back to the description of Figs. 1 and 2, the divider 20 or the subdivider 28 determines the quadtree subdivision as just explained. The subdivider 28 decides which of the tree blocks 150, sub-blocks 152a-d, small sub-blocks 154a-d and so on, to subdivide or partition further, in order to find an optimal compromise between a too fine and a too coarse prediction subdivision, as already indicated above.
The predictor 12, in turn, uses the prescribed prediction subdivision in order to determine the prediction parameters mentioned above at a granularity depending on the prediction subdivision, e.g. for each of the prediction sub-regions represented by the blocks shown in Fig. 3c.

The prediction blocks shown in Fig. 3c may be further divided into smaller blocks for the purpose of residual coding. For each prediction block, i.e. for each leaf node of the primary quadtree, the corresponding subdivision is determined by one or more subordinate quadtree(s) for residual coding. For example, when allowing a maximum residual block size of 16x16, a given 32x32 prediction block would be divided into four 16x16 blocks, each of which is determined by a subordinate quadtree for residual coding. Each 16x16 block in this example corresponds to the root node of a subordinate quadtree.

As described for the case of the subdivision of a given tree block into prediction blocks, each prediction block may be divided into a number of residual blocks by using the subordinate quadtree decomposition(s). Each leaf of a subordinate quadtree corresponds to a residual block for which individual residual coding parameters (i.e. transform mode, transform coefficients, etc.) may be specified by the residual precoder 14, which residual coding parameters control, in turn, the residual reconstructors 16 and 106, respectively.

In other words, the subdivider 28 may be configured to determine, for each image or for each group of images, a prediction subdivision and a subordinate residual subdivision by first dividing the image into a regular arrangement of tree blocks 150, recursively partitioning a subset of these tree blocks by quadtree subdivision in order to obtain the prediction subdivision into prediction blocks - which may be tree blocks, if no partitioning took place at the respective tree block, or the leaf blocks of the quadtree subdivision - and then further subdividing a subset of these prediction blocks in a similar manner by, if a prediction block is larger than the maximum size of the subordinate residual subdivision, first dividing the respective prediction block into a regular arrangement of sub-tree blocks and then subdividing a subset of these sub-tree blocks in accordance with the quadtree subdivision procedure in order to obtain the residual blocks - which may be prediction blocks, if no division into sub-tree blocks took place at the respective prediction block, sub-tree blocks, if no division into even smaller regions took place at the respective sub-tree block, or the leaf blocks of the residual quadtree subdivision.

As briefly outlined above, the subdivisions chosen for a primary array may be mapped onto the subordinate arrays. This is easy when considering subordinate arrays of the same dimension as the primary array. However, special measures have to be taken when the dimensions of the subordinate arrays differ from the dimension of the primary array. Generally speaking, the mapping of the primary array subdivision onto the subordinate arrays in case of differing dimensions may be done by spatial mapping, i.e., by spatially mapping the block boundaries of the primary array subdivision onto the subordinate arrays. In particular, for each subordinate array, there may be a scaling factor in the horizontal and the vertical direction which determines the ratio of the dimension of the primary array to that of the subordinate array.
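A minimal sketch of that spatial mapping, under assumed integer scaling factors (our own illustration): 4:2:0 chroma sub-sampling corresponds to factors (2, 2), 4:2:2 to (2, 1), and unequal factors turn square blocks into non-square mapped blocks, as discussed next.

```python
# Map a primary-array block (x, y, w, h) onto a subordinate array using the
# horizontal/vertical scaling factors sx and sy; integer factors assumed.
def map_block(x, y, w, h, sx, sy):
    return (x // sx, y // sy, w // sx, h // sy)

print(map_block(32, 16, 16, 16, sx=2, sy=2))  # 4:2:0 -> (16, 8, 8, 8), square
print(map_block(32, 16, 16, 16, sx=2, sy=1))  # 4:2:2 -> (16, 16, 8, 16), 8x16
```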
The division of each subordinate array into sub-blocks for prediction and residual coding may be determined by the primary quadtree and the subordinate quadtree(s) of each of the co-located tree blocks of the primary array, respectively, with the resulting tree blocks of the subordinate array being scaled by the relative scaling factor. In case the scaling factors in the horizontal and the vertical direction differ (as, for example, in 4:2:2 chroma sub-sampling), the resulting prediction and residual blocks of the subordinate array would not be square anymore. In this case, it is possible to either predetermine, or to adaptively select (either for the whole sequence, one image out of the sequence, or for each single prediction or residual block), whether the non-square residual block shall be split into square blocks. In the first case, for example, the encoder and the decoder could agree on a subdivision into square blocks each time a mapped block is not square. In the second case, the subdivider 28 would signal the selection via the data stream inserter 18 and the data stream 22 to the subdivider 104a. For example, in the case of 4:2:2 chroma sub-sampling, where the subordinate arrays have half the width but the same height as the primary array, the residual blocks would be twice as high as wide. By splitting this block vertically, one would obtain two square blocks again.

As mentioned above, the subdivider 28 or the divider 20, respectively, signals the quadtree-based division via the data stream 22 to the subdivider 104a. To this end, the subdivider 28 informs the data stream inserter 18 about the subdivisions chosen for the images 24. The data stream inserter, in turn, transmits the structure of the primary and secondary quadtrees, and therefore the division of the image array into variable-size blocks for prediction or residual coding, within the data stream or bitstream 22, respectively, to the decoding side.

The minimum and maximum admissible block sizes are transmitted as side information and may change from image to image. Alternatively, the minimum and maximum admissible block sizes may be fixed in the encoder and decoder. These minimum and maximum block sizes may be different for prediction blocks and residual blocks. For the signaling of the quadtree structure, the quadtree has to be traversed, and for each node it has to be specified whether this particular node is a leaf node of the quadtree (i.e. the corresponding block is not subdivided any further) or whether it branches into its four descendant nodes (i.e. the corresponding block is divided into four sub-blocks of half the size).

The signaling within one image is done tree block by tree block in a raster scan order, such as from left to right and from top to bottom, as illustrated at 140 in Fig. 5a. This scan order could also be different, such as from bottom right to top left, or in a checkerboard sense. In a preferred embodiment, each tree block, and therefore each quadtree, is traversed in depth-first order for signaling the subdivision information.

In a preferred embodiment, not only the subdivision information, i.e. the structure of the tree, but also the prediction data, etc., i.e. the payload associated with the leaf nodes of the tree, is transmitted/processed in depth-first order. This is done because the depth-first traversal order has great advantages over the breadth-first order. In Fig. 5b, a quadtree structure is presented with the leaf nodes labeled a to j. Fig. 5a shows the resulting block division.
If the leaf blocks/nodes are traversed in breadth-first order, the following order is obtained: abjchidefg. In depth-first order, however, the order is abc...ij. As can be seen from Fig. 5a, in depth-first order, the left neighbor block and the top neighbor block are always transmitted/processed before the current block. Thus, motion vector prediction and context modeling can always use the parameters specified for the left and top neighbor block in order to achieve an improved coding performance. For breadth-first order, this would not be the case, since block j is transmitted before blocks e, g and i, for example.

Consequently, the signaling for each tree block is done recursively along the quadtree structure of the primary quadtree such that, for each node, a flag is transmitted specifying whether the corresponding block is split into four sub-blocks. If this flag has the value "1" (for "true"), then this signaling process is repeated recursively for all four descendant nodes, i.e. the sub-blocks, in raster scan order (top-left, top-right, bottom-left, bottom-right) until the leaf node of the primary quadtree is reached. Note that a leaf node is characterized by having a subdivision flag with a value of "0". For the case that a node resides at the lowest hierarchy level of the primary quadtree and therefore corresponds to the smallest admissible prediction block size, no subdivision flag has to be transmitted.

For the example of Figs. 3a-c, one would first transmit "1", as shown at 190 in Fig. 6a, specifying that the tree block 150 is split into its four sub-blocks 152a-d. Then, one would recursively encode the subdivision information of all four sub-blocks 152a-d in raster scan order 200. For the first two sub-blocks 152a, 152b, one would transmit "0", specifying that they are not subdivided (see 202 in Fig. 6a). For the third sub-block 152c (bottom-left), one would transmit "1", specifying that this block is subdivided (see 204 in Fig. 6a). Now, according to the recursive approach, the four sub-blocks 154a-d of this block would be processed. Here, one would transmit "0" for the first sub-block (206) and "1" for the second (top-right) sub-block (208). Now, the four blocks of smallest block size 156a-d in Fig. 3c would be processed. In case the smallest admissible block size had already been reached in this example, no more data would have to be transmitted, since a further subdivision is not possible. Otherwise, "0000", specifying that none of these blocks is split any further, would be transmitted, as indicated at 210 in Fig. 6a. After this, "00" would be transmitted for the two lower blocks in Fig. 3b (see 212 in Fig. 6a), and finally "0" for the bottom-right block in Fig. 3a (see 214). So, the complete binary string representing the quadtree structure would be the one shown in Fig. 6a.

The different background shadings in this binary string representation of Fig. 6a correspond to different levels in the hierarchy of the quadtree-based subdivision. Shading 216 represents level 0 (corresponding to a block size equal to the original tree block size), shading 218 represents level 1 (corresponding to a block size equal to half the original tree block size), shading 220 represents level 2 (corresponding to a block size equal to one quarter of the original tree block size), and shading 222 represents level 3 (corresponding to a block size equal to one eighth of the original tree block size).
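The walk-through above maps directly onto a few lines of Python. This sketch of our own (a list of four stands for a split node, None for a leaf) reproduces the bit string of Fig. 6a for the subdivision of Figs. 3a-3c, under the assumption that level 4 would be the lowest, unsignaled hierarchy level:

```python
# Depth-first emission of subdivision flags: one flag per node ("1" = split,
# "0" = leaf), children visited in raster order; nodes at the lowest hierarchy
# level carry no flag, exactly as described above.
def encode_flags(node, level, lowest_level, bits):
    bits.append("1" if node else "0")       # node: list of 4 children, or None
    if node:
        for child in node:
            if level + 1 < lowest_level:    # smallest blocks need no flag
                encode_flags(child, level + 1, lowest_level, bits)
    return bits

leaf = None
tree_150 = [leaf, leaf,                                     # 152a, 152b
            [leaf, [leaf, leaf, leaf, leaf], leaf, leaf],   # 152c with 154a-d
            leaf]                                           # 152d
print("".join(encode_flags(tree_150, 0, 4, [])))            # -> 1001010000000
```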
All subdivision flags of the same hierarchy level (corresponding to the same block size and the same shading in the exemplary binary sequence representation) may be entropy encoded, by data stream inserter 18, using one and the same probability model, for example. Note that, in the case of a breadth-first traversal, the subdivision information would be transmitted in a different order, shown in fig. 6b. Similarly to the subdivision of each tree block for prediction purposes, the division of each resulting prediction block into residual blocks has to be transmitted in the bit stream. Also, there may be a maximum and a minimum block size for the residual coding, transmitted as side information, which may change from picture to picture. Alternatively, the maximum and minimum block sizes for the residual coding may be fixed in encoder and decoder. These minimum and maximum block sizes may be different for prediction and residual blocks. At each leaf node of the primary quadtree, as shown in fig. 3c, the corresponding prediction block may be divided into residual blocks of the maximum admissible size. These blocks are the constituent root nodes of the subordinate quadtree structure for the residual coding. For example, if the maximum residual block size for the picture is 64x64 and the prediction block is of size 32x32, then the whole prediction block would correspond to one subordinate (residual) quadtree root node of size 32x32. On the other hand, if the maximum residual block size for the picture is 16x16, then the 32x32 prediction block would consist of four residual quadtree root nodes, each of size 16x16. Within each prediction block, the signaling of the subordinate quadtree structure is done root node by root node in raster scan order (left to right, top to bottom). As in the case of the primary (prediction) quadtree structure, for each node a flag is coded, specifying whether this particular node is split into its four descendant nodes. Then, if this flag has a value of "1", this procedure is repeated recursively for all four corresponding descendant nodes and their corresponding sub-blocks in raster scan order (top-left, top-right, bottom-left, bottom-right) until a leaf node of the subordinate quadtree is reached. As in the case of the primary quadtree, no signaling is required for nodes at the lowest hierarchy level of the subordinate quadtree, since those nodes correspond to blocks of the smallest possible residual block size, which cannot be divided any further. For entropy coding, the residual block subdivision flags belonging to residual blocks of the same block size may be encoded using one and the same probability model. Thus, in accordance with the example presented above with respect to figs. 3a to 6a, subdivider 28 has defined a primary subdivision for prediction purposes and a subordinate subdivision, of blocks of different sizes, of the primary subdivision for residual coding purposes. Data stream inserter 18 coded the primary subdivision by signaling, for each tree block in scan order, a bit sequence constructed in accordance with fig. 6a, along with coding the maximum primary block size and the maximum hierarchy level of the primary subdivision. For each prediction block thus defined, the associated prediction parameters have been included in the data stream. Furthermore, a coding of similar information, i.e. maximum size, maximum hierarchy level and bit sequences in accordance with fig. 6a, took place for each prediction block whose size was equal to or smaller than the maximum size for the residual subdivision, and for each residual tree root block into which prediction blocks exceeding the maximum size defined for the residual blocks have been pre-divided.
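The relation between a prediction block and its residual tree root blocks described above (one 32x32 root node if the maximum residual block size is 64x64, or four 16x16 root nodes if it is 16x16) may be sketched as follows, with hypothetical names:

```python
def residual_root_nodes(pred_size, max_residual_size):
    """Return (number of residual quadtree root nodes, root node size)
    covering a square prediction block of side pred_size."""
    root = min(pred_size, max_residual_size)
    per_axis = max(pred_size // max_residual_size, 1)
    return per_axis * per_axis, root

print(residual_root_nodes(32, 64))  # -> (1, 32): one 32x32 root node
print(residual_root_nodes(32, 16))  # -> (4, 16): four 16x16 root nodes
```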
For each residual block thus defined, the residual data has been inserted into the data stream. Extractor 102 extracts the respective bit sequences from the data stream at input 116 and informs subdivider 104a about the subdivision information thus obtained. Furthermore, data stream inserter 18 and extractor 102 may use the aforementioned order among prediction blocks and residual blocks for transmitting further syntax elements, such as the residual data output by residual precoder 14 and the prediction parameters output by predictor 12. Using this order has the advantage that the appropriate contexts for encoding the individual syntax elements of a certain block may be chosen by exploiting already coded/decoded syntax elements of neighboring blocks. Moreover, residual precoder 14 and predictor 12, as well as residual reconstructor 106 and predictor 110, may process the individual prediction and residual blocks in the order described above. Fig. 7 shows a flow diagram of steps that may be performed by extractor 102 in order to extract the subdivision information from data stream 22 when coded in the manner described above. In a first step, extractor 102 divides the picture 24 into tree root blocks 150. This step is indicated as step 300 in fig. 7. Step 300 may involve extractor 102 extracting the maximum prediction block size from data stream 22. Additionally or alternatively, step 300 may involve extractor 102 extracting the maximum hierarchy level from data stream 22. Then, in step 302, extractor 102 decodes a flag or bit from the data stream. The first time step 302 is performed, extractor 102 knows that the respective flag is the first flag of the bit sequence belonging to the first tree root block 150 in the tree root block scan order 140. As this flag is a flag of hierarchy level 0, extractor 102 may use, for context modeling in step 302, a context associated with hierarchy level 0 in order to determine a context. Each context may have a respective probability estimate for entropy decoding the flag associated with it. The probability estimates of the individual contexts may be adapted to the respective context symbol statistics. For example, in order to determine an appropriate context for decoding the hierarchy level 0 flag in step 302, extractor 102 may select one context of a set of contexts associated with hierarchy level 0, depending on the hierarchy level 0 flags of neighboring tree blocks or, even further, depending on information contained within the bit strings defining the quadtree subdivision of neighboring tree blocks, such as the top and left neighboring tree blocks, of the currently processed tree block. In the next step, namely step 304, extractor 102 checks whether the recently decoded flag suggests a partitioning. If this is the case, extractor 102 partitions the current block (presently a tree block), or indicates this partitioning to subdivider 104a, in step 306, and checks, in step 308, whether the current hierarchy level equals the maximum hierarchy level minus one.
Extractor 102 may, for example, also have the maximum hierarchy level extracted from the data stream in step 300. If the current hierarchy level is unequal to the maximum hierarchy level minus one, extractor 102 increases the current hierarchy level by 1 in step 310 and steps back to step 302 to decode the next flag from the data stream. This time, the flags to be decoded in step 302 belong to another hierarchy level and, therefore, in accordance with an embodiment, extractor 102 may select one of different sets of contexts, namely the set belonging to the current hierarchy level. The selection may also be based on subdivision bit sequences according to fig. 6a of neighboring tree blocks that have already been decoded. If a flag is decoded and the check in step 304 reveals that this flag does not suggest a partitioning of the current block, extractor 102 proceeds with step 312 to check whether the current hierarchy level is 0. If this is the case, extractor 102 proceeds, in step 314, with processing the next tree root block in scan order 140, or stops processing the extraction of the subdivision information if no tree root block is left to be processed. It should be noted that the description of fig. 7 focuses on the decoding of the subdivision indication flags of the prediction subdivision only, so that, in fact, step 314 may involve the decoding of further bins or syntax elements pertaining, for example, to the current tree block. In any case, if a further or next tree root block exists, extractor 102 proceeds from step 314 to step 302 to decode the next flag of the subdivision information, namely the first flag of the flag sequence regarding the new tree root block. If, in step 312, the hierarchy level turns out to be unequal to 0, the operation proceeds in step 316 with a check as to whether further descendant nodes pertaining to the current node exist. That is, when extractor 102 performs the check in step 316, it has already been checked in step 312 that the current hierarchy level is a hierarchy level other than 0. This, in turn, means that a parent node exists, which belongs to a tree root block 150 or one of the smaller blocks 152a-d, or even smaller blocks 154a-d, and so forth. The node of the tree structure to which the recently decoded flag belongs has a parent node, which is common to three further nodes of the current tree structure. The scan order among descendant nodes having a common parent node has been illustrated exemplarily for hierarchy level 0 in fig. 3a, at reference sign 200. Thus, in step 316, extractor 102 checks whether all of these four descendant nodes have already been visited within the process of fig. 7. If this is not the case, i.e. if there are further descendant nodes of the current parent node, the process of fig. 7 proceeds with step 318, where the next descendant node in raster scan order 200 within the current hierarchy level is visited, so that its corresponding sub-block now represents the current block of the process of fig. 7, and, thereafter, a flag is decoded in step 302 from the data stream regarding the current block or current node. If, however, there are no further descendant nodes of the current parent node in step 316, the process of fig. 7 proceeds to step 320, where the current hierarchy level is decreased by 1, whereupon the process proceeds with step 312.
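The flag-parsing procedure of fig. 7 may be sketched as follows; the sketch is written recursively rather than with the explicit backtracking of steps 312 to 320, but consumes the flags in the same depth-first order, and context modeling is reduced to a per-hierarchy-level counter. All names are hypothetical:

```python
def parse_subdivision(bits, level, max_level, ctx_use):
    """Parse one node: no flag at the lowest level; otherwise read a flag
    (context chosen per hierarchy level, cf. step 302) and, on "1",
    recurse into the four descendants in raster scan order."""
    if level == max_level:
        return {"split": False}
    ctx_use[level] = ctx_use.get(level, 0) + 1   # per-level context use
    if next(bits) == "0":                        # leaf node (step 304)
        return {"split": False}
    return {"split": True,                       # step 306: partition
            "children": [parse_subdivision(bits, level + 1, max_level, ctx_use)
                         for _ in range(4)]}

ctx_use = {}
tree = parse_subdivision(iter("100101000"), 0, 3, ctx_use)
print(ctx_use)   # -> {0: 1, 1: 4, 2: 4}: flags decoded per hierarchy level
```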
By carrying out the steps shown in fig. 7, extractor 102 and subdivider 104a cooperate to retrieve the subdivision chosen at the encoder side from the data stream. The process of fig. 7 concentrates on the above-described prediction subdivision. Fig. 8 shows, in combination with the flow diagram of fig. 7, how extractor 102 and subdivider 104a cooperate to retrieve the residual subdivision from the data stream. In particular, fig. 8 shows the steps performed by extractor 102 and subdivider 104a, respectively, for each prediction block resulting from the prediction subdivision. These prediction blocks are traversed, as mentioned above, in accordance with scan order 140 among the tree blocks 150 of the prediction subdivision and using, within each currently visited tree block 150, a depth-first traversal order for traversing the leaf blocks, as shown, for example, in fig. 3c. In accordance with the depth-first traversal order, the leaf blocks of partitioned primary tree blocks are visited such that the sub-blocks of a certain hierarchy level sharing a common current node are visited in raster scan order 200, with the subdivision of each of these sub-blocks being scanned completely before proceeding to the next sub-block in this scan order 200. For the example of fig. 3c, the resulting scan order among the leaf blocks of tree block 150 is shown at reference sign 350. For a currently visited prediction block, the process of fig. 8 starts at step 400. In step 400, an internal parameter denoting the current size of the current block is set equal to the size of hierarchy level 0 of the residual subdivision, i.e. the maximum block size of the residual subdivision. It should be recalled that the maximum residual block size may be smaller than the smallest block size of the prediction subdivision, or may be equal to or larger than the latter. In other words, in accordance with an embodiment, the encoder is free to choose any of the possibilities just mentioned. In the next step, namely step 402, a check is performed as to whether the block size of the currently visited prediction block is larger than the internal parameter denoting the current size. If this is the case, the currently visited prediction block, which may be a leaf block of the prediction subdivision or a tree block of the prediction subdivision that has not been partitioned any further, is larger than the maximum residual block size, and in this case the process of fig. 8 proceeds with step 300 of fig. 7. That is, the currently visited prediction block is divided into residual tree root blocks, and the first flag of the flag sequence of the first residual tree block within this currently visited prediction block is decoded in step 302, and so forth. If, however, the currently visited prediction block has a size equal to or smaller than the internal parameter denoting the current size, the process of fig. 8 proceeds to step 404, where the prediction block size is checked as to whether it equals the internal parameter denoting the current size. If this is the case, the division step 300 may be skipped and the process proceeds directly with step 302 of fig. 7. If, however, the prediction block size of the currently visited prediction block is smaller than the internal parameter denoting the current size, the process of fig. 8 proceeds to step 406.
In step 406, the hierarchy level is increased by 1 and the current size is set to the size of the new hierarchy level, i.e. divided by 2 (in both axis directions, in the case of quadtree subdivision). Thereafter, the check of step 404 is performed again. The effect of the loop formed by steps 404 and 406 is that the hierarchy level always corresponds to the size of the corresponding blocks to be partitioned, irrespective of whether the respective prediction block is smaller than, equal to, or greater than the maximum residual block size. Thus, when decoding the flags in step 302, the context modeling performed depends on the hierarchy level and the block size to which the flag refers, at the same time. Using different contexts for flags of different hierarchy levels or block sizes, respectively, is advantageous in that the probability estimates may fit well the actual probability distribution of the flag value occurrences while, on the other hand, requiring a relatively moderate number of contexts to be managed, thereby reducing the context management overhead and increasing the adaptation of the contexts to the actual symbol statistics. As noted above, there may be more than one sample array, and these sample arrays may be grouped into one or more plane groups. The input signal to be encoded, entering input 32, for example, may be a picture of a video sequence or a still image. The picture may thus be given in the form of one or more sample arrays. In the context of coding a picture of a video sequence or a still image, the sample arrays may refer to three color planes, such as red, green and blue planes, or to luminance and chrominance planes, as in YUV or YCbCr color representations. In addition, sample arrays representing alpha, i.e. transparency, and/or depth information for 3-D video material may be present as well. A number of these sample arrays may be grouped together as a so-called plane group. For example, luminance (Y) may be one plane group with only one sample array, and chrominance, such as CbCr, may be another plane group with two sample arrays; or, in another example, YUV may be one plane group with three sample arrays, and depth information for 3-D video material may be a different plane group with a single sample array. For each plane group, one primary quadtree structure may be coded within data stream 22 for representing the division into prediction blocks and, for each prediction block, a secondary quadtree structure representing the division into residual blocks. Thus, in accordance with the first example just mentioned, where the luminance component is one plane group while the chrominance components form the other plane group, there would be one quadtree structure for the luminance plane prediction blocks, one quadtree structure for the luminance plane residual blocks, one quadtree structure for the chrominance plane prediction blocks and one quadtree structure for the chrominance plane residual blocks. In the second example mentioned above, however, there would be one quadtree structure for the luminance and chrominance prediction blocks together (YUV), one quadtree structure for the luminance and chrominance residual blocks together (YUV), one quadtree structure for the prediction blocks of the depth information of the 3-D video material and one quadtree structure for the residual blocks of the depth information of the 3-D video material.
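Returning to steps 400 to 406 of fig. 8, the level/size bookkeeping just described may be sketched as follows, under the simplifying assumption of square blocks; all names are hypothetical:

```python
def align_residual_level(pred_size, max_residual_size):
    """Adjust hierarchy level and current size to the visited prediction
    block, per steps 400-406 of fig. 8 (simplified to square blocks)."""
    level, size = 0, max_residual_size   # step 400
    if pred_size > size:                 # step 402: block exceeds the
        return level, size               # maximum -> split into root blocks
    while size != pred_size:             # step 404
        level += 1                       # step 406: level up, size halved
        size //= 2
    return level, size

print(align_residual_level(8, 32))   # -> (2, 8)
print(align_residual_level(64, 32))  # -> (0, 32): handled via root splitting
```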
Furthermore, in the above description, the input signal was divided into prediction blocks using a primary quadtree structure, and it was described how these prediction blocks were further subdivided into residual blocks using a subordinate quadtree structure. In accordance with an alternative embodiment, the subdivision need not end at the subordinate quadtree stage. That is, the blocks obtained from a division using the subordinate quadtree structure may be further subdivided using a tertiary quadtree structure. This division, in turn, may be used for the purpose of employing further coding tools that facilitate encoding the residual signal. The above description has focused on the subdivision performed by subdivider 28 and subdivider 104a, respectively. As mentioned above, the subdivision defined by subdividers 28 and 104a, respectively, may control the processing granularity of the aforementioned modules of encoder 10 and decoder 100. However, in accordance with the embodiments described below, subdividers 28 and 104a, respectively, are followed by a merger 30 and 104b, respectively. It should be noted, however, that mergers 30 and 104b are optional and may be left out. In effect, however, and as will be described in more detail below, the merger provides the encoder with the opportunity to combine some of the prediction blocks or residual blocks into groups or clusters, so that the other modules, or at least some of them, may handle these groups of blocks together. For example, predictor 12 may sacrifice the small deviations between the prediction parameters of some prediction blocks, as determined by optimization using the subdivision of subdivider 28, and use prediction parameters common to all of these blocks instead, if the signaling of the grouping of the prediction blocks together with a common parameter transmission for all blocks belonging to this group is more promising in a rate/distortion sense than individually signaling the prediction parameters for all of these prediction blocks. The processing of retrieving the prediction in predictors 12 and 110 themselves, based on these common prediction parameters, may, however, still take place block by block. It is, however, also possible that predictors 12 and 110 perform the prediction process at once for the whole group of prediction blocks. As will be described in more detail below, it is also possible that the grouping of prediction blocks is not only used for adopting the same or common prediction parameters for a group of prediction blocks but, alternatively or additionally, enables encoder 10 to send one prediction parameter for this group together with prediction residuals for the prediction blocks belonging to this group, so that the signaling overhead for signaling the prediction parameters for this group may be reduced. In the latter case, the merging process may merely influence data stream inserter 18 rather than the decisions made by residual precoder 14 and predictor 12. More details are presented below. For completeness, however, it should be noted that the aspect just mentioned also applies to the other subdivisions, such as the residual subdivision or the filter subdivision mentioned above. Firstly, the merging of sets of samples, such as the aforementioned prediction blocks and residual blocks, is motivated in a more general sense, i.e. not limited to the above-mentioned multi-tree subdivision.
Subsequently, however, the description focuses on the merging of blocks resulting from multi-tree subdivision, for which embodiments have just been described above. Generally speaking, merging the syntax elements associated with particular sets of samples for the purpose of transmitting associated coding parameters allows reducing the side information rate in image and video coding applications. For example, the sample arrays of the signal to be encoded are usually partitioned into particular sets of samples, or sample sets, which may represent rectangular or square blocks, or any other collection of samples, including arbitrarily shaped regions, triangles or other shapes. In the above-described embodiments, the simply connected regions were the prediction blocks and the residual blocks resulting from the multi-tree subdivision. The subdivision of the sample arrays may be fixed by the syntax or, as described above, the subdivision may be, at least partially, signaled within the bit stream. To keep the side information rate for signaling the subdivision information small, the syntax usually allows only a limited number of choices, resulting in simple partitionings, such as the subdivision of blocks into smaller blocks. The sample sets are associated with particular coding parameters, which may specify prediction information or residual coding modes, etc. Details on this issue are described above. For each sample set, individual coding parameters, such as for specifying the prediction and/or the residual coding, may be transmitted. In order to achieve improved coding efficiency, the merging aspect described below, i.e. merging two or more sample sets into so-called groups of sample sets, enables certain advantages, which are described further below. For example, sample sets may be merged such that all sample sets of such a group share the same coding parameters, which can be transmitted together with one of the sample sets of the group. By doing so, the coding parameters do not have to be transmitted for each sample set of the group of sample sets individually; instead, the coding parameters are transmitted only once for the whole group of sample sets. As a result, the side information rate for transmitting the coding parameters may be reduced and the overall coding efficiency may be improved. As an alternative approach, an additional refinement of one or more of the coding parameters may be transmitted for one or more of the sample sets of a group of sample sets. The refinement may be applied either to all sample sets of a group or only to the sample set for which it is transmitted. The merging aspect described below also provides the encoder with greater freedom in creating the bit stream 22, since the merging approach significantly increases the number of possibilities for selecting a partitioning for the sample arrays of a picture. Since the encoder can choose between more options, such as for minimizing a particular rate/distortion measure, the coding efficiency may be improved. There are several possibilities of operating an encoder. In a simple approach, the encoder could first determine the best subdivision of the sample arrays. Briefly referring to fig. 1, subdivider 28 could determine the optimal subdivision in a first stage. Subsequently, it could be checked, for each sample set, whether merging with another sample set or another group of sample sets reduces a particular rate/distortion cost measure.
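Such a simple encoder strategy may be sketched as follows, where rd_cost() is merely a stand-in for whatever rate/distortion cost measure the encoder applies; all names are hypothetical:

```python
def choose_merge(block, candidates, rd_cost):
    """After subdivision is fixed, pick the merge partner (or None) that
    minimizes the rate/distortion cost of coding this block."""
    best = (rd_cost(block, merge_with=None), None)   # explicit parameters
    for cand in candidates:
        cost = rd_cost(block, merge_with=cand)       # reuse cand's params,
        if cost < best[0]:                           # pay only merge flags
            best = (cost, cand)
    return best[1]   # None means "do not merge"

# Toy cost model: merging saves parameter bits but may add distortion.
def rd_cost(block, merge_with):
    return 10.0 if merge_with is None else 6.0 + abs(block - merge_with)

print(choose_merge(5, [4, 9], rd_cost))   # -> 4 (the cheapest candidate)
```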
In connection therewith, the prediction parameters associated with a group resulting from merging sample sets may be re-estimated, such as by performing a new motion search, or the prediction parameters already determined for the current sample set and the candidate sample set or group of sample sets for the merging may be evaluated for the considered group of sample sets. In a more comprehensive approach, a particular rate/distortion cost measure could be evaluated for additional candidate groups of sample sets. It should be noted that the merging approach described below does not change the processing order of the sample sets. That is, the merging concept can be implemented in a way that does not increase the delay, i.e. each sample set remains decodable at the same time instant as without using the merging approach. If, for example, the bit rate saved by reducing the number of coded prediction parameters is larger than the bit rate additionally spent for coding the merging information for indicating the merging to the decoding side, the merging approach described further below results in increased coding efficiency. It should further be mentioned that the described syntax extension for the merging provides the encoder with additional freedom in selecting the partitioning of a picture or plane group into blocks. In other words, the encoder is not restricted to first performing the subdivision and then checking whether some of the resulting blocks have the same or a similar set of prediction parameters. As a simple alternative, the encoder could first determine the subdivision in accordance with a rate/distortion cost measure, and then the encoder could check, for each block, whether a merging with one of its neighboring blocks or with the associated, already determined group of blocks reduces the rate/distortion cost measure. In connection therewith, the prediction parameters associated with the new group of blocks may be re-estimated, such as by performing a new motion search, or the prediction parameters already determined for the current block and the neighboring block or group of blocks may be evaluated for the new group of blocks. The merging information may be signaled on a block basis. Effectively, the merging may also be interpreted as an inference of the prediction parameters for a current block, wherein the inferred prediction parameters are set equal to the prediction parameters of one of the neighboring blocks. Alternatively, residuals may be transmitted for blocks within a group of blocks. Thus, the basic idea underlying the merging concept described below is to reduce the bit rate required for transmitting the prediction parameters or other coding parameters by merging neighboring blocks into a group of blocks, where each group of blocks is associated with a unique set of coding parameters, such as prediction parameters or residual coding parameters. The merging information is signaled within the bit stream in addition to the subdivision information, if present. The advantage of the merging concept is an increased coding efficiency resulting from a decreased side information rate for the coding parameters. It should be noted that the merging processes described here may also extend to dimensions other than the spatial dimensions. For example, a group of sample sets or blocks, respectively, lying within several different video pictures could be merged into one group of blocks. Merging could also be applied to 4-D compression and light-field coding.
Thus, briefly returning to the previous description of figs. 1 to 8, it should be noted that the merging process subsequent to the subdivision is advantageous independent of the specific way in which subdividers 28 and 104a, respectively, subdivide the pictures. To be more precise, the latter could also subdivide the pictures in a manner similar to, for example, H.264, i.e. by subdividing each picture into a regular arrangement of rectangular or square macroblocks of a predetermined size, such as 16 x 16 luminance samples or a size signaled within the data stream, each macroblock having certain coding parameters associated with it comprising, inter alia, partitioning parameters defining, for each macroblock, a partitioning into a regular sub-grid of 1, 2, 4 or some other number of partitions serving as the granularity for the prediction and for the corresponding prediction parameters in the data stream, as well as for defining the partitioning for the residual and the corresponding residual transformation granularity. In any case, merging provides the aforementioned, briefly discussed advantages, such as reducing the side information bit rate in image and video coding applications. A particular set of samples, which may represent a rectangular or square block or an arbitrarily shaped region, or any other collection of samples, such as any simply connected region, is normally associated with a particular set of coding parameters, and for each of the sample sets the coding parameters are included in the bit stream, the coding parameters representing, for example, prediction parameters which specify how the corresponding sample set is predicted using already coded samples. The partitioning of the sample arrays of a picture into sample sets may be fixed by the syntax or may be signaled by corresponding subdivision information within the bit stream. The coding parameters for a sample set may be transmitted in a predefined order, which is given by the syntax. In accordance with the merging functionality, merger 30 is able to signal, for a current sample set or current block, such as a prediction block or a residual block, that it is merged with one or more other sample sets into a group of sample sets. The coding parameters for a group of sample sets, therefore, need to be transmitted only once. In a particular embodiment, the coding parameters of a current sample set are not transmitted if the current sample set is merged with a sample set or an already existing group of sample sets for which the coding parameters have already been transmitted. Instead, the coding parameters for the current sample set are set equal to the coding parameters of the sample set or group of sample sets with which the current sample set is merged. As an alternative approach, an additional refinement of one or more of the coding parameters may be transmitted for a current sample set. The refinement may be applied either to all sample sets of a group or only to the sample set for which it is transmitted. In accordance with an embodiment, for each sample set, such as a prediction block as mentioned above, a residual block as mentioned above, or a leaf block of a multi-tree subdivision as mentioned above, the set of all previously coded/decoded sample sets is called the "set of causal sample sets". See, for example, fig. 3c. All blocks shown in this figure are the result of a certain subdivision, such as a prediction subdivision or a residual subdivision or any multi-tree subdivision, or the like, and the coding/decoding order defined among these blocks is indicated by arrow 350.
Considering a certain block among these blocks as the current sample set or current simply connected region, its set of causal sample sets consists of all the blocks preceding the current block in order 350. However, it should again be recalled that another subdivision not using multi-tree subdivision would be possible as well, as far as the following discussion of the merging principles is concerned. The sample sets that may be used for merging with a current sample set are called the "set of candidate sample sets" in the following, and this set is always a subset of the "set of causal sample sets". The way in which the subset is formed may be known to the decoder, or it may be specified within the data stream or bit stream from the encoder to the decoder. If a particular current sample set is coded/decoded and its set of candidate sample sets is not empty, it is signaled within the data stream at the encoder, or derived from the data stream at the decoder, whether the current sample set is merged with one sample set out of this set of candidate sample sets and, if so, with which of them. Otherwise, the merging cannot be used for this block, since the set of candidate sample sets is empty anyway. There are different ways of determining the subset of the set of causal sample sets which shall form the set of candidate sample sets. For example, the determination of the candidate sample sets may be based on a sample within the current sample set which is uniquely geometrically defined, such as the upper-left image sample of a rectangular or square block. Starting from this uniquely geometrically defined sample, a particular non-zero number of samples is determined, which represent direct spatial neighbors of this uniquely geometrically defined sample. For example, the particular non-zero number of samples comprises the top neighbor and the left neighbor of the uniquely geometrically defined sample of the current sample set, so that the non-zero number of neighboring samples may be, at the most, two, or one if one of the top and left neighbors is not available or lies outside the picture, or zero in case both neighbors are missing. The set of candidate sample sets may then be determined to encompass those sample sets that contain at least one of the just-mentioned non-zero number of neighboring samples. See, for example, fig. 9a. The sample set currently under consideration as the object of merging shall be block X, and its geometrically uniquely defined sample shall, exemplarily, be its upper-left sample, indicated at 400. The left and top neighboring samples of sample 400 are indicated at 402 and 404, respectively. The set of causal sample sets, or set of causal blocks, is shaded. Among these blocks, blocks A and B each comprise one of the neighboring samples 402 and 404 and, therefore, these blocks form the set of candidate blocks or the set of candidate sample sets. In accordance with another embodiment, the set of candidate sample sets determined for the sake of merging may additionally or exclusively include sample sets that contain a particular non-zero number of samples, which may be one or two, having the same spatial location but being contained in a different picture, namely, for example, a previously coded/decoded picture. For example, in addition to blocks A and B in fig. 9a, a block of a previously coded picture could be used, which comprises the sample at the same position as sample 400. By the way, it should be noted that merely the top neighboring sample 404, or merely the left neighboring sample 402, could be used to define the aforementioned non-zero number of neighboring samples.
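The derivation of the candidate set from the left and top neighboring sample positions 402 and 404 of fig. 9a may be sketched as follows; block_at() is an assumed helper mapping a sample position to the causal block containing it (or to none at picture borders), and the toy geometry merely mimics fig. 9a, with block "B" to the left of and block "A" above the current block X:

```python
def candidate_set(x0, y0, block_at):
    """Collect causal blocks covering the left (402) and top (404)
    neighbors of the upper-left sample (x0, y0) of the current block."""
    candidates = []
    for pos in [(x0 - 1, y0), (x0, y0 - 1)]:   # left, then top neighbor
        blk = block_at(*pos)
        if blk is not None and blk not in candidates:
            candidates.append(blk)
    return candidates   # at most two elements

def block_at(x, y):
    if x < 0 or y < 0:
        return None          # outside the picture: neighbor unavailable
    if y < 4:
        return "A"           # top neighboring block A
    return "B" if x < 8 else None   # left neighboring block B

print(candidate_set(8, 4, block_at))   # -> ['B', 'A']
```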
In general, the set of candidate sample sets may be derived from previously processed data within the current picture or within other pictures. The derivation may include spatial directional information, such as transform coefficients associated with a particular direction and image gradients of the current picture, or it may include temporal directional information, such as neighboring motion representations. From such data available at the receiver/decoder, and from other data and side information within the data stream, if present, the set of candidate sample sets may be derived. It should be noted that the derivation of the candidate sample sets is performed in parallel by both merger 30 at the encoder side and merger 104b at the decoder side. As just mentioned, either both may determine the set of candidate sample sets, independently of each other, in a predefined way known to both, or the encoder may signal hints within the bit stream which put merger 104b in a position to perform the derivation of these candidate sample sets in the same way as merger 30 at the encoder side has determined the set of candidate sample sets. As will be described in more detail below, merger 30 and data stream inserter 18 cooperate in order to transmit one or more syntax elements for each sample set, which specify whether the sample set is merged with another sample set, which, in turn, may be part of an already merged group of sample sets, and which of the set of candidate sample sets is employed for merging. Extractor 102, in turn, extracts these syntax elements and informs merger 104b accordingly. In particular, in accordance with the specific embodiment described below, one or two syntax elements are transmitted for specifying the merging information for a particular sample set. The first syntax element specifies whether the current sample set is merged with another sample set. The second syntax element, which is transmitted only if the first syntax element specifies that the current sample set is merged with another sample set, specifies which of the set of candidate sample sets is employed for merging. The transmission of the first syntax element may be suppressed if a derived set of candidate sample sets is empty. In other words, the first syntax element may only be transmitted if a derived set of candidate sample sets is not empty. The second syntax element may only be transmitted if a derived set of candidate sample sets contains more than one sample set, since, if only one sample set is contained in the set of candidate sample sets, a further selection is not possible anyway. Furthermore, the transmission of the second syntax element may be suppressed if the set of candidate sample sets comprises more than one sample set, but all of the sample sets of the set of candidate sample sets are associated with the same coding parameters. In other words, the second syntax element may only be transmitted if at least two sample sets of a derived set of candidate sample sets are associated with different coding parameters. Within the bit stream, the merging information for a sample set may be coded before the prediction parameters or other particular coding parameters associated with that sample set. The prediction or coding parameters may only be transmitted if the merging information signals that the current sample set is not to be merged with any other sample set.
The merging information for a certain sample set, i.e. a block, for example, may be coded after a proper subset of the prediction parameters or, in a more general sense, of the coding parameters associated with the respective sample set has been transmitted. The subset of prediction/coding parameters may consist of one or more reference picture indices, or of one or more components of a motion parameter vector, or of a reference index and one or more components of a motion parameter vector, etc. The already transmitted subset of prediction or coding parameters may be used for deriving a set of candidate sample sets out of a larger, preliminary set of candidate sample sets, which may have been derived as just described above. As an example, a difference measure or distance, according to a predetermined distance measure, between the already coded prediction and coding parameters of the current sample set and the corresponding prediction or coding parameters of the preliminary set of candidate sample sets may be calculated. Then, only those sample sets for which the calculated difference measure, or distance, is smaller than or equal to a predefined or derived threshold are included in the final, i.e. reduced, set of candidate sample sets. See, for example, fig. 9a. The current sample set shall be block X. A subset of the coding parameters pertaining to this block shall already have been inserted into data stream 22. Imagine, for example, that block X is a prediction block, in which case the proper subset of the coding parameters could be a subset of the prediction parameters for this block X, such as a subset out of a set comprising a reference picture index and motion mapping information, such as a motion vector. If block X were a residual block, the subset of coding parameters would be a subset of residual information, such as transform coefficients or a map indicating the positions of the significant transform coefficients within block X. Based on this information, both data stream inserter 18 and extractor 102 are able to determine a subset of blocks A and B, which form, in this specific embodiment, the previously mentioned preliminary set of candidate sample sets. In particular, since blocks A and B belong to the set of causal sample sets, their coding parameters are available to both encoder and decoder at the time the coding parameters of block X are currently coded/decoded. Therefore, the aforementioned comparison using the difference measure may be used to exclude any number of blocks from the preliminary set of candidate sample sets A and B. The resulting reduced set of candidate sample sets may then be used as described above, namely in order to determine whether a merge indicator indicating a merging is to be transmitted within, or extracted from, the data stream and, depending on the number of sample sets within the reduced set of candidate sample sets, whether a second syntax element is to be transmitted within, or extracted from, the data stream, the second syntax element indicating which of the sample sets within the reduced set of candidate sample sets shall be the partner block for the merging. The aforementioned threshold against which the aforementioned distances are compared may be fixed and known to both encoder and decoder, or may be derived based on the calculated distances, such as the median of the difference values, or some other central tendency, or the like.
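The trimming of the preliminary candidate set by means of a distance measure, as just described, may be sketched as follows, with the coding parameters reduced to a motion vector and an L1 distance chosen merely as an assumed example of the predetermined distance measure:

```python
def trim_candidates(current_mv, preliminary, threshold):
    """Keep only candidates whose already-coded motion vector lies within
    the threshold of the current block's already-transmitted vector."""
    def dist(a, b):   # L1 distance between two motion vectors
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return [name for name, mv in preliminary if dist(current_mv, mv) <= threshold]

preliminary = [("B", (3, 1)), ("A", (12, -7))]
print(trim_candidates((4, 2), preliminary, threshold=4))   # -> ['B']
```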
In the latter case, the reduced set of candidate sample sets would unavoidably be a proper subset of the preliminary set of candidate sample sets. Alternatively, only those sample sets are selected out of the preliminary set of candidate sample sets for which the distance according to the distance measure is minimized. Alternatively, exactly one sample set is selected out of the preliminary set of candidate sample sets using the aforementioned distance measure. In the latter case, the merging information would merely need to specify whether the current sample set is to be merged with the single candidate sample set or not. Thus, the set of candidate blocks could be formed or derived as described in the following with respect to fig. 9a. Starting from the upper-left sample position 400 of the current block X in fig. 9a, its left neighboring sample position 402 and its top neighboring sample position 404 are derived at the encoder and decoder sides. The set of candidate blocks can thus have only up to two elements, namely those blocks out of the shaded set of causal blocks in fig. 9a that contain one of the two sample positions, which, in the case of fig. 9a, are blocks B and A. Thus, the set of candidate blocks can only have the two directly neighboring blocks of the upper-left sample position of the current block as its elements. In accordance with another embodiment, the set of candidate blocks could be given by all blocks that have been coded before the current block and contain one or more samples representing direct spatial neighbors of any sample of the current block. The direct spatial neighborhood may be restricted to direct left neighbors and/or direct top neighbors and/or direct right neighbors and/or direct bottom neighbors of any sample of the current block. See, for example, fig. 9b, showing another block subdivision. In this case, the candidate blocks comprise four blocks, namely blocks A, B, C and D. Alternatively, the set of candidate blocks may, additionally or exclusively, include blocks that contain one or more samples located at the same position as any of the samples of the current block, but contained in a different, i.e. already coded/decoded, picture. Even alternatively, the candidate set of blocks represents a subset of the above-described sets of blocks, the subset being determined by the neighborhood in spatial or temporal direction. The subset of candidate blocks may be fixed, signaled or derived. The derivation of the subset of candidate blocks may consider decisions made for other blocks in the picture or in other pictures. As an example, blocks that are associated with the same or very similar coding parameters as other candidate blocks might not be included in the candidate set of blocks. The following description of an embodiment applies to the case where only the two blocks containing the left and top neighboring samples of the upper-left sample of the current block are considered as the maximum of potential candidates. If the set of candidate blocks is not empty, one flag called Merge_flag is signaled, specifying whether the current block is merged with any of the candidate blocks. If the Merge_flag equals 0 (for "false"), this block is not merged with one of its candidate blocks, and all coding parameters are transmitted ordinarily. If the Merge_flag equals 1 (for "true"), the following applies. If the set of candidate blocks contains one and only one block, this candidate block is used for merging.
Otherwise, the set of candidate blocks contains exactly two blocks. If the prediction parameters of these two blocks are identical, these prediction parameters are used for the current block. Otherwise (the two blocks have different prediction parameters), a flag called merge_left_flag is signaled. If merge_left_flag equals 1 (for "true"), the block containing the left neighboring sample position of the upper-left sample position of the current block is selected out of the set of candidate blocks. If merge_left_flag equals 0 (for "false"), the other (i.e. top neighboring) block out of the set of candidate blocks is selected. The prediction parameters of the selected block are used for the current block. Summarizing some of the above-described embodiments with respect to merging, reference is made to fig. 10, which shows steps performed by extractor 102 to extract the merging information from the data stream 22 entering input 116. The process starts at 450 with identifying the candidate blocks or sample sets for a current sample set or block. It should be recalled that the coding parameters for the blocks are transmitted within data stream 22 in a certain order and, accordingly, fig. 10 refers to the process of retrieving the merging information for the currently visited sample set or block. As mentioned above, the identification in step 450 may comprise an identification among previously decoded blocks, i.e. the causal set of blocks, based on neighborhood aspects. For example, those neighboring blocks may be appointed candidates which comprise certain neighboring samples adjacent, in space or time, to one or more geometrically predetermined samples of the current block X. Furthermore, the identifying step may comprise two stages, namely a first stage involving an identification, as just mentioned, based on the neighborhood, leading to a preliminary set of candidate blocks, and a second stage according to which merely those blocks are appointed candidates whose already transmitted coding parameters fulfill a certain relationship to a proper subset of the coding parameters of the current block X, which has already been decoded from the data stream before step 450. Next, the process proceeds to step 452, where it is determined whether the number of candidate blocks is greater than zero. If this is the case, a Merge_flag is extracted from the data stream in step 454. The extracting step 454 may involve entropy decoding. The context for entropy decoding the Merge_flag in step 454 may be determined based on syntax elements belonging to, for example, the set of candidate blocks or the preliminary set of candidate blocks, wherein the dependency on the syntax elements may be restricted to the information of whether the blocks belonging to the set of interest have been subject to merging or not. The probability estimate of the selected context may be adapted. If, however, the number of candidate blocks is determined to be zero in step 452, the process of fig. 10 proceeds with step 456, in which the coding parameters of the current block are extracted from the bit stream or, in the case of the above-mentioned two-stage identification alternative, the remaining coding parameters thereof, whereupon extractor 102 proceeds with processing the next block in the block scan order, such as the order 350 shown in fig. 3c.
Returning to step 454, the process proceeds, after the extraction in step 454, with step 458, with a check as to whether the extracted Merge_flag suggests the occurrence or the absence of a merging of the current block. If no merging shall take place, the process proceeds with the above-mentioned step 456. Otherwise, the process proceeds with step 460, including a check as to whether the number of candidate blocks equals one. If this is the case, the transmission of an indication of a certain candidate block among the candidate blocks is not necessary and, therefore, the process of fig. 10 proceeds with step 462, according to which the merging partner of the current block is set to be the only candidate block, whereupon, in step 464, the coding parameters of the merging partner block are used for adoption or prediction of the coding parameters or the remaining coding parameters of the current block. In case of adoption, the missing coding parameters of the current block are merely copied from the merging partner block. In the other case, i.e. in case of prediction, step 464 may involve a further extraction of residual data from the data stream, the residual data pertaining to the prediction residual of the missing coding parameters of the current block, and a combination of this residual data with the prediction of these missing coding parameters obtained from the merging partner block. If, however, the number of candidate blocks is determined to be greater than one in step 460, the process of fig. 10 advances to step 466, where a check is performed as to whether the coding parameters, or the interesting part of the coding parameters, namely the subpart relating to the part not yet transferred within the data stream for the current block, are identical to one another. If this is the case, these common coding parameters are set as the merging reference, or the candidate blocks are set as merging partners, in step 468, and the respective interesting coding parameters are used for adoption or prediction in step 464. It should be noted that the merging partner itself may have been a block for which merging was signaled. In this case, the adopted or predictively obtained coding parameters of that merging partner are used in step 464. Otherwise, however, i.e. in case the coding parameters are not identical, the process of fig. 10 advances to step 470, where a further syntax element is extracted from the data stream, namely the merge_left_flag. A separate set of contexts may be used for entropy decoding this flag. The set of contexts used for entropy decoding the merge_left_flag may also comprise merely one context. After step 470, the candidate block indicated by the merge_left_flag is set to be the merging partner in step 472 and used for adoption or prediction in step 464. After step 464, extractor 102 proceeds with handling the next block in block order. Of course, many alternatives exist. For example, a combined syntax element may be transmitted within the data stream instead of the separate syntax elements Merge_flag and merge_left_flag described before, the combined syntax element signaling the merging process. Furthermore, the aforementioned merge_left_flag may be transmitted within the data stream irrespective of whether the two candidate blocks have the same prediction parameters or not, thereby reducing the computational overhead for carrying out the process of fig. 10.
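The parsing flow of fig. 10 may be condensed into the following sketch, with the bit stream modeled as an iterator of already entropy-decoded flag values, the candidates ordered (left, top) as returned by the earlier sketch for fig. 9a, and all names hypothetical:

```python
def parse_merge_info(bits, candidates, params_of):
    """Return the merging partner for the current block, or None."""
    if not candidates:                  # step 452: no candidates, no flags
        return None
    if next(bits) == "0":               # steps 454/458: Merge_flag
        return None
    if len(candidates) == 1:            # steps 460/462: single candidate
        return candidates[0]
    if params_of(candidates[0]) == params_of(candidates[1]):
        return candidates[0]            # steps 466/468: identical params
    left, top = candidates              # steps 470/472: merge_left_flag
    return left if next(bits) == "1" else top

params = {"B": (3, 1), "A": (12, -7)}.get
print(parse_merge_info(iter("11"), ["B", "A"], params))  # -> 'B'
print(parse_merge_info(iter("0"), ["B", "A"], params))   # -> None (no merge)
```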
As already indicated with respect to, for example, fig. 9b, more than two blocks may be included in the set of candidate blocks. Furthermore, the merging information, i.e. the information signaling whether a block is merged and, if so, with which candidate block it is merged, may be signaled by one or more syntax elements. One syntax element could specify whether the block is merged with any of the candidate blocks, such as the Merge_flag described above. The flag may only be transmitted if the set of candidate blocks is not empty. A second syntax element may signal which of the candidate blocks is employed for merging, such as the aforementioned merge_left_flag, but generally indicating a selection among two or more than two candidate blocks. The second syntax element may be transmitted only if the first syntax element signals that the current block is to be merged with one of the candidate blocks. The second syntax element may further only be transmitted if the set of candidate blocks contains more than one candidate block and/or if any of the candidate blocks has different prediction parameters than any other of the candidate blocks. The syntax may depend on how many candidate blocks are given and/or on how different the prediction parameters associated with the candidate blocks are. The syntax for signaling which of the blocks among the candidate blocks is to be used may be set simultaneously and/or in parallel at the encoder and decoder sides. For example, if there are three choices of candidate blocks identified in step 450, the syntax is chosen such that merely these three choices are available and considered for entropy coding, for example, in step 470. In other words, the syntax element is chosen such that its symbol alphabet has merely as many elements as there are choices of candidate blocks. The probabilities for all other choices may be considered to be zero, and the entropy coding/decoding may be adjusted simultaneously at encoder and decoder. Furthermore, as already mentioned with respect to step 464, the prediction parameters inferred as a consequence of the merging process may represent the complete set of prediction parameters associated with the current block, or they may represent a subset of these prediction parameters, such as the prediction parameters for one hypothesis of a block for which multi-hypothesis prediction is used. As noted above, the syntax elements relating to the merging information could be entropy coded using context modeling. The syntax elements could consist of the Merge_flag and the merge_left_flag described above (or similar syntax elements). In a concrete example, one out of three context models or contexts could be used for coding/decoding the Merge_flag in step 454, for example. The context model index used, merge_flag_ctx, may be derived as follows: if the set of candidate blocks contains two elements, the value of merge_flag_ctx equals the sum of the Merge_flag values of the two candidate blocks. If the set of candidate blocks contains one element, however, the value of merge_flag_ctx may be equal to two times the Merge_flag value of this one candidate block. As each Merge_flag of the neighboring candidate blocks may be either one or zero, three contexts are available for the Merge_flag. The merge_left_flag could be coded using merely a single probability model. However, in accordance with an alternative embodiment, different context models could be used. For example, non-binary syntax elements could be mapped onto a sequence of binary symbols, so-called bins.
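The derivation rule for merge_flag_ctx stated above may be sketched as follows:

```python
def merge_flag_ctx(candidate_merge_flags):
    """Context index for the Merge_flag: with two candidates, the sum of
    their Merge_flag values; with one candidate, twice its Merge_flag
    value. Either way, only the indices 0, 1 and 2 can occur."""
    if len(candidate_merge_flags) == 2:
        return candidate_merge_flags[0] + candidate_merge_flags[1]
    return 2 * candidate_merge_flags[0]

print(merge_flag_ctx([1, 0]))   # two candidates -> context 1
print(merge_flag_ctx([1]))      # one candidate  -> context 2
```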
The context models for some syntax elements or bins of syntax elements defining the merging information could be derived based on already transmitted syntax elements of neighboring blocks or on the number of candidate blocks or on other measures, while other syntax elements or bins of syntax elements could be coded with a fixed context model. Regarding the above description of the merging of blocks, it is noted that the set of candidate blocks could also be derived in the same way as for any of the embodiments described above, with the following amendment: candidate blocks are restricted to blocks using motion-compensated prediction, or inter-prediction, respectively. Only those can be elements of the set of candidate blocks. The signaling and context modeling of the merging information could be done as described above. Returning to the combination of the multi-tree subdivision embodiments described above with the merging aspect described now: if the picture is divided into square blocks of variable size by use of a quadtree-based subdivision structure, for example, the Merge_flag and merge_left_flag, or other syntax elements specifying the merging, could be interleaved with the prediction parameters that are transmitted for each leaf node of the quadtree structure. Consider again, for example, fig. 9a, which shows an example of a quadtree-based subdivision of a picture into prediction blocks of variable size. The two blocks of the largest size are so-called tree blocks, i.e. they are prediction blocks of the maximum possible size. The other blocks in this figure are obtained as a subdivision of their corresponding tree block. The current block is marked with an "X". All the shaded blocks are coded/decoded before the current block, so they form the set of causal blocks. As explained in the description of the derivation of the set of candidate blocks for one of the embodiments, only blocks containing direct (i.e. top or left) neighboring samples of the upper-left sample position of the current block may be members of the set of candidate blocks. Thus, the current block can be merged with either block "A" or block "B". If the Merge_flag equals 0 (for "false"), the current block "X" is not merged with either of the two blocks. If blocks "A" and "B" have identical prediction parameters, no distinction needs to be made, since merging with either of the two blocks would lead to the same result. So, in this case, the merge_left_flag is not transmitted. Otherwise, if blocks "A" and "B" have different prediction parameters, merge_left_flag equal to 1 (for "true") would merge blocks "X" and "B", whereas merge_left_flag equal to 0 (for "false") would merge blocks "X" and "A". In another preferred embodiment, additional neighboring (already transmitted) blocks represent candidates for the merging. In fig. 9b, another example is shown. Here, the current block "X" and its left neighboring block "B" are tree blocks, i.e. they have the maximum allowed block size. The size of the top neighboring block "A" is one quarter of the tree block size. The blocks being elements of the set of causal blocks are shaded. Note that, in accordance with one of the preferred embodiments, the current block "X" could only be merged with the two blocks "A" or "B", not with any of the other top neighboring blocks. In another preferred embodiment, further neighboring (already transmitted) blocks represent candidates for the merging.
Before proceeding with the description of the aspect of how to handle the different sample arrays of an image according to embodiments of the present application, it should be noted that, from the above discussion of the multitree subdivision and its signaling on the one hand and of the merging aspect on the other hand, it is clear that these aspects provide advantages that can be exploited independently of each other. That is, as explained above, the combination of a multitree subdivision with merging has specific advantages, but advantages also result from alternatives in which, for example, the merging feature is incorporated while the subdivision performed by the subdividers 30 and 104 is not based on a quadtree or multitree subdivision but corresponds to a macroblock subdivision with a regular partitioning of these macroblocks into smaller parts. On the other hand, in turn, the combination of the multitree subdivision together with the transmission of the maximum tree block size within the bit stream, and the use of the multitree subdivision together with the use of the depth-first traversal order for carrying the coding parameters of the corresponding blocks, is advantageous irrespective of whether the merging feature is used concomitantly or not. In general, the advantages of merging can be understood when considering that, intuitively, the coding efficiency can be increased when the syntax for coding sample arrays is extended so as to not only allow subdividing a block, but also merging two or more of the blocks obtained after subdivision. As a result, one obtains a group of blocks that are coded with the same prediction parameters. The prediction parameters for such a group of blocks only have to be coded once. Furthermore, with respect to the merging of sample sets, it should again be noted that the considered sample sets may be rectangular or square blocks, in which case the merged sample sets represent a collection of rectangular and/or square blocks. Alternatively, however, the considered sample sets are arbitrarily shaped image regions, and the merged sample sets represent a collection of arbitrarily shaped image regions. The following description focuses on the handling of different sample arrays of an image in case there is more than one sample array per image, and some aspects described in the following sub-description are advantageous regardless of the kind of subdivision used, that is, regardless of whether the subdivision is based on a multitree subdivision or not, and regardless of whether merging is used or not. Before starting with the description of specific embodiments relating to the handling of different sample arrays of an image, the main aspect of these embodiments is motivated by a brief introduction to the field of handling different sample arrays per image. The following discussion focuses on coding parameters between blocks of different sample arrays of a picture in an image or video coding application and, in particular, on ways of adaptively predicting coding parameters between different sample arrays of a picture in, for example, but not exclusively, the encoder and decoder of figs. 1 and 2, respectively, or in another image or video coding environment. The sample arrays can, as noted above, represent sample arrays that are related to different color components or sample arrays that associate an image with additional information, such as transparency data or depth maps. Sample arrays that are related to the color components of an image are also referred to as color planes.
The technique described below is also referred to as inter-plane adoption/prediction and can be used in block-based image and video encoders and decoders, wherein the processing order of the blocks of the sample arrays of an image can be arbitrary. Image and video encoders are usually designed for encoding color images (either still images or images of a video sequence). A color image consists of multiple color planes, which represent the sample arrays for different color components. Often, color images are coded as a set of sample arrays consisting of a luminance plane and two chrominance planes, where the latter specify color difference components. In some application domains, it is also common for the set of coded sample arrays to consist of three color planes that represent the sample arrays for the three primary colors red, green, and blue. Furthermore, for an improved color representation, a color image may consist of more than three color planes. In addition, an image can be associated with auxiliary sample arrays that specify additional information for the image. For example, such auxiliary sample arrays can be sample arrays that specify the transparency (suitable for specific display purposes) for the associated color sample arrays, or sample arrays that specify a depth map (suitable for rendering multiple views, e.g., for 3-D displays). In conventional image and video coding standards (such as H.264), the color planes are usually coded together, whereby certain coding parameters, such as macroblock and sub-macroblock prediction modes, reference indices, and motion vectors, are used for all color components of a block. The luminance plane can be considered as the primary color plane, for which the specific coding parameters are specified in the bit stream, and the chrominance planes can be regarded as secondary planes, for which the corresponding coding parameters are inferred from the primary luminance plane. Each luminance block is associated with two chrominance blocks representing the same area in an image. Depending on the chrominance sampling format used, the chrominance sample arrays can be smaller than the luminance sample array for a block. For each macroblock, which consists of a luminance component and two chrominance components, the same partitioning into smaller blocks is used (if the macroblock is subdivided). For each block consisting of a block of luminance samples and two blocks of chrominance samples (which can be the macroblock itself or a sub-block of the macroblock), the same set of prediction parameters, such as reference indices, motion parameters, and intra-prediction modes, is employed. In specific profiles of conventional video coding standards (such as the 4:4:4 profiles in H.264), it is also possible to code the different color planes of an image independently. In that configuration, the macroblock partitioning, the prediction modes, the reference indices, and the motion parameters can be chosen separately for each color component of a macroblock or sub-block. In conventional coding standards, either all color planes are coded together using the same set of specific coding parameters (such as subdivision information and prediction parameters), or all color planes are coded completely independently of each other. If the color planes are coded together, one set of subdivision and prediction parameters must be used for all color components of a block.
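As a small illustration of the size relationship just mentioned, the chrominance block dimensions for the common sampling formats can be derived from the luminance block dimensions as follows; the table layout is merely illustrative.

```python
# horizontal and vertical subsampling divisors per chroma sampling format
SUBSAMPLING = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}

def chroma_block_size(luma_w, luma_h, fmt="4:2:0"):
    """Size of each chrominance block for a given luminance block."""
    sx, sy = SUBSAMPLING[fmt]
    return luma_w // sx, luma_h // sy

# A 16x16 luminance macroblock has 8x8 chrominance blocks in 4:2:0.
assert chroma_block_size(16, 16, "4:2:0") == (8, 8)
```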
This ensures that the side information is kept small, but it can result in a reduction of the coding efficiency compared to independent coding, since the use of different block decompositions and different prediction parameters for the color components can result in a lower rate-distortion cost. As an example, the use of a different motion vector or a different reference image for the chrominance components can significantly reduce the residual signal energy for the chrominance components and increase their overall coding efficiency. If the color planes are coded independently, the coding parameters, such as the block partitioning, the reference indices, and the motion parameters, can be selected for each color component separately in order to optimize the coding efficiency for each color component. But it is then not possible to exploit the redundancy between the color components. The multiple transmissions of certain coding parameters result in an increased side information rate (compared to combined coding), and this increased side information rate can negatively impact the overall coding efficiency. Furthermore, the support for auxiliary sample arrays in prior-art video coding standards (such as H.264) is restricted to the case in which the auxiliary sample arrays are coded using their own set of coding parameters. Thus, in all the embodiments described so far, the planes of an image can be handled as just described; but, as also discussed above, the overall coding efficiency for the coding of multiple sample arrays (which may be related to different color planes and/or auxiliary sample arrays) could be increased if it were possible to decide on a block basis, for example, whether all sample arrays of a block are coded with the same coding parameters or whether different coding parameters are used. The basic idea of the following inter-plane prediction is to allow such an adaptive decision on a block basis, for example. The encoder can choose, for example, based on a rate-distortion criterion, whether all or some of the sample arrays of a given block are coded using the same coding parameters or whether different coding parameters are used for different sample arrays. This selection can also be achieved by signaling, for a given block of a sample array, whether specific coding parameters are inferred from an already coded co-located block of a different sample array. It is also possible to arrange the different sample arrays of an image in groups, which are also referred to as sample array groups or plane groups. Each plane group can contain one or more sample arrays of an image. Then, the blocks of the sample arrays within a plane group share the same selected coding parameters, such as subdivision information, prediction modes, and residual coding modes, while other coding parameters, such as transform coefficient levels, are transmitted separately for each of the sample arrays within the plane group. One plane group is coded as the primary plane group, that is, none of its coding parameters is inferred or predicted from other plane groups. For each block of a secondary plane group, it can be adaptively chosen whether a new set of selected coding parameters is transmitted or whether the selected coding parameters are inferred or predicted from the primary plane group or another secondary plane group. The decisions on whether certain coding parameters for a given block are inferred or predicted are included in the bit stream.
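A hedged sketch of such a per-block decision as seen by a decoder follows; the reader callbacks and the dict-based parameter sets are illustrative stand-ins, not interfaces from the patent.

```python
def decode_secondary_block_params(read_flag, read_params, colocated_params):
    """Obtain the coding parameters of one block of a secondary plane group."""
    if read_flag("inter_plane_prediction_flag"):
        # inferred/predicted from the co-located block of the reference
        # plane group (plain adoption shown here; prediction plus a
        # transmitted refinement is the variant described further below)
        return dict(colocated_params)
    # otherwise, a new set of coding parameters is transmitted for the block
    return read_params()
```

The same dispatch applies whether the inferred parameters are adopted directly or merely serve as a prediction that is refined by transmitted differences, as detailed in the following.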
Inter-plane prediction allows a greater freedom in choosing the trade-off between side information rate and prediction quality compared with the prior-art coding of images consisting of multiple sample arrays. The advantage is an improved coding efficiency compared to the conventional coding of images consisting of multiple sample arrays. Inter-plane adoption/prediction can extend an image or video encoder, such as those of the previous embodiments, in such a way that it can be adaptively chosen, for a block of a color sample array or an auxiliary sample array, or of a set of color sample arrays and/or auxiliary sample arrays, whether a selected set of coding parameters is inferred or predicted from already coded co-located blocks of other sample arrays of the same picture, or whether the selected set of coding parameters for the block is coded independently without referring to co-located blocks of other sample arrays of the same picture. The decisions on whether the selected set of coding parameters is inferred or predicted for a block of a sample array or a block of multiple sample arrays may be included in the bit stream. The different sample arrays associated with a picture need not have the same size. As described above, the sample arrays that are associated with a picture (the sample arrays can represent color components and/or auxiliary sample arrays) can be arranged in two or more so-called plane groups, where each plane group consists of one or more sample arrays. The sample arrays contained in a particular plane group do not need to have the same size. Note that this arrangement into plane groups includes the case in which each sample array is coded separately. To be more precise, according to one embodiment, it is adaptively chosen, for each block of a plane group, whether the coding parameters specifying how a block is predicted are inferred or predicted from an already coded co-located block of a different plane group of the same picture or whether these coding parameters are coded separately for the block. The coding parameters that specify how a block is predicted include one or more of the following coding parameters: block prediction modes specifying which prediction is used for the block (intra prediction; inter prediction using a single motion vector and reference picture; inter prediction using two motion vectors and reference pictures; inter prediction using a higher-order, i.e., non-translational, motion model and a single reference picture; inter prediction using multiple non-translational motion models and reference pictures), intra-prediction modes specifying how an intra-prediction signal is generated, an identifier specifying how several prediction signals are combined to generate the final prediction signal for the block, reference indices specifying which reference picture(s) is/are employed for motion-compensated prediction, motion parameters (such as displacement vectors or affine motion parameters) specifying how the prediction signal(s) is/are generated using the reference picture(s), and an identifier specifying how the reference picture(s) is/are filtered for generating the motion-compensated prediction signals. Note that, in general, a block can be associated with only a subset of the mentioned coding parameters.
For example, if the block prediction mode specifies that a block is intra predicted, the coding parameters for the block may additionally include intra-prediction modes, but coding parameters such as reference indices and motion parameters, which specify how an inter-prediction signal is generated, are not specified; or, if the block prediction mode specifies inter prediction, the associated coding parameters may additionally include reference indices and motion parameters, but intra-prediction modes are not specified. One of the two or more plane groups can be coded as, or indicated within the bit stream to be, the primary plane group. For all blocks of this primary plane group, the coding parameters specifying how the prediction signal is generated are transmitted without referring to other plane groups of the same picture. The remaining plane groups are coded as secondary plane groups. For each block of the secondary plane groups, one or more syntax elements are transmitted that signal whether the coding parameters specifying how the block is predicted are inferred or predicted from a co-located block of other plane groups or whether a new set of these coding parameters is transmitted for the block. One of the one or more syntax elements may be referred to as an inter-plane prediction flag or an inter-plane prediction parameter. If the syntax elements signal that the corresponding coding parameters are not inferred or predicted, a new set of the corresponding coding parameters for the block is transmitted in the bit stream. If the syntax elements signal that the corresponding coding parameters are inferred or predicted, the co-located block in a so-called reference plane group is determined. The assignment of the reference plane group for the block can be configured in several ways. In one embodiment, a particular reference plane group is assigned to each secondary plane group; this assignment can be fixed or it can be signaled in high-level syntax structures, such as parameter sets, an access unit header, a picture header, or a slice header. In a second embodiment, the assignment of the reference plane group is coded within the bit stream and signaled by the one or more syntax elements that are coded for a block in order to specify whether the selected coding parameters are inferred or predicted or coded separately. In order to ease the just-mentioned possibilities in connection with inter-plane prediction and the following detailed embodiments, reference is made to fig. 11, which shows an illustrative picture 500 composed of three sample arrays 502, 504, and 506. For easier understanding, only sub-portions of the sample arrays 502-506 are shown in fig. 11. The sample arrays are shown as if they were spatially registered against each other, so that the sample arrays 502-506 overlap each other along a direction 508 and so that a projection of the samples of the sample arrays 502-506 along the direction 508 results in the samples of all these sample arrays 502-506 being correctly spatially located relative to one another. In other words, the planes 502 and 506 have been spread out along the horizontal and vertical directions in order to adapt their spatial resolution to each other and to register them to each other. According to one embodiment, all sample arrays of a picture belong to the same portion of a spatial scene, wherein the spatial resolution along the vertical and horizontal directions may differ between the individual sample arrays 502-506.
Furthermore, for purposes of illustration, the sample arrays 502 and 504 are considered to belong to one plane group 510, while the sample array 506 is considered to belong to another plane group 512. In addition, fig. 11 illustrates the exemplary case where the spatial resolution along the horizontal axis of sample array 504 is twice the resolution in the horizontal direction of sample array 502. Moreover, sample array 504 is considered to form the primary array relative to sample array 502, which forms a subordinate array relative to the primary array 504. As explained earlier, in this case the subdivision of sample array 504 into blocks, as decided by the subdivider 30 of fig. 1, is adopted by the subordinate array 502, wherein, according to the example of fig. 11, due to the vertical resolution of sample array 502 being half the resolution in the vertical direction of the primary array 504, each block has been halved into two horizontally juxtaposed blocks which, owing to the halving, are square blocks again when measured in units of the sample positions within sample array 502. As exemplarily shown in fig. 11, the subdivision chosen for sample array 506 is different from the subdivision of the other plane group 510. As described above, the subdivider 30 may select the subdivision of sample array 506 separately or independently from the subdivision for plane group 510. Of course, the resolution of sample array 506 may also differ from the resolutions of planes 502 and 504 of plane group 510. Now, when encoding the individual sample arrays 502-506, the encoder 10 may begin with encoding the primary array 504 of plane group 510, for example, in the manner described above. The blocks shown in fig. 11 may, for example, be the aforementioned prediction blocks. Alternatively, the blocks are residual blocks or other blocks defining the granularity for defining certain coding parameters. Inter-plane prediction is not restricted to quadtree or multitree subdivision, although this is illustrated in fig. 11. After transmitting the syntax elements for the primary array 504, the encoder 10 may decide to declare the primary array 504 to be the reference plane for the subordinate plane 502. The encoder 10, or the inserter 18, respectively, may signal this decision within the bit stream 22, whereas, alternatively, the association may be evident from the fact that sample array 504 forms the primary array of plane group 510, which information, in turn, may also be part of the bit stream 22. In any case, for each block within sample array 502, the inserter 18, or any other module of the encoder 10 together with the inserter 18, may decide to suppress a transfer of the coding parameters of this block within the bit stream and to signal within the bit stream, for that block, instead, that the coding parameters of a co-located block within the primary array 504 shall be used, or that the coding parameters of the co-located block within the primary array 504 shall be used as a prediction for the coding parameters of the current block of sample array 502, with merely the residual data thereof for the current block of sample array 502 being transferred within the bit stream. In case of a negative decision, the coding parameters are transferred within the data stream as usual. The decision is signaled within the data stream 22 for each block.
On the decoder side, the extractor 102 uses this inter-plane prediction information for each block in order to obtain the coding parameters of the respective block of sample array 502 accordingly, namely by inferring the coding parameters of the co-located block of the primary array 504, or, alternatively, extracting residual data for that block from the data stream and combining this residual data with a prediction obtained from the coding parameters of the co-located block of the primary array 504, if the inter-plane adoption/prediction information suggests inter-plane adoption/prediction, or, as usual, extracting the coding parameters of the current block of sample array 502 independently of the primary array 504. As also previously described, the reference planes are not restricted to residing within the same plane group as the block for which inter-plane prediction is currently of interest. Thus, as described above, plane group 510 may represent the primary plane group or the reference plane group for the secondary plane group 512. In this case, the bit stream may contain a syntax element indicating, for each block of sample array 506, whether the adoption/prediction of the above-mentioned coding parameters from co-located blocks of either of planes 502 and 504 of the primary or reference plane group 510 shall be performed or not, whereas, in the latter case, the coding parameters of the current block of sample array 506 are transmitted as usual. It should be noted that the subdivision and/or the prediction parameters for the planes within a plane group may be the same, i.e., they may be coded only once per plane group (all the secondary planes of a plane group infer the subdivision information and/or the prediction parameters from the primary plane within the same plane group), and the adaptive prediction or inference of the subdivision information and/or the prediction parameters is done between plane groups. It should be noted that the reference plane group may be a primary plane group or a secondary plane group. The co-location between blocks of different planes within a plane group is readily understandable in that the subdivision of the primary sample array 504 is spatially adopted by the subordinate sample array 502, except for the just-described sub-partitioning of the blocks performed in order to render the adopted leaf blocks into square blocks. In case of inter-plane adoption/prediction between different plane groups, the co-location may be defined in a way that allows a greater freedom between the subdivisions of these plane groups. Given the reference plane group, the co-located block within the reference plane group is determined. The derivation of the co-located block within the reference plane group can be done by a process similar to the following. A particular sample 514 within the current block 516 of the sample array 506 of the secondary plane group 512 is selected. The same may be the top-left sample of the current block 516, as shown at 514 in fig. 11 for illustrative purposes, or a sample within the current block 516 next to the middle of the current block 516, or any other sample within the current block which is geometrically uniquely defined. The locations of this selected sample within the sample arrays 502 and 504 of the reference plane group 510 are calculated. The positions of sample 514 within the sample arrays 502 and 504 are indicated in fig. 11 at 518 and 520, respectively.
Which of the planes 502 and 504 within the reference plane group 510 is actually used may be predetermined or may be signaled within the bit stream. The sample within the corresponding sample array 502 or 504 of the reference plane group 510 being closest to position 518 or 520, respectively, is determined, and the block containing this sample is chosen as the co-located block within the respective sample array 502 and 504, respectively. In the case of fig. 11, these are blocks 522 and 524, respectively. An alternative approach for determining the co-located block in other planes is described later. In one embodiment, the coding parameters specifying the prediction for the current block 516 are completely inferred using the corresponding prediction parameters of the co-located block 522/524 of a different plane group 510 of the same picture 500, without transmitting additional side information. The inference may consist of a simple copy of the respective coding parameters or an adaptation of the coding parameters taking into account the differences between the current plane group 512 and the reference plane group 510. As an example, this adaptation may consist of adding a motion parameter correction (e.g., a displacement vector correction) to account for the phase difference between the luminance and chrominance sample arrays, or the adaptation may consist of modifying the precision of the motion parameters (e.g., modifying the precision of the displacement vectors) to account for the different resolutions of the luminance and chrominance sample arrays. In another embodiment, one or more of the inferred coding parameters specifying the generation of the prediction signal are not directly used for the current block 516, but are used as a prediction for the respective coding parameters of the current block 516, and a refinement of these coding parameters for the current block 516 is transmitted in the bit stream 22. As an example, the inferred motion parameters are not directly used, but motion parameter differences (such as a displacement vector difference), which specify the deviation between the motion parameters used for the current block 516 and the inferred motion parameters, are coded in the bit stream; on the decoder side, the actually used motion parameters are obtained by combining the inferred motion parameters and the transmitted motion parameter differences. In another embodiment, the subdivision of a block, such as the aforementioned subdivision of the prediction tree blocks into prediction blocks (i.e., blocks of samples for which the same set of prediction parameters is used), is adaptively inferred or predicted from an already coded co-located block of a different plane group of the same picture, i.e., the bit sequence according to fig. 6a or 6b. In one embodiment, one of the two or more plane groups is coded as the primary plane group. For all blocks of this primary plane group, the subdivision information is transmitted without referring to other plane groups of the same picture. The remaining plane groups are coded as secondary plane groups. For blocks of the secondary plane groups, one or more syntax elements are transmitted that signal whether the subdivision information is inferred or predicted from a co-located block of other plane groups or whether the subdivision information is transmitted in the bit stream. One of the one or more syntax elements may be referred to as an inter-plane prediction flag or an inter-plane prediction parameter.
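The following is a minimal sketch of the co-located block derivation and of the two motion parameter adaptations mentioned above, under the assumptions that sample positions map between arrays by plain resolution scaling, that blocks are given as (x, y, w, h) tuples in the sample units of their array, and that the phase offset is an explicit parameter; none of these names stem from the patent.

```python
import math

def colocated_block(sample_pos, cur_res, ref_res, ref_blocks):
    """Map a selected sample (e.g., sample 514) into the reference array and
    return the block containing the sample closest to the mapped position
    (e.g., position 518 or 520)."""
    rx = sample_pos[0] * ref_res[0] / cur_res[0]
    ry = sample_pos[1] * ref_res[1] / cur_res[1]

    def distance(block):
        x, y, w, h = block
        nx = min(max(round(rx), x), x + w - 1)   # sample of the block that
        ny = min(max(round(ry), y), y + h - 1)   # is nearest to (rx, ry)
        return math.hypot(nx - rx, ny - ry)

    return min(ref_blocks, key=distance)

def adapt_motion_vector(mv, cur_res, ref_res, phase=(0.0, 0.0)):
    """Scale an inferred displacement vector to the resolution of the
    current array and add a displacement correction for the phase
    difference, as one possible adaptation."""
    return (mv[0] * cur_res[0] / ref_res[0] + phase[0],
            mv[1] * cur_res[1] / ref_res[1] + phase[1])
```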
If the syntax elements signal that the subdivision information is not inferred or predicted, the subdivision information for the block is transmitted in the bit stream without referring to other plane groups of the same picture. If the syntax elements signal that the subdivision information is inferred or predicted, the co-located block in a so-called reference plane group is determined. The assignment of the reference plane group for the block can be configured in several ways. In one embodiment, a particular reference plane group is assigned to each secondary plane group; this assignment can be fixed or it can be signaled in high-level syntax structures such as parameter sets, an access unit header, a picture header, or a slice header. In a second embodiment, the assignment of the reference plane group is coded within the bit stream and signaled by the one or more syntax elements that are coded for a block in order to specify whether the subdivision information is inferred or predicted or coded separately. The reference plane group can be the primary plane group or another secondary plane group. Given the reference plane group, the co-located block within the reference plane group is determined. The co-located block is the block in the reference plane group that corresponds to the same image area as the current block, or the block that represents the block within the reference plane group that shares the largest portion of the image area with the current block. The co-located block can be partitioned into smaller prediction blocks. In another embodiment, the subdivision information for the current block, such as the quadtree-based subdivision information according to figs. 6a or 6b, is completely inferred based on the subdivision information of the co-located block in a different plane group of the same picture, without transmitting additional side information. As a particular example, if the co-located block is partitioned into two or four prediction blocks, the current block is also partitioned into two or four sub-blocks for the purpose of prediction. As another particular example, if the co-located block is partitioned into four sub-blocks and one of these sub-blocks is further partitioned into four smaller sub-blocks, the current block is also partitioned into four sub-blocks and one of these sub-blocks (the one corresponding to the sub-block of the co-located block that is further decomposed) is also partitioned into four smaller sub-blocks. In a further preferred embodiment, the inferred subdivision information is not directly used for the current block, but is used as a prediction for the actual subdivision information for the current block, and the corresponding refinement information is transmitted in the bit stream. As an example, the subdivision information inferred from the co-located block can be further refined. For each sub-block that corresponds to a sub-block of the co-located block that is not partitioned into smaller blocks, a syntax element can be coded in the bit stream, which specifies whether the sub-block is further decomposed in the current plane group. The transmission of such a syntax element can be conditioned on the size of the sub-block. Or it can be signaled in the bit stream that a sub-block that is further partitioned in the reference plane group is not partitioned into smaller blocks in the current plane group.
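The inference and refinement of the subdivision information may be sketched as follows, assuming a quadtree modelled as nested lists (a leaf is None, an internal node is a list of four children) and assuming that read_refine_flag() stands for decoding the per-sub-block refinement syntax element mentioned above.

```python
def infer_subdivision(colocated, read_refine_flag, size, min_size):
    """Infer the quadtree of the current block from the co-located block
    and optionally refine it with further splits."""
    if colocated is not None:
        # the co-located block is split, so the current block is split too
        half = size // 2
        return [infer_subdivision(child, read_refine_flag, half, min_size)
                for child in colocated]
    if size // 2 >= min_size and read_refine_flag():
        # refinement: a sub-block not split in the reference plane group may
        # still be decomposed in the current plane group; the refinement
        # flag is only transmitted if the sub-block size permits a split
        half = size // 2
        return [infer_subdivision(None, read_refine_flag, half, min_size)
                for _ in range(4)]
    return None
```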
In another embodiment, both the subdivision of a block into prediction blocks and the coding parameters specifying how the sub-blocks are predicted are adaptively inferred or predicted from an already coded co-located block of a different plane group of the same picture. In a preferred embodiment of the invention, one of the two or more plane groups is coded as the primary plane group. For all blocks of this primary plane group, the subdivision information and the prediction parameters are transmitted without referring to other plane groups of the same picture. The remaining plane groups are coded as secondary plane groups. For blocks of the secondary plane groups, one or more syntax elements are transmitted that signal whether the subdivision information and the prediction parameters are inferred or predicted from a co-located block of other plane groups or whether the subdivision information and the prediction parameters are transmitted in the bit stream. One of the one or more syntax elements may be referred to as an inter-plane prediction flag or an inter-plane prediction parameter. If the syntax elements signal that the subdivision information and the prediction parameters are not inferred or predicted, the subdivision information for the block and the prediction parameters for the resulting sub-blocks are transmitted in the bit stream without referring to other plane groups of the same picture. If the syntax elements signal that the subdivision information and the prediction parameters for the sub-blocks are inferred or predicted, the co-located block in a so-called reference plane group is determined. The assignment of the reference plane group for the block can be configured in several ways. In one embodiment, a particular reference plane group is assigned to each secondary plane group; this assignment can be fixed or it can be signaled in high-level syntax structures, such as parameter sets, an access unit header, a picture header, or a slice header. In a second embodiment, the assignment of the reference plane group is coded within the bit stream and signaled by the one or more syntax elements that are coded for a block in order to specify whether the subdivision information and the prediction parameters are inferred or predicted or coded separately. The reference plane group can be the primary plane group or another secondary plane group. Given the reference plane group, the co-located block within the reference plane group is determined. The co-located block may be the block in the reference plane group that corresponds to the same image area as the current block, or the block that represents the block within the reference plane group that shares the largest portion of the image area with the current block. The co-located block can be partitioned into smaller prediction blocks. In a preferred embodiment, the subdivision information for the current block as well as the prediction parameters for the resulting sub-blocks are completely inferred based on the subdivision information of the co-located block in a different plane group of the same picture and the prediction parameters of the corresponding sub-blocks, without transmitting additional side information. As a particular example, if the co-located block is partitioned into two or four prediction blocks, the current block is also partitioned into two or four sub-blocks for the purpose of prediction, and the prediction parameters for the sub-blocks of the current block are derived as described above.
As another particular example, if the co-located block is partitioned into four sub-blocks and one of these sub-blocks is further partitioned into four smaller sub-blocks, the current block is also partitioned into four sub-blocks and one of these sub-blocks (the one corresponding to the sub-block of the co-located block that is further decomposed) is also partitioned into four smaller sub-blocks, and the prediction parameters for all non-partitioned sub-blocks are inferred as described above. In a further preferred embodiment, the subdivision information is completely inferred based on the subdivision information of the co-located block in the reference plane group, but the inferred prediction parameters for the sub-blocks are used only as a prediction for the actual prediction parameters of the sub-blocks. The deviations between the actual prediction parameters and the inferred prediction parameters are coded in the bit stream. In another embodiment, the inferred subdivision information is used as a prediction for the actual subdivision information of the current block and the difference is transmitted in the bit stream (as described above), but the prediction parameters are completely inferred. In another embodiment, both the inferred subdivision information and the inferred prediction parameters are used as predictions, and the differences between the actual subdivision information and prediction parameters and their inferred values are transmitted in the bit stream. In another embodiment, it is adaptively chosen, for a block of a plane group, whether the residual coding modes (such as the transform type) are inferred or predicted from an already coded co-located block of a different plane group of the same picture or whether the residual coding modes are coded separately for the block. This embodiment is similar to the embodiment for the adaptive inference/prediction of the prediction parameters described above. In another embodiment, the subdivision of a block (e.g., a prediction block) into transform blocks (i.e., blocks of samples to which a two-dimensional transform is applied) is adaptively inferred or predicted from an already coded co-located block of a different plane group of the same picture. This embodiment is similar to the embodiment for the adaptive inference/prediction of the subdivision into prediction blocks described above. In another embodiment, the subdivision of a block into transform blocks and the residual coding modes (e.g., transform types) for the resulting transform blocks are adaptively inferred or predicted from an already coded co-located block of a different plane group of the same picture. This embodiment is similar to the embodiment for the adaptive inference/prediction of the subdivision into prediction blocks and of the prediction parameters for the resulting prediction blocks described above. In another embodiment, the subdivision of a block into prediction blocks, the associated prediction parameters, the subdivision information of the prediction blocks, and the residual coding modes for the transform blocks are adaptively inferred or predicted from an already coded co-located block of a different plane group of the same picture. This embodiment represents a combination of the embodiments described above. It is also possible that only some of the mentioned coding parameters are inferred or predicted. Thus, inter-plane adoption/prediction can increase the coding efficiency, as described above.
However, the coding efficiency gain by way of inter-plane adoption/prediction is also available in case block subdivisions other than multitree-based partitionings are used, and independently of whether block merging is implemented or not. The embodiments outlined above with respect to inter-plane adoption/prediction are applicable to image and video encoders and decoders that divide the color planes of a picture and, if present, the auxiliary sample arrays associated with a picture into blocks and associate these blocks with coding parameters. For each block, a set of coding parameters may be included in the bit stream. For example, these coding parameters can be parameters that describe how a block is predicted or decoded at the decoder side. As particular examples, the coding parameters can represent macroblock or block prediction modes, subdivision information, intra-prediction modes, reference indices used for motion-compensated prediction, motion parameters such as displacement vectors, residual coding modes, transform coefficients, etc. The different sample arrays associated with a picture can have different sizes. Next, a scheme for an enhanced signaling of coding parameters within a tree-based partitioning scheme, such as those described above with reference to figs. 1-8, is described. As with the other schemes, namely merging and inter-plane adoption/prediction, the effects and advantages of the enhanced signaling schemes, in the following often called inheritance, are described independently of the above embodiments, although the schemes described below can be combined with any of the above embodiments, either individually or in combination. Generally, the improved coding scheme for coding the side information within a tree-based partitioning scheme, referred to as inheritance and described below, enables the following advantages over conventional schemes of handling coding parameters. In conventional image and video coding, the pictures or particular sets of sample arrays for the pictures are usually decomposed into blocks, which are associated with particular coding parameters. The pictures usually consist of multiple sample arrays. In addition, a picture may also be associated with additional auxiliary sample arrays, which may, for example, specify transparency information or depth maps. The sample arrays of a picture (including auxiliary sample arrays) can be grouped into one or more so-called plane groups, where each plane group consists of one or more sample arrays. The plane groups of a picture can be coded independently or, if the picture is associated with more than one plane group, with prediction from other plane groups of the same picture. Each plane group is usually decomposed into blocks. The blocks (or the corresponding blocks of sample arrays) are predicted by either inter-picture prediction or intra-picture prediction. The blocks can have different sizes and can be either square or rectangular. The partitioning of a picture into blocks can either be fixed by the syntax, or it can be (at least partially) signaled within the bit stream. Often, syntax elements are transmitted that signal the subdivision for blocks of predefined sizes. Such syntax elements may specify whether and how a block is subdivided into smaller blocks and associated with coding parameters, e.g., for the purpose of prediction. For all samples of a block (or the corresponding blocks of sample arrays), the decoding of the associated coding parameters is specified in a certain way.
In the example, all samples of a block are predicted using the same set of prediction parameters, such as reference indices (identifying a reference picture in the set of already coded pictures), motion parameters (specifying a measure for the movement of a block between a reference picture and the current picture), parameters specifying the interpolation filter, intra-prediction modes, etc. The motion parameters can be represented by displacement vectors with a horizontal and a vertical component, or by means of higher-order motion parameters, such as affine motion parameters consisting of six components. It is also possible that more than one set of particular prediction parameters (such as reference indices and motion parameters) is associated with a single block. In that case, for each set of these particular prediction parameters, a single intermediate prediction signal for the block (or the corresponding blocks of sample arrays) is generated, and the final prediction signal is built by a combination including superimposing the intermediate prediction signals. The corresponding weighting parameters and possibly also a constant offset (which is added to the weighted sum) can be fixed for a picture, or a reference picture, or a set of reference pictures, or they can be included in the set of prediction parameters for the corresponding block. The difference between the original blocks (or the corresponding blocks of sample arrays) and their prediction signals, also referred to as the residual signal, is usually transformed and quantized. Often, a two-dimensional transform is applied to the residual signal (or the corresponding sample arrays of the residual block). For transform coding, the blocks (or the corresponding blocks of sample arrays) for which a particular set of prediction parameters has been used can be further split before applying the transform. The transform blocks can be equal to or smaller than the blocks that are used for prediction. It is also possible that a transform block includes more than one of the blocks that are used for prediction. Different transform blocks can have different sizes, and the transform blocks can represent square or rectangular blocks. After the transform, the resulting transform coefficients are quantized and so-called transform coefficient levels are obtained. The transform coefficient levels, as well as the prediction parameters and, if present, the subdivision information, are entropy coded. In some image and video coding standards, the possibilities for subdividing a picture (or a plane group) into blocks that are provided by the syntax are very limited. Usually, it can only be specified whether, and potentially how, a block of a predefined size can be subdivided into smaller blocks. As an example, the largest block size in H.264 is 16x16. The 16x16 blocks are also referred to as macroblocks, and each picture is partitioned into macroblocks in a first step. For each 16x16 macroblock, it can be signaled whether it is coded as one 16x16 block, or as two 16x8 blocks, or as two 8x16 blocks, or as four 8x8 blocks. If a 16x16 block is subdivided into four 8x8 blocks, each of these 8x8 blocks can be coded either as one 8x8 block, or as two 8x4 blocks, or as two 4x8 blocks, or as four 4x4 blocks.
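Purely for illustration, the H.264 partitioning possibilities just recalled can be enumerated as follows; for brevity, the sketch splits all four 8x8 sub-macroblocks in the same way, whereas the standard lets each of them be split independently.

```python
MB_PARTITIONS = [[(16, 16)], [(16, 8)] * 2, [(8, 16)] * 2, [(8, 8)] * 4]
SUB_MB_PARTITIONS = [[(8, 8)], [(8, 4)] * 2, [(4, 8)] * 2, [(4, 4)] * 4]

def macroblock_partitionings():
    """Yield each decomposition of a 16x16 macroblock as a list of sizes."""
    for parts in MB_PARTITIONS:
        if parts == [(8, 8)] * 4:
            # each of the four 8x8 blocks may be subdivided further
            for sub in SUB_MB_PARTITIONS:
                yield sub * 4
        else:
            yield parts

for partitioning in macroblock_partitionings():
    print(partitioning)
```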
The reduced set of possibilities for specifying the block partitioning in prior-art image and video coding standards has the advantage that the side information rate for signaling the subdivision information can be kept small, but it has the disadvantage that the bit rate required for transmitting the prediction parameters for the blocks can become significant, as explained below. The side information rate for signaling the prediction information does usually represent a significant amount of the overall bit rate of a block. And the coding efficiency can be increased when this side information is reduced, which, for instance, could be achieved by using larger block sizes. Real images or pictures of a video sequence consist of arbitrarily shaped objects with specific properties. As an example, such objects or parts of objects are characterized by a unique texture or a unique motion. And usually, the same set of prediction parameters can be applied to such an object or part of an object. But the object boundaries usually do not coincide with the possible block boundaries of large prediction blocks (e.g., 16x16 macroblocks in H.264). An encoder usually determines the subdivision (among the limited set of possibilities) that results in the minimum of a particular rate-distortion cost measure. For arbitrarily shaped objects, this can result in a large number of small blocks. And since each of these small blocks is associated with a set of prediction parameters that needs to be transmitted, the side information rate can become a significant part of the overall bit rate. But since several of the small blocks still represent areas of the same object or part of an object, the prediction parameters for a number of the obtained blocks are the same or very similar. Intuitively, the coding efficiency can be increased when the syntax is extended in a way that it not only allows subdividing a block, but also sharing the coding parameters between the blocks that are obtained after the subdivision. In a tree-based subdivision, sharing the coding parameters for a given set of blocks can be achieved by assigning the coding parameters, or parts thereof, to one or more of the parent nodes in the tree-based hierarchy. As a result, the shared parameters, or parts thereof, can be used to reduce the side information that is necessary for signaling the actual choice of coding parameters for the blocks obtained after the subdivision. The reduction can be achieved by omitting the signaling of parameters for subsequent blocks or by using the shared parameter(s) for prediction and/or context modeling of the parameters for subsequent blocks. The basic idea of the inheritance scheme described below is to reduce the bit rate required for transmitting the coding parameters by sharing information along the tree-based hierarchy of blocks. The shared information is signaled within the bit stream (in addition to the subdivision information). The advantage of the inheritance scheme is an increased coding efficiency resulting from a decreased side information rate for the coding parameters. In order to reduce the side information rate, according to the embodiments described below, the respective coding parameters for particular sets of samples, i.e., simply connected regions, which may represent rectangular or square blocks or arbitrarily shaped regions or any other collection of samples, resulting from a multitree subdivision, are signaled within the data stream in an efficient way.
The inheritance scheme described below allows that the coding parameters do not have to be explicitly included in the bit stream in full for each of these sample sets. The coding parameters may represent prediction parameters, which specify how the corresponding set of samples is predicted using already coded samples. Many possibilities and examples have been described above and also apply here. As also indicated above, and as will be described further below, as far as the following inheritance scheme is concerned, the tree-based partitioning of the sample arrays of a picture into sample sets may be fixed by the syntax or may be signaled by corresponding subdivision information within the bit stream. The coding parameters for the sample sets may, as described above, be transmitted in a predefined order, which is given by the syntax. According to the inheritance scheme, the decoder, or the extractor 102 of the decoder, is configured to derive the information on the coding parameters of the individual simply connected regions or sample sets in a specific way. In particular, coding parameters, or parts thereof, such as those parameters serving the purpose of prediction, are shared among the blocks along the given tree-based partitioning scheme, with the sharing groups along the tree structure being decided by the encoder or the inserter 18, respectively. In a particular embodiment, the sharing of the coding parameters for all descendant nodes of a given internal node of the partitioning tree is indicated by means of a specific binary-valued sharing flag. As an alternative approach, refinements of the coding parameters can be transmitted for each node, such that the accumulated refinements of the parameters along the tree-based hierarchy of blocks can be applied to all sample sets of the block at a given leaf node. In another embodiment, parts of the coding parameters that are transmitted for internal nodes along the tree-based hierarchy of blocks can be used for context-adaptive entropy encoding and decoding of the coding parameters, or parts thereof, for the block at a given leaf node. Figs. 12a and 12b illustrate the basic idea of inheritance for the specific case of using a quadtree-based partitioning. However, as indicated above, other multitree subdivision schemes may be used as well. The tree structure is shown in fig. 12a, while the corresponding spatial partitioning corresponding to the tree structure of fig. 12a is shown in fig. 12b. The partitioning shown therein is similar to the one shown with respect to figs. 3a to 3c. Generally speaking, the inheritance scheme will allow side information to be assigned to nodes located at different non-leaf layers within the tree structure. Depending on the assignment of side information to nodes in the different layers of the tree, such as the internal nodes in the tree of fig. 12a or the root node thereof, different degrees of sharing of the side information can be achieved within the hierarchy of tree blocks shown in fig. 12b. For example, if it is decided that all the leaf nodes of layer 4, which in the case of fig. 12a all have the same parent node, shall share the side information, this virtually means that the smallest blocks, indicated in fig. 12b as 156a to 156d, share this side information, and it is no longer necessary to transmit the side information for all these small blocks 156a to 156d in full, i.e., four times, although this is kept as an option for the encoder. However, it would also be possible to decide that an entire region of one hierarchy level (layer 2) of fig.
12a, namely the quarter in the upper right corner of tree block 150, including the sub-blocks 154a, 154b, and 154d as well as the even smaller sub-blocks 156a to 156d just mentioned, serves as a region within which coding parameters are shared. Thus, the area sharing side information is increased. The next level of increase would be the sum of all sub-blocks of layer 1, i.e., the sub-blocks 152a, 152c, and 152d plus the aforementioned smaller blocks. In other words, in that case the whole tree block would have side information assigned thereto, with all sub-blocks of this tree block 150 sharing that side information. In the following description of inheritance, the following notation is used for describing the embodiments:
a. Reconstructed samples of the current leaf node: r
b. Reconstructed samples of neighboring leaves: r'
c. Predictor of the current leaf node: p
d. Residual of the current leaf node: Res
e. Reconstructed residual of the current leaf node: RecRes
f. Scaling and inverse transform: SIT
g. Sharing flag: f
As a first example of inheritance, the intra-prediction signaling at internal nodes can be described. To be more precise, it is described how to signal intra-prediction modes at internal nodes of a tree-based partitioning of blocks for the purpose of prediction. By traversing the tree from the root node down to the leaf nodes, internal nodes (including the root node) can convey parts of the side information that will be exploited by their corresponding descendant nodes. To be more specific, a sharing flag f is transmitted for internal nodes with the following meaning:
• If f has a value of 1 ("true"), all descendant nodes of the given internal node share the same intra-prediction mode. In addition to the sharing flag f with a value of 1, the internal node also signals the intra-prediction mode parameter to be used for all descendant nodes. Consequently, all subsequent descendant nodes carry neither any prediction mode information nor any sharing flags. For the reconstruction of all related leaf nodes, the decoder applies the intra-prediction mode of the corresponding internal node.
• If f has a value of 0 ("false"), the descendant nodes of the corresponding internal node do not share the same intra-prediction mode, and each descendant node that is an internal node carries a separate sharing flag.
Fig. 12c illustrates the intra-prediction signaling at internal nodes as described above. The internal node in layer 1 conveys the sharing flag and the side information, which is given by the intra-prediction mode information, and the descendant nodes do not carry any side information. As a second example of inheritance, the inter-prediction refinement can be described. To be more precise, it is described how to signal side information of inter-prediction modes at internal nodes of a tree-based block partitioning for the purpose of refining motion parameters, such as those given by motion vectors. By traversing the tree from the root node down to the leaf nodes, internal nodes (including the root node) can convey parts of the side information that will be refined by their corresponding descendant nodes. To be more specific, a sharing flag f is transmitted for internal nodes with the following meaning:
• If f has a value of 1 ("true"), all descendant nodes of the given internal node share the same motion vector reference. In addition to the sharing flag f with a value of 1, the internal node also signals the motion vector and the reference index.
Consequently, all subsequent descendant nodes carry no further sharing flags, but they may carry a refinement of this inherited motion vector reference. For the reconstruction of all related leaf nodes, the decoder adds the motion vector refinement at the given leaf node to the inherited motion vector reference belonging to its corresponding internal parent node having a sharing flag f with a value of 1. This means that the motion vector refinement at a given leaf node is the difference between the actual motion vector to be applied for motion-compensated prediction at this leaf node and the motion vector reference of its corresponding internal parent node.
• If f has a value of 0 ("false"), the descendant nodes of the corresponding internal node do not necessarily share the same inter-prediction mode, no refinement of the motion parameters is performed at the descendant nodes using the motion parameters from the corresponding internal node, and each descendant node that is an internal node carries a separate sharing flag.
Fig. 12d illustrates the motion parameter refinement as described above. The internal node in layer 1 conveys the sharing flag and the side information. The descendant nodes that are leaf nodes carry only the motion parameter refinements and, for example, the internal descendant node in layer 2 carries no side information. Reference is now made to fig. 13. Fig. 13 shows a flow diagram illustrating the mode of operation of a decoder, such as the decoder of fig. 2, in reconstructing an array of information samples representing a spatially sampled information signal, which is subdivided by multitree subdivision into leaf regions of different sizes, from a data stream. As described above, each leaf region has associated therewith a hierarchy level out of a sequence of hierarchy levels of the multitree subdivision. For example, all the blocks shown in fig. 12b are leaf regions. Leaf region 156c, for example, is associated with hierarchy layer 4 (or level 3). Each leaf region has coding parameters associated therewith. Examples of these coding parameters have been described above. The coding parameters are, for each leaf region, represented by a respective set of syntax elements. Each syntax element is of a respective syntax element type out of a set of syntax element types. Such a syntax element type is, for example, a prediction mode, a motion vector component, an indication of an intra-prediction mode, or the like. According to fig. 13, the decoder performs the following steps. In step 550, inheritance information is extracted from the data stream. In the case of fig. 2, the extractor 102 is responsible for step 550. The inheritance information indicates whether or not inheritance is used for the current array of information samples. The following description will reveal that there are several possibilities for the inheritance information, such as, among others, the sharing flag f and the signaling of a multitree structure divided into a primary and a secondary part. The array of information samples may already be a sub-part of a picture, such as a tree block, namely the tree block 150 of fig. 12b, for example. Thus, the inheritance information indicates whether or not inheritance is used for the specific tree block 150. Such inheritance information may be inserted into the data stream for all tree blocks of the prediction subdivision, for example.
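Both inheritance examples can be sketched in one place; the tree is modelled as nested dicts, and read_flag(), read_intra_mode(), read_mv(), and read_mvd() are illustrative stand-ins for entropy decoding, not interfaces taken from the text.

```python
def decode_intra_modes(node, read_flag, read_intra_mode, inherited=None):
    """First example: intra-prediction mode inheritance via sharing flag f."""
    if node["is_leaf"]:
        # a leaf either inherits the shared mode or carries its own
        node["intra_mode"] = (inherited if inherited is not None
                              else read_intra_mode())
        return
    if inherited is None and read_flag():    # sharing flag f == 1
        inherited = read_intra_mode()        # one mode for all descendants
    # below a sharing node, neither modes nor sharing flags are transmitted
    for child in node["children"]:
        decode_intra_modes(child, read_flag, read_intra_mode, inherited)

def decode_motion(node, read_flag, read_mv, read_mvd, inherited=None):
    """Second example: motion vector reference plus per-leaf refinement."""
    if node["is_leaf"]:
        if inherited is not None:
            (ref_x, ref_y), ref_idx = inherited
            dx, dy = read_mvd()                    # refinement at the leaf:
            node["mv"] = (ref_x + dx, ref_y + dy)  # actual = reference + diff
            node["ref_idx"] = ref_idx
        else:
            node["mv"], node["ref_idx"] = read_mv()
        return
    if inherited is None and read_flag():    # sharing flag f == 1
        inherited = read_mv()                # motion vector reference + index
    for child in node["children"]:
        decode_motion(child, read_flag, read_mv, read_mvd, inherited)
```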
Reference is now made to fig. 13. Fig. 13 shows a flow diagram illustrating the mode of operation of a decoder, such as the decoder of fig. 2, in reconstructing an array of information samples representing an exemplary spatial information signal, which is subdivided into leaf regions of different sizes by multitree subdivision, from a data stream. As described above, each of the leaf regions has associated with it a hierarchy level out of a sequence of hierarchy levels of the multitree subdivision. For example, all blocks shown in fig. 12b are leaf regions. Leaf region 156c, for example, is associated with hierarchy layer 4 (or level 3). Each leaf region has coding parameters associated with it. Examples of these coding parameters were described above. The coding parameters are, for each leaf region, represented by a respective set of syntax elements. Each syntax element is of a respective syntax element type out of a set of syntax element types. Such a syntax element type is, for example, a prediction mode, a motion vector component, an indication of an intra-prediction mode, or the like. According to fig. 13, the decoder performs the following steps. In step 550, inheritance information is extracted from the data stream. In the case of fig. 2, the extractor 102 is responsible for step 550. The inheritance information indicates whether or not inheritance is used for the current array of information samples. The following description will reveal that there are several possibilities for the inheritance information, such as, among others, the share flag f and the signaling of a multitree structure divided into a primary part and a secondary part. The array of information samples may already be a sub-part of an image, such as a tree block, namely the tree block 150 of fig. 12b, for example. Thus, the inheritance information indicates whether or not inheritance is used for the specific tree block 150. Such inheritance information may be inserted into the data stream for all tree blocks of the prediction subdivision, for example.
Furthermore, the inheritance information indicates, if inheritance is indicated to be used, at least one inheritance region of the array of information samples, which is composed of a set of leaf regions and corresponds to a hierarchy level of the sequence of hierarchy levels of the multitree subdivision that is lower than each of the hierarchy levels with which the set of leaf regions is associated. In other words, the inheritance information indicates whether or not inheritance is to be used for the sample array, such as tree block 150. If so, it denotes at least one inheritance region or subregion of this tree block 150 within which the leaf regions share coding parameters. Thus, the inheritance region may not be a leaf region. In the example of fig. 12b, this inheritance region may, for example, be the region formed by subblocks 156a to 156d. Alternatively, the inheritance region may be larger and may additionally also comprise subblocks 154a, 154b and 154d, and even alternatively the inheritance region may be the tree block 150 itself, with all of its leaf blocks sharing the coding parameters associated with that inheritance region. It should be noted, however, that more than one inheritance region may be defined within one sample array or tree block 150, respectively. Imagine, for example, that the subblock 152c in the lower left-hand corner had also been split into smaller blocks. In that case, subblock 152c could also form an inheritance region.
In step 552, the inheritance information is checked as to whether inheritance is used or not. If so, the process of fig. 13 continues with step 554, where an inheritance subset including at least one syntax element of a predetermined syntax element type is extracted from the data stream per inheritance region. In the following step 556, this inheritance subset is then copied into, or used as a prediction for, a corresponding inheritance subset of syntax elements within the set of syntax elements representing the coding parameters associated with the set of leaf regions of which the at least one inheritance region is composed. In other words, for each inheritance region indicated in the inheritance information, the data stream comprises an inheritance subset of syntax elements. In yet other words, the inheritance pertains to at least one certain syntax element type or syntax element category that is available for inheritance. For example, the prediction mode, the inter-prediction mode or the intra-prediction mode syntax element may be subject to inheritance. For example, the inheritance subset contained within the data stream for the inheritance region may comprise an inter-prediction mode syntax element. The inheritance subset may also comprise further syntax elements whose syntax element types depend on the value of the aforementioned fixed syntax element type associated with the inheritance scheme. For example, in case the inter-prediction mode is a fixed component of the inheritance subset, the syntax elements defining the motion compensation, such as the motion vector components, may or may not be included in the inheritance subset by syntax. Imagine, for example, that the upper right quarter of the tree block 150, i.e., subblock 152b, was the inheritance region; then either the inter-prediction mode alone could be indicated for this inheritance region, or the inter-prediction mode along with motion vectors and motion vector indices.
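Steps 552 to 556 can be condensed into a small decision routine. The sketch below is illustrative only: the text leaves the exact syntax open, so the read_inheritance_info and read_inheritance_subset accessors, the region objects and the copy-versus-predict switch are all assumptions introduced here.

```python
def apply_inheritance(stream, leaf_regions):
    """Steps 550-556: distribute inherited syntax elements to leaf regions."""
    info = stream.read_inheritance_info()                 # step 550
    if not info.inheritance_used:                         # step 552
        return
    for region in info.inheritance_regions:
        subset = stream.read_inheritance_subset(region)   # step 554
        for leaf in leaf_regions:
            if not region.contains(leaf):
                continue
            for elem_type, value in subset.items():       # step 556
                if info.use_as_prediction:
                    # Prediction variant: a per-leaf residual refines the value.
                    leaf.params[elem_type] = value + stream.read_residual(leaf, elem_type)
                else:
                    # Copy variant: the inherited value is adopted as-is.
                    leaf.params[elem_type] = value
```

Whether the inherited subset is copied or used as a predictor is exactly the distinction drawn in step 556; in the prediction case, the residuals transmitted for the individual leaf blocks, mentioned next, supply the per-leaf correction.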
All syntax elements contained in the inheritance subset are copied into, or used as a prediction for, the respective coding parameters of the leaf blocks within that inheritance region, i.e., the leaf blocks 154a, b, d and 156a to 156d. In case prediction is used, the residuals are transmitted for the individual leaf blocks.
One possibility for transmitting the inheritance information for the tree block 150 is the aforementioned transmission of a share flag f. The extraction of the inheritance information in step 550 may, in this case, proceed as follows. In particular, the decoder may be configured to extract and check, for non-leaf regions corresponding to any of an inheritance set of at least one hierarchy level of the multitree subdivision, using a hierarchy level order from the lowest hierarchy level to the highest, the share flag f from the data stream, as to whether the respective inheritance or share flag prescribes inheritance or not. For example, the inheritance set of hierarchy levels may be formed by hierarchy layers 1 to 3 of fig. 12a. Thus, any node of the subtree structure that is not a leaf node and lies within any of layers 1 to 3 may have a share flag associated with it within the data stream. The decoder extracts these share flags in order from layer 1 to layer 3, such as in a depth-first traversal order. As soon as one of the share flags equals 1, the decoder knows that the leaf blocks contained in the corresponding inheritance region share the inheritance subset, which is subsequently extracted in step 554. For the descendant nodes of the current node, a check of the share flags is no longer necessary. In other words, the share flags for these descendant nodes are not transmitted within the data stream, since it is evident that the area of these nodes already belongs to the inheritance region within which the inheritance subset of syntax elements is shared.
The share flags f can be interleaved with the aforementioned bits signaling the quadtree subdivision. For example, an interleaved bit sequence including both the subdivision flags and the share flags may be: 10001101(0000)000, which is the same subdivision information as illustrated in fig. 6a, with two interleaved share flags, which are highlighted as underlined, indicating that in fig. 3c all subblocks in the lower left-hand corner of tree block 150 share their coding parameters. A parsing sketch for this interleaving is given after this paragraph's discussion below.
Another way of defining the inheritance information indicating the inheritance region would be the use of two subdivisions defined in a manner subordinate to each other, as explained above with respect to the prediction and residual subdivisions, respectively. Generally speaking, the leaf blocks of the primary subdivision could form the inheritance regions defining the regions within which the inheritance subsets of syntax elements are shared, while the subordinate subdivision defines the blocks within these inheritance regions for which the inheritance subset of syntax elements is copied or used as a prediction. Consider, for example, the residual tree as an extension of the prediction tree. Further, consider the case where prediction blocks can be further divided into smaller blocks for the purpose of residual coding. For each prediction block corresponding to a leaf node of the prediction-related quadtree, the corresponding subdivision for residual coding is determined by one or more subordinate quadtree(s).
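Returning to the interleaved signaling, the following sketch parses subdivision flags and share flags in one depth-first pass, reusing the toy BitReader from the earlier sketch. It is an assumption-laden illustration: the exact interleaving order is only exemplified by the bit string above, and the restriction of share flags to layers 1 to 3 follows the example given for fig. 12a.

```python
def parse_tree(reader, level, max_level, sharing=False):
    """Depth-first parse of interleaved subdivision flags and share flags f."""
    split = reader.read_flag() == 1 if level < max_level else False
    if not split:
        return {"leaf": True, "shared": sharing}
    share = False
    if not sharing and 1 <= level <= 3:
        # Internal nodes in the inheritance set of hierarchy layers carry f;
        # below a node with f == 1 no further share flags are transmitted.
        share = reader.read_flag() == 1
    children = [parse_tree(reader, level + 1, max_level, sharing or share)
                for _ in range(4)]                 # quadtree: four children
    return {"leaf": False, "share": share, "children": children}
```

Fed a sequence in the spirit of 10001101(0000)000, such a parse would mark one subtree as shared and consume no further share flags inside it, matching the rule stated above.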
In this case, instead of using any prediction signaling at internal nodes, the residual tree can be interpreted in such a way that it also specifies a refinement of the prediction tree, in the sense of using a constant prediction mode (signaled by the corresponding leaf node of the prediction-related tree), but with refined reference samples. The following example illustrates this case. For example, fig. 14a shows a quadtree partitioning for intra-prediction, with the neighboring reference samples highlighted for one specific leaf node of the primary subdivision, while fig. 14b shows the residual quadtree subdivision for the same prediction leaf node with refined reference samples. All subblocks shown in fig. 14b share the same intra-prediction parameters contained within the data stream for the respective leaf block highlighted in fig. 14a. Thus, fig. 14a shows an example of the conventional quadtree partitioning for intra-prediction, where the reference samples of one specific leaf node are depicted. In our preferred embodiment, however, a separate intra-prediction signal is calculated for each leaf node in the residual tree using already reconstructed samples of neighboring leaf nodes in the residual tree, for example, as indicated by the gray shaded stripes in fig. 14b. Then, the reconstructed signal of a given residual leaf node is obtained in the usual way by adding the quantized residual signal to this prediction signal. This reconstructed signal is then used as a reference signal for the following prediction process. Note that the decoding order for prediction is the same as the residual decoding order.
In the decoding process, as shown in fig. 15, for each residual leaf node the prediction signal p is calculated according to the actual intra-prediction mode (as indicated by the prediction-related quadtree leaf node) using the reference samples r'. After the SIT process,
RecRes = SIT(Res)
the reconstructed signal r is calculated and stored for the next prediction calculation process:
r = RecRes + p
The decoding order for prediction is the same as the residual decoding order, which is illustrated in fig. 16. Each residual leaf node is decoded as described in the previous paragraph. The reconstructed signal r is stored in a buffer memory, as shown in fig. 16. From this memory, the reference samples r' are taken for the next prediction and decoding process.
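The per-leaf decoding loop r = SIT(Res) + p can be written out directly in the notation introduced above. The sketch below is a deliberately simplified stand-in: a plain DC-style prediction from the top and left reconstructed neighbors replaces the signaled intra-prediction mode, and SIT is reduced to inverse scaling, so only the data flow (predict, inverse-transform, reconstruct, store) reflects the text.

```python
import numpy as np

def sit(quantized_residual, qstep=1.0):
    """Stand-in for SIT: inverse scaling only (a real codec also inverse-transforms)."""
    return quantized_residual * qstep

def decode_residual_leaves(leaves, recon, qstep=1.0):
    """Decode residual leaves in prediction order; `recon` is the buffer of fig. 16.

    Each leaf is a (y, x, size, quantized_residual) tuple; the decoding order
    equals the prediction order, so the neighbors used for p are already
    reconstructed when a leaf is reached.
    """
    for y, x, size, q_res in leaves:
        top = recon[y - 1, x:x + size] if y > 0 else np.array([128.0])
        left = recon[y:y + size, x - 1] if x > 0 else np.array([128.0])
        p = np.concatenate([top, left]).mean()        # r': refined reference samples
        rec_res = sit(q_res, qstep)                   # RecRes = SIT(Res)
        recon[y:y + size, x:x + size] = rec_res + p   # r = RecRes + p, stored as reference
```

Because the buffer recon is updated leaf by leaf, each subsequent leaf predicts from samples that include the just-reconstructed ones, which is precisely the refinement of the reference samples that figs. 14b to 16 describe.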
After having described specific embodiments with respect to figs. 1 to 16, which combined distinct subsets of the aspects outlined above, additional embodiments of the present application are now described which focus on certain aspects already described above, but which represent generalizations of some of the embodiments described above. In particular, the embodiments described above with respect to the structures of figs. 1 and 2 combined many of the aspects of the present application, aspects which would also be advantageous when used in other applications or other coding fields. As frequently mentioned during the above discussion, the multitree subdivision, for example, can be used without merging and/or without inter-plane adoption/prediction and/or without inheritance. For example, the transmission of the maximum block size, the use of the depth-first traversal order, the adaptation of the context depending on the hierarchy level of the respective subdivision flag, and the transmission of the maximum hierarchy level within the bit stream in order to save side information bitrate: all these aspects are advantageous independently of each other. This is also true when considering the inter-plane adoption/prediction scheme: inter-plane adoption/prediction is advantageously independent of the exact way an image is divided into simply connected regions and is advantageously independent of the use of the merging and/or inheritance schemes. The same applies to the advantages involved with merging and with inheritance. Thus, the embodiments described in the following generalize the aforementioned embodiments with regard to aspects relating to inter-plane adoption/prediction. As the following embodiments represent generalizations of the above-described embodiments, many of the above-described details can be considered combinable with the embodiments described below.
Fig. 17 shows modules of a decoder for decoding a data stream representing different spatially sampled information components of an image of a scene in planes, each plane comprising an array of information samples. The decoder may correspond to the one shown in fig. 2. In particular, a module 700 is responsible for reconstructing each array 502-506 of information samples by processing the payload data, such as residual data or spectral decomposition data, associated with the simply connected regions into which each array 502-506 of information samples is subdivided, in a manner prescribed by coding parameters associated with the simply connected regions, such as prediction parameters. This module is, for example, embodied by all blocks other than block 102 in the case of the decoder of fig. 2. However, the decoder of fig. 17 need not be a hybrid decoder. Inter- and/or intra-prediction need not be used.
The same applies to transform coding; that is, the residual may be coded in the spatial domain rather than by way of a two-dimensional spectral decomposition. A further module 702 is responsible for obtaining the coding parameters associated with the simply connected regions of a first array, such as array 506, of the arrays of information samples from the data stream. Module 702 thereby establishes the manner prescribed for module 700 to carry out its task. In the case of fig. 2, the extractor 102 assumes responsibility for the task of module 702. It should be noted that array 506 may itself be a secondary array; the coding parameters associated with it may, in turn, have been obtained by inter-plane adoption/prediction.
A further module 704 is for deriving inter-plane interchange information for the simply connected regions of a second array 504 of the arrays of information samples from the data stream. In the case of fig. 2, the extractor 102 assumes responsibility for the task of module 704.
A further module 706 is for deciding, depending on the inter-plane interchange information for the simply connected regions of the second array, for each simply connected region or each proper subset of the simply connected regions of the second array, which of the following modules 708 and 710 is active. In the case of fig. 2, the extractor 102 cooperates with the subdivider 104 in order to perform the task of module 706: the subdivider 104 controls the order in which the simply connected regions are traversed, i.e., attributes the inter-plane interchange information to the respective simply connected regions, while the extractor 102 performs the actual extraction. In the more detailed embodiments above, the inter-plane interchange information defined, for each simply connected region individually, whether inter-plane adoption/prediction is to be performed. However, this need not be the case. It is also possible for the decision to be made in units of proper subsets of the simply connected regions. For example, the inter-plane interchange information may define one or more larger regions, each composed of one or a plurality of neighboring simply connected regions, and the decision on inter-plane adoption/prediction is then made for each of these larger regions.
Module 708 is for deriving the coding parameters for the respective simply connected region, or the proper subset of the simply connected regions, of the second array 504, at least partially, from the coding parameters of a locally corresponding simply connected region of the first array 506, a task performed, in the case of fig. 2, by the extractor in cooperation with the subdivider 104, which is responsible for deriving the co-location relationship, and for decoding the payload data associated with the respective simply connected region, or the proper subset of the simply connected regions, of the second array in the manner prescribed by the coding parameters thus derived, this latter task, in turn, being performed by the other modules of fig. 2, i.e., 106-114. As an alternative to module 708, module 710 is for deriving the coding parameters for the respective simply connected region, or the proper subset of the simply connected regions, of the second array 504 from the data stream while ignoring the coding parameters of the locally corresponding simply connected region of the first array 506, a task for which the extractor 102 of fig. 2 assumes responsibility, and for decoding the payload data associated with the respective simply connected region, or the proper subset of the simply connected regions, of the second array in the manner prescribed by the coding parameters thus derived from the data stream, this latter task, in turn, being performed by the other modules of fig. 2, i.e., 106-114, under the control of the subdivider 104 which, as before, is responsible for managing the neighborhood and co-location relationships between the simply connected regions.
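The branch between modules 708 and 710 can be pictured as follows. This is a sketch only: the per-region flag, the read_coding_params and decode_payload accessors and the dictionary-valued parameters are assumptions, and the colocated helper is of the kind sketched after the discussion of fig. 18 below.

```python
def decode_secondary_regions(stream, secondary_regions, primary_array):
    """Module 706 routes each region to module 708 (adopt) or 710 (parse)."""
    for region in secondary_regions:
        use_interplane = stream.read_flag() == 1      # inter-plane interchange info
        if use_interplane:
            # Module 708: derive the parameters, at least partially, from the
            # locally corresponding region of the first (primary) array.
            params = dict(colocated(region, primary_array).params)
        else:
            # Module 710: ignore the primary plane and parse the parameters
            # for this region explicitly from the data stream.
            params = stream.read_coding_params(region)
        region.params = params
        # Module 700: decode the payload in the manner the parameters prescribe.
        stream.decode_payload(region, params)
```

In the prediction variant of module 708, the copied parameters would additionally be refined by parameter residuals from the stream; the plain copy shown here corresponds to pure adoption.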
As described above in relation to figs. 1 to 16, the arrays of information samples do not necessarily represent a video picture or a still picture or color components thereof. The sample arrays could also represent other two-dimensionally sampled physical data, such as a depth map or a transparency map of a scene. The payload data associated with each of the plurality of simply connected regions may, as discussed above, comprise residual data in the spatial domain or in a transform domain, such as transform coefficients and a significance map identifying the positions of the significant transform coefficients within a transform block corresponding to a residual block. Generally speaking, the payload data may be data spatially describing its associated simply connected region, either in the spatial domain or in a spectral domain, and either directly or as a residual relative to some prediction thereof, for example. The coding parameters, in turn, are not restricted to prediction parameters. The coding parameters could indicate a transform used to transform the payload data, or could define a filter to be used in reconstructing the individual simply connected regions when reconstructing the array of information samples.
As described above, the simply connected regions into which the array of information samples is subdivided may stem from a multitree subdivision and may be of square or rectangular shape. Furthermore, the embodiments specifically described for subdividing a sample array are merely specific embodiments, and other subdivisions may be used as well. Some possibilities are shown in figs. 18a-c. Fig. 18a, for example, shows the subdivision of a sample array 606 into a regular two-dimensional arrangement of non-overlapping tree blocks 608 abutting each other, some of which are subdivided in accordance with a multitree structure into subblocks 610 of different sizes. As mentioned above, although a quadtree subdivision is illustrated in fig. 18a, a partitioning of each parent node into any other number of child nodes is also possible. Fig. 18b shows an embodiment according to which a sample array 606 is subdivided into subblocks of different sizes by applying a multitree subdivision directly onto the entire pixel array 606; that is, the entire pixel array 606 is treated as the tree block. Fig. 18c shows another embodiment. According to this embodiment, the sample array is structured as a regular two-dimensional arrangement of square or rectangular shaped macroblocks abutting each other, and each of these macroblocks 612 is individually associated with partitioning information according to which a macroblock 612 is either left unpartitioned or is partitioned into a regular two-dimensional arrangement of blocks of a size indicated by the partitioning information. As can be seen, all the subdivisions of figs. 18a-18c lead to a subdivision of the sample array 606 into simply connected regions which, in exemplary fashion in accordance with the embodiments of figs. 18a-18c, do not overlap.
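Whichever subdivision of figs. 18a-c is used, modules 708 and 710 rely on a co-location relationship between the planes. The sketch below illustrates one plausible derivation; it is an assumption rather than the prescribed rule, with region_at standing for a lookup of the simply connected region covering a sample position, and it covers differing plane resolutions, such as the 2:1 ratio named in the claims, purely through the scale factors.

```python
def colocated_position(y, x, secondary_shape, primary_shape):
    """Scale a sample position of the secondary plane into the primary plane."""
    sy = primary_shape[0] / secondary_shape[0]   # e.g., 2.0 for a 2:1 ratio
    sx = primary_shape[1] / secondary_shape[1]
    return int(y * sy), int(x * sx)

def colocated(region, primary_array):
    """Find the primary-plane region covering the position co-located with the
    top-left sample of `region` (one plausible convention, not the only one)."""
    py, px = colocated_position(region.y, region.x,
                                region.plane_shape, primary_array.shape)
    return primary_array.region_at(py, px)
```

Anchoring co-location at the top-left sample is just one convention; a region center, for instance, would serve the illustration equally well.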
However, several alternatives are possible. For example, the blocks could overlap each other. The overlap may, however, be limited to such an extent that each block has a portion not overlapped by any neighboring block, or such that each sample of a block is overlapped by at most one of the blocks arranged in juxtaposition to the current block along a predetermined direction. The latter means that the left-hand and right-hand neighboring blocks may overlap the current block so as to fully cover it, but may not overlap each other, and the same applies to neighbors in the vertical and diagonal directions.
As a further alternative to fig. 17, the decision in module 706, and hence the granularity at which inter-plane adoption/prediction is performed, may be plane-wise. Thus, according to a further embodiment, more than two planes may be present, namely a primary plane and two or more secondary planes, and module 706 decides, and the inter-plane interchange information within the data stream indicates, for each secondary plane separately, whether inter-plane adoption/prediction applies to the respective plane or not. If so, further handling may be performed per simply connected region, as described above, wherein, however, the region-wise inter-plane interchange information only exists, and is processed, within those planes so indicated by the plane-wise inter-plane interchange information.
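At this coarser granularity, the interchange information shrinks to one decision per secondary plane. A trivial sketch, under the same hypothetical stream interface as before:

```python
def read_plane_level_decisions(stream, secondary_planes):
    """Plane-wise granularity: one inter-plane flag per secondary plane."""
    return {plane: stream.read_flag() == 1 for plane in secondary_planes}
```

Only for planes whose flag is set would the region-wise handling sketched earlier then be invoked.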
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or to a feature of a method step. Similarly, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be performed by (or with) a hardware apparatus, such as a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be performed by such an apparatus.
The inventive encoded/compressed signals can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet. Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be carried out using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a flash memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable. Some embodiments according to the invention comprise a data carrier having electronically readable control signals which are capable of cooperating with a programmable computer system such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine-readable carrier. Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier. In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer or a programmable logic device, configured for or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. In some embodiments, a programmable logic device (for example, a field-programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field-programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.
The above-described embodiments are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the appended patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.
Claims (22)
[0001] 1. Decoder for decoding a data stream encoded with a video that includes pictures of a scene represented by multiple arrays of information samples, characterized in that it comprises: an extractor configured to extract, from the data stream, a first set of encoding parameters which includes a first intracoding parameter associated with a first encoding block in a first array of information samples representing a first color component of the video, wherein the first set of encoding parameters is to be used to reconstruct the first encoding block in an intracoding mode, to extract, from the data stream, inter-plane interchange information associated with a second encoding block in a second array of information samples representing a second color component of the video, wherein the inter-plane interchange information signals whether a second intracoding parameter of a second set of encoding parameters used to reconstruct the second encoding block in the intracoding mode is to be derived based on the first intracoding parameter, and, responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is not to be derived from the first intracoding parameter, to extract, from the data stream, the second set of encoding parameters which includes the second intracoding parameter; and a predictor configured to copy, responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is to be derived from the first intracoding parameter, the first intracoding parameter as the second intracoding parameter such that the first and second intracoding parameters are equal, wherein a spatial resolution of the first array is twice a spatial resolution of the second array, the first intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the first array and the second intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the second array, and to predict the second encoding block based on the second set of encoding parameters which includes the second intracoding parameter to generate a predicted second encoding block based on the intracoding mode, wherein the first and second arrays of information samples represent different types of information of the spatially sampled scene.
[0002] 2. Decoder according to claim 1, characterized in that the extractor is further configured to extract, from the data stream in accordance with the second set of encoding parameters, residual information associated with the second encoding block.
[0003] 3. Decoder according to claim 2, characterized in that it further comprises a reconstructor configured to reconstruct the second encoding block based on the predicted second encoding block and the residual information associated with the second encoding block.
[0004] 4. Decoder according to claim 1, characterized in that the different types of spatially sampled information include brightness, color, depth or transparency information of the scene.
[0005] 5. Decoder according to claim 1, characterized in that the encoding parameters in the first and second sets of encoding parameters include at least one of motion parameters for inter-prediction and subdivision information specifying how an encoding block is to be subdivided.
[0006] 6. Method for decoding a data stream encoded with a video that includes pictures of a scene represented by multiple arrays of information samples, characterized in that it comprises: extracting, from the data stream, a first set of encoding parameters which includes a first intracoding parameter associated with a first encoding block in a first array of information samples representing a first color component of the video, wherein the first set of encoding parameters is to be used to reconstruct the first encoding block in an intracoding mode; extracting, from the data stream, inter-plane interchange information associated with a second encoding block in a second array of information samples representing a second color component of the video, wherein the inter-plane interchange information signals whether a second intracoding parameter of a second set of encoding parameters used to reconstruct the second encoding block in the intracoding mode is to be derived based on the first intracoding parameter; responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is not to be derived from the first intracoding parameter, extracting, from the data stream, the second set of encoding parameters which includes the second intracoding parameter; copying, responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is to be derived from the first intracoding parameter, the first intracoding parameter as the second intracoding parameter such that the first and second intracoding parameters are equal, wherein a spatial resolution of the first array is twice a spatial resolution of the second array, the first intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the first array and the second intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the second array; and predicting the second encoding block based on the second set of encoding parameters which includes the second intracoding parameter to generate a predicted second encoding block based on the intracoding mode, wherein the first and second arrays of information samples represent different types of information of the spatially sampled scene.
[0007] 7. Method according to claim 6, characterized in that it further comprises extracting, from the data stream in accordance with the second set of encoding parameters, residual information associated with the second encoding block.
[0008] 8. Method according to claim 7, characterized in that it further comprises reconstructing the second encoding block based on the predicted second encoding block and the residual information associated with the second encoding block.
[0009] 9. Method according to claim 6, characterized in that the different types of spatially sampled information include brightness, color, depth or transparency information of the scene.
[0010] 10. Method according to claim 6, characterized in that the encoding parameters in the first and second sets of encoding parameters include at least one of motion parameters for inter-prediction and subdivision information specifying how an encoding block is to be subdivided.
[0011] 11. Encoder for encoding, into a data stream, a video that includes pictures of a scene represented by multiple arrays of information samples, characterized in that it comprises: a data stream inserter configured to insert, into the data stream, a first set of encoding parameters which includes a first intracoding parameter associated with a first encoding block in a first array of information samples representing a first color component of the video, wherein the first set of encoding parameters is to be used to reconstruct the first encoding block in an intracoding mode, to insert, into the data stream, inter-plane interchange information associated with a second encoding block in a second array of information samples representing a second color component of the video, wherein the inter-plane interchange information signals whether a second intracoding parameter of a second set of encoding parameters used to reconstruct the second encoding block in the intracoding mode is to be derived based on the first intracoding parameter, and, responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is not to be derived from the first intracoding parameter, to insert, into the data stream, the second set of encoding parameters which includes the second intracoding parameter; and a predictor configured to copy, responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is to be derived from the first intracoding parameter, the first intracoding parameter as the second intracoding parameter such that the first and second intracoding parameters are equal, wherein a spatial resolution of the first array is twice a spatial resolution of the second array, the first intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the first array and the second intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the second array, and to predict the second encoding block based on the second set of encoding parameters which includes the second intracoding parameter to generate a predicted second encoding block based on the intracoding mode, wherein the first and second arrays of information samples represent different types of information of the spatially sampled scene.
[0012] 12. Encoder according to claim 11, characterized in that the data stream inserter is further configured to insert, into the data stream in accordance with the second set of encoding parameters, residual information associated with the second encoding block.
[0013] 13. Encoder according to claim 12, characterized in that it further comprises a reconstructor configured to reconstruct the second encoding block based on the predicted second encoding block and the residual information associated with the second encoding block.
[0014] 14. Encoder according to claim 11, characterized in that the different types of spatially sampled information include brightness, color, depth or transparency information of the scene.
[0015] 15. Encoder according to claim 11, characterized in that the encoding parameters in the first and second sets of encoding parameters include at least one of motion parameters for inter-prediction and subdivision information specifying how an encoding block is to be subdivided.
[0016] 16. Method for encoding, into a data stream, a video that includes pictures of a scene represented by multiple arrays of information samples, characterized in that it comprises: inserting, into the data stream, a first set of encoding parameters which includes a first intracoding parameter associated with a first encoding block in a first array of information samples representing a first color component of the video, wherein the first set of encoding parameters is to be used to reconstruct the first encoding block in an intracoding mode; inserting, into the data stream, inter-plane interchange information associated with a second encoding block in a second array of information samples representing a second color component of the video, wherein the inter-plane interchange information signals whether a second intracoding parameter of a second set of encoding parameters used to reconstruct the second encoding block in the intracoding mode is to be derived based on the first intracoding parameter; responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is not to be derived from the first intracoding parameter, inserting, into the data stream, the second set of encoding parameters which includes the second intracoding parameter; copying, responsive to a determination based on the inter-plane interchange information that the second intracoding parameter is to be derived from the first intracoding parameter, the first intracoding parameter as the second intracoding parameter such that the first and second intracoding parameters are equal, wherein a spatial resolution of the first array is twice a spatial resolution of the second array, the first intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the first array and the second intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the second array; and predicting the second encoding block based on the second set of encoding parameters which includes the second intracoding parameter to generate a predicted second encoding block based on the intracoding mode, wherein the first and second arrays of information samples represent different types of information of the spatially sampled scene.
[0017] 17. Method according to claim 16, characterized in that it further comprises inserting, into the data stream in accordance with the second set of encoding parameters, the residuals associated with the second encoding block.
[0018] 18. Method according to claim 17, characterized in that it further comprises reconstructing the second encoding block based on the predicted second encoding block and the residual information associated with the second encoding block.
[0019] 19. Method according to claim 16, characterized in that the different types of spatially sampled information include brightness, color, depth or transparency information of the scene.
[0020] 20. Method according to claim 16, characterized in that the encoding parameters in the first and second sets of encoding parameters include at least one of motion parameters for inter-prediction and subdivision information specifying how an encoding block is to be subdivided.
[0021] 21. Data stream related to a video that includes pictures of a scene represented by multiple arrays of information samples, the data stream characterized in that it comprises: a first set of encoding parameters which includes a first intracoding parameter associated with a first encoding block in a first array of information samples representing a first color component of the video, wherein the first set of encoding parameters is to be used to reconstruct the first encoding block in an intracoding mode; inter-plane interchange information associated with a second encoding block in a second array of information samples representing a second color component of the video, wherein the inter-plane interchange information signals whether a second intracoding parameter of a second set of encoding parameters used to reconstruct the second encoding block in the intracoding mode is to be derived based on the first intracoding parameter; and the second set of encoding parameters, which includes the second intracoding parameter associated with the second encoding block, when the inter-plane interchange information signals that the second intracoding parameter is not to be derived from the first intracoding parameter, wherein, if the inter-plane interchange information signals that the second intracoding parameter is to be derived from the first intracoding parameter, the first intracoding parameter is copied as the second intracoding parameter such that the first and second intracoding parameters are equal, wherein a spatial resolution of the first array is twice a spatial resolution of the second array, the first intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the first array and the second intracoding parameter indicates an intra-prediction direction angle used in the intracoding mode in the second array, wherein the second encoding block is predicted based on the second set of encoding parameters which includes the second intracoding parameter to generate a predicted second encoding block on the basis of which residuals associated with the second encoding block are to be derived, and the first and second arrays of information samples represent different types of information of the spatially sampled scene.
[0022] 22. Data stream according to claim 21, characterized in that it further comprises the residuals associated with the second encoding block, inserted in accordance with the second set of encoding parameters.
video| US8540158B2|2007-12-12|2013-09-24|Yiwu Lei|Document verification using dynamic document identification framework| US20090154567A1|2007-12-13|2009-06-18|Shaw-Min Lei|In-loop fidelity enhancement for video compression| US20090165041A1|2007-12-21|2009-06-25|Penberthy John S|System and Method for Providing Interactive Content with Video Content| US8126054B2|2008-01-09|2012-02-28|Motorola Mobility, Inc.|Method and apparatus for highly scalable intraframe video coding| EP2232875A2|2008-01-11|2010-09-29|Thomson Licensing|Video and depth coding| US8155184B2|2008-01-16|2012-04-10|Sony Corporation|Video coding system using texture analysis and synthesis in a scalable coding framework| AT524927T|2008-01-21|2011-09-15|Ericsson Telefon Ab L M|PRESENTATION BASED IMAGE PROCESSING| EP2245596B1|2008-01-21|2017-07-12|Telefonaktiebolaget LM Ericsson |Prediction-based image processing| KR101291196B1|2008-01-25|2013-07-31|삼성전자주식회사|Video encoding method and apparatus, and video decoding method and apparatus| US8711948B2|2008-03-21|2014-04-29|Microsoft Corporation|Motion-compensated prediction of inter-layer residuals| US8179974B2|2008-05-02|2012-05-15|Microsoft Corporation|Multi-level representation of reordered transform coefficients| US20100220469A1|2008-05-23|2010-09-02|Altair Engineering, Inc.|D-shaped cross section l.e.d. based light| TWI373959B|2008-06-09|2012-10-01|Kun Shan University Of Technology|Wavelet codec with a function of adjustable image quality| KR101517768B1|2008-07-02|2015-05-06|삼성전자주식회사|Method and apparatus for encoding video and method and apparatus for decoding video| US8406307B2|2008-08-22|2013-03-26|Microsoft Corporation|Entropy coding/decoding of hierarchically organized data| US8750379B2|2008-09-11|2014-06-10|General Instrument Corporation|Method and apparatus for complexity-scalable motion estimation| JP5422168B2|2008-09-29|2014-02-19|株式会社日立製作所|Video encoding method and video decoding method| US8619856B2|2008-10-03|2013-12-31|Qualcomm Incorporated|Video coding with large macroblocks| US20100086031A1|2008-10-03|2010-04-08|Qualcomm Incorporated|Video coding with large macroblocks| US8503527B2|2008-10-03|2013-08-06|Qualcomm Incorporated|Video coding with large macroblocks| US8634456B2|2008-10-03|2014-01-21|Qualcomm Incorporated|Video coding with large macroblocks| CN101404774B|2008-11-13|2010-06-23|四川虹微技术有限公司|Macro-block partition mode selection method in movement search| JP5001964B2|2009-02-18|2012-08-15|株式会社エヌ・ティ・ティ・ドコモ|Image coding apparatus, method and program, and image decoding apparatus, method and program| CN101493890B|2009-02-26|2011-05-11|上海交通大学|Dynamic vision caution region extracting method based on characteristic| US8810562B2|2009-05-19|2014-08-19|Advanced Micro Devices, Inc.|Hierarchical lossless compression| US8395708B2|2009-07-21|2013-03-12|Qualcomm Incorporated|Method and system for detection and enhancement of video images| KR101456498B1|2009-08-14|2014-10-31|삼성전자주식회사|Method and apparatus for video encoding considering scanning order of coding units with hierarchical structure, and method and apparatus for video decoding considering scanning order of coding units with hierarchical structure| EP2485490B1|2009-10-01|2015-09-30|SK Telecom Co., Ltd.|Method and apparatus for encoding/decoding image using split layer| KR101457418B1|2009-10-23|2014-11-04|삼성전자주식회사|Method and apparatus for video encoding and decoding dependent on hierarchical structure of coding unit| US8594200B2|2009-11-11|2013-11-26|Mediatek Inc.|Method of storing motion vector information and video 
decoding apparatus| JP5475409B2|2009-11-20|2014-04-16|三菱電機株式会社|Moving picture coding apparatus and moving picture coding method| WO2011063397A1|2009-11-23|2011-05-26|General Instrument Corporation|Depth coding as an additional channel to video sequence| US8315310B2|2010-01-08|2012-11-20|Research In Motion Limited|Method and device for motion vector prediction in video transcoding using full resolution residuals| US20110170608A1|2010-01-08|2011-07-14|Xun Shi|Method and device for video transcoding using quad-tree based mode selection| KR101847072B1|2010-04-05|2018-04-09|삼성전자주식회사|Method and apparatus for video encoding, and method and apparatus for video decoding| KR101750046B1|2010-04-05|2017-06-22|삼성전자주식회사|Method and apparatus for video encoding with in-loop filtering based on tree-structured data unit, method and apparatus for video decoding with the same| KR101529992B1|2010-04-05|2015-06-18|삼성전자주식회사|Method and apparatus for video encoding for compensating pixel value of pixel group, method and apparatus for video decoding for the same| US20110249743A1|2010-04-09|2011-10-13|Jie Zhao|Super-block for high performance video coding| CN106060558B|2010-04-13|2019-08-13|Ge视频压缩有限责任公司|Decoder, the method for rebuilding array, encoder, coding method| TWI678916B|2010-04-13|2019-12-01|美商Ge影像壓縮有限公司|Sample region merging| BR122020007923B1|2010-04-13|2021-08-03|Ge Video Compression, Llc|INTERPLANE PREDICTION| DK2559245T3|2010-04-13|2015-08-24|Ge Video Compression Llc|Video Coding using multitræsunderinddeling Images| KR20110135471A|2010-06-11|2011-12-19|휴맥스|Apparatuses and methods for encoding/decoding of video using block merging| KR102277273B1|2010-10-08|2021-07-15|지이 비디오 컴프레션, 엘엘씨|Picture coding supporting block partitioning and block merging| KR101527666B1|2010-11-04|2015-06-09|프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 
베.|Picture coding supporting block merging and skip mode| US20120170648A1|2011-01-05|2012-07-05|Qualcomm Incorporated|Frame splitting in video coding| PT3471415T|2011-06-16|2021-11-04|Ge Video Compression Llc|Entropy coding of motion vector differences| CN106660506B|2014-07-22|2019-08-16|奥托立夫开发公司|Side air bag device|CN101589625B|2006-10-25|2011-09-21|弗劳恩霍夫应用研究促进协会|Fraunhofer ges forschung| CN106060558B|2010-04-13|2019-08-13|Ge视频压缩有限责任公司|Decoder, the method for rebuilding array, encoder, coding method| TWI678916B|2010-04-13|2019-12-01|美商Ge影像壓縮有限公司|Sample region merging| BR122020007923B1|2010-04-13|2021-08-03|Ge Video Compression, Llc|INTERPLANE PREDICTION| DK2559245T3|2010-04-13|2015-08-24|Ge Video Compression Llc|Video Coding using multitræsunderinddeling Images| DK2858366T3|2010-07-09|2017-02-13|Samsung Electronics Co Ltd|Method of decoding video using block merge| US9532059B2|2010-10-05|2016-12-27|Google Technology Holdings LLC|Method and apparatus for spatial scalability for video coding| US20120082243A1|2010-10-05|2012-04-05|General Instrument Corporation|Method and Apparatus for Feature Based Video Coding| CN105847830B|2010-11-23|2019-07-12|Lg电子株式会社|Prediction technique between being executed by encoding apparatus and decoding apparatus| EP3554079A1|2011-01-07|2019-10-16|LG Electronics Inc.|Method for encoding video information, method of decoding video information and decoding apparatus for decoding video information| JP5982734B2|2011-03-11|2016-08-31|ソニー株式会社|Image processing apparatus and method| MX2013013508A|2011-06-23|2014-02-27|Panasonic Corp|Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device.| TWI581615B|2011-06-24|2017-05-01|Sun Patent Trust|A decoding method, a coding method, a decoding device, an encoding device, and a coding / decoding device| CN108632608A|2011-09-29|2018-10-09|夏普株式会社|Picture decoding apparatus, picture decoding method and picture coding device| EP2763415B1|2011-09-29|2020-04-15|Sharp Kabushiki Kaisha|Image decoding apparatus for decoding partition information, image decoding method and image encoding apparatus| CN109257596A|2011-11-11|2019-01-22|Ge视频压缩有限责任公司|Adaptive partition coding| EP2777286B1|2011-11-11|2017-01-04|GE Video Compression, LLC|Effective wedgelet partition coding| PT2777283T|2011-11-11|2018-04-18|Ge Video Compression Llc|Effective prediction using partition coding| EP3468184A1|2011-11-11|2019-04-10|GE Video Compression, LLC|Effective wedgelet partition coding using spatial prediction| GB2502047B|2012-04-04|2019-06-05|Snell Advanced Media Ltd|Video sequence processing| PL2842313T3|2012-04-13|2017-06-30|Ge Video Compression, Llc|Scalable data stream and network entity| US20150049806A1|2012-04-16|2015-02-19|Samsung Electronics Co., Ltd.|Method for multi-view video encoding based on tree structure encoding unit and apparatus for same, and method for multi-view video decoding based on tree structure encoding unit and apparatus for same| FR2992815A1|2012-06-27|2014-01-03|France Telecom|METHOD FOR ENCODING A CURRENT BLOCK OF A FIRST IMAGE COMPONENT IN RELATION TO A REFERENCE BLOCK OF AT LEAST ONE SECOND IMAGE COMPONENT, ENCODING DEVICE AND CORRESPONDING COMPUTER PROGRAM| WO2014001573A1|2012-06-29|2014-01-03|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Video data stream concept| FR2993084A1|2012-07-09|2014-01-10|France Telecom|VIDEO CODING METHOD BY PREDICTING CURRENT BLOCK PARTITIONING, DECODING METHOD, CODING AND DECODING DEVICES AND CORRESPONDING 
COMPUTER PROGRAMS| WO2014008951A1|2012-07-13|2014-01-16|Huawei Technologies Co., Ltd.|Apparatus for coding a bit stream representing a three-dimensional video| US9699450B2|2012-10-04|2017-07-04|Qualcomm Incorporated|Inter-view predicted motion vector for 3D video| JP5719401B2|2013-04-02|2015-05-20|日本電信電話株式会社|Block size determination method, video encoding device, and program| US9749627B2|2013-04-08|2017-08-29|Microsoft Technology Licensing, Llc|Control data for motion-constrained tile set| KR102127280B1|2013-04-08|2020-06-26|지이 비디오 컴프레션, 엘엘씨|Inter-component prediction| WO2014166119A1|2013-04-12|2014-10-16|Mediatek Inc.|Stereo compatibility high level syntax| US9716899B2|2013-06-27|2017-07-25|Qualcomm Incorporated|Depth oriented inter-view motion vector prediction| JP6315911B2|2013-07-09|2018-04-25|キヤノン株式会社|Image encoding device, image encoding method and program, image decoding device, image decoding method and program| US20150063455A1|2013-09-02|2015-03-05|Humax Holdings Co., Ltd.|Methods and apparatuses for predicting depth quadtree in three-dimensional video| EP3846469A1|2013-10-18|2021-07-07|GE Video Compression, LLC|Multi-component picture or video coding concept| US10368097B2|2014-01-07|2019-07-30|Nokia Technologies Oy|Apparatus, a method and a computer program product for coding and decoding chroma components of texture pictures for sample prediction of depth pictures| FR3030976B1|2014-12-22|2018-02-02|B<>Com|METHOD FOR ENCODING A DIGITAL IMAGE, DECODING METHOD, DEVICES AND COMPUTER PROGRAMS| CN105872539B|2015-02-08|2020-01-14|同济大学|Image encoding method and apparatus, and image decoding method and apparatus| CN106358042B|2015-07-17|2020-10-09|恩智浦美国有限公司|Parallel decoder using inter-prediction of video images| CN105120295B|2015-08-11|2018-05-18|北京航空航天大学|A kind of HEVC complexity control methods based on quadtree coding segmentation| EP3435673A4|2016-03-24|2019-12-25|Intellectual Discovery Co., Ltd.|Method and apparatus for encoding/decoding video signal| CN109565602A|2016-08-15|2019-04-02|诺基亚技术有限公司|Video coding and decoding| US10609423B2|2016-09-07|2020-03-31|Qualcomm Incorporated|Tree-type coding for video coding| US10110914B1|2016-09-15|2018-10-23|Google Llc|Locally adaptive warped motion compensation in video coding| WO2018061550A1|2016-09-28|2018-04-05|シャープ株式会社|Image decoding device and image coding device| EP3528498A4|2016-11-21|2019-08-21|Panasonic Intellectual Property Corporation of America|Coding device, decoding device, coding method, and decoding method| WO2018131986A1|2017-01-16|2018-07-19|세종대학교 산학협력단|Image encoding/decoding method and device| EP3358754A1|2017-02-02|2018-08-08|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Antenna array codebook with beamforming coefficients adapted to an arbitrary antenna response of the antenna array| JP6680260B2|2017-04-28|2020-04-15|株式会社Jvcケンウッド|IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE ENCODING PROGRAM, IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM| KR20190078890A|2017-12-27|2019-07-05|삼성전자주식회사|Method and apparatus for estimating plane based on grids| US11108841B2|2018-06-19|2021-08-31|At&T Intellectual Property I, L.P.|Apparatus, storage medium and method for heterogeneous segmentation of video streaming| US11265579B2|2018-08-01|2022-03-01|Comcast Cable Communications, Llc|Systems, methods, and apparatuses for video processing| CN110505486A|2019-08-23|2019-11-26|绍兴文理学院|A kind of encoding and decoding method of pair of high probability motion vectors mapping| 
CN110602502A|2019-08-23|2019-12-20|绍兴文理学院|Method for coding and decoding motion vector| WO2021061026A1|2019-09-25|2021-04-01|Huawei Technologies Co., Ltd.|Method and apparatus of simplified geometric merge mode for inter prediction| WO2021195569A1|2020-03-27|2021-09-30|Beijing Dajia Internet Information Technology Co., Ltd.|Methods and devices for prediction dependent residual scaling for video coding|
Legal status:
2019-01-15 | B06F | Objections, documents and/or translations needed after an examination request, according to [chapter 6.6 patent gazette]
2020-02-04 | B15K | Others concerning applications: alteration of classification | Free format text: the previous classifications were H04N 7/26, H04N 7/50; IPC: H04N 19/103 (2014.01), H04N 19/119 (2014.01), H04N
2020-02-04 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2020-02-27 | B25A | Requested transfer of rights approved | Owner name: GE VIDEO COMPRESSION, LLC (US)
2021-06-22 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-08-10 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: term of validity: 20 (twenty) years counted from 13/04/2010, subject to the legal conditions. Patent granted in accordance with ADI 5.529/DF, which determines the alteration of the term of grant.
Priority:
Application number | Filing date | Patent title
PCT/EP2010/054840 | WO2011127966A1 | 2010-04-13 | 2010-04-13 | Inter-plane prediction