Moving image encoding method, moving image encoding apparatus, moving image decoding method, moving image decoding apparatus, and moving image encoding/decoding apparatus
Patent Abstract:
MOVING IMAGE ENCODING METHOD, MOVING IMAGE ENCODING APPARATUS, MOVING IMAGE DECODING METHOD, MOVING IMAGE DECODING APPARATUS, AND MOVING IMAGE ENCODING/DECODING APPARATUS. The present invention relates to an image encoding method and an image decoding method capable of improving coding efficiency. The moving image encoding apparatus (100) includes a merging block candidate calculation unit (111) that (i) specifies merging block candidates for merge mode using colPic information, such as the motion vectors and reference image index values of neighboring blocks of a current block to be encoded and a motion vector and the like of a co-located block of the current block, which are stored in a colPic memory (112), and (ii) generates a combined merging block using the merging block candidates.
Publication number: BR112013023478B1
Application number: R112013023478-4
Filing date: 2012-02-28
Publication date: 2021-01-19
Inventors: Tashiyasu Sugio; Takahiro Nishi; Youji Shibahara; Hisao Sasai
Applicant: Sun Patent Trust
IPC main class:
Patent Description:
Technical Field

The present invention relates to moving image encoding methods for encoding an input image block by block using inter-image prediction with reference to encoded image(s), and to moving image decoding methods for decoding a bitstream block by block using inter-image prediction.

Background Art

In moving image encoding, an amount of information is generally compressed by exploiting redundancy in the spatial and temporal directions of moving images. Transformation to the frequency domain is typically used to exploit redundancy in the spatial direction, while inter-image prediction coding (hereinafter referred to as "inter prediction") is used to exploit redundancy in the temporal direction. In inter prediction coding, when a current image is to be encoded, an image encoded before or after the current image in display order is used as a reference image. Motion estimation is then performed on the current image with respect to the reference image to estimate a motion vector, and the difference between the prediction image data generated by motion compensation based on the estimated motion vector and the image data of the current image is obtained in order to remove the redundancy in the temporal direction. In the motion estimation, a difference value between the current block in the current image and each candidate block in the reference image is calculated, and the block in the reference image having the smallest difference is determined to be the reference block. A motion vector is then estimated using the current block and the reference block.

In the moving image encoding scheme known as H.264, which has already been standardized, three image types, namely I-picture, P-picture, and B-picture, are used to compress the amount of information. An I-picture is an image on which inter prediction coding is not performed; in other words, only intra-image prediction coding (hereinafter referred to as "intra prediction") is performed. A P-picture is an image on which inter prediction coding is performed with reference to one encoded image located before or after the current image in display order. A B-picture is an image on which inter prediction coding is performed with reference to two encoded images located before or after the current image in display order.

In inter prediction coding, a reference image list for specifying a reference image is generated. The reference image list is a list in which each encoded reference image to be referred to in inter prediction is assigned a corresponding value of a reference image index. For example, since a B-picture can be encoded with reference to two images, a B-picture has two reference image lists (L0, L1). FIG. 1A is a diagram for explaining the assignment of reference image indexes to each of the reference images. FIGS. 1B and 1C show an example of a pair of reference image lists for a B-picture. In FIG. 1A, for example, a reference image 2, a reference image 1, a reference image 0, and a current image to be encoded are arranged in display order. Under this assumption, the reference image list 0 (L0) is an example of a reference image list in the prediction direction 0 (the first prediction direction) for bi-directional prediction. As shown in FIG. 1B, a value "0" of the reference image index 0 is assigned to the reference image 0 arranged at display order 2, a value "1" is assigned to the reference image 1 arranged at display order 1, and a value "2" is assigned to the reference image 2 arranged at display order 0. In short, a smaller value of the reference image index is assigned to an image temporally closer to the current image in display order. The reference image list 1 (L1), on the other hand, is an example of a reference image list in the prediction direction 1 (the second prediction direction) for bi-directional prediction. In the reference image list 1 (L1), a value "0" of the reference image index 1 is assigned to the reference image 1 arranged at display order 1, a value "1" is assigned to the reference image 0 arranged at display order 2, and a value "2" is assigned to the reference image 2 arranged at display order 0. As described above, for each of the reference images it is possible to assign different reference image indexes in the respective prediction directions (the reference images 0 and 1 in FIG. 1A), or to assign the same reference image index in both prediction directions (the reference image 2 in FIG. 1A).
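As a concrete illustration of the list construction just described, the following is a minimal Python sketch (not part of the patent text) that encodes the two reference image lists of FIGS. 1B and 1C as lookup tables; the picture numbering is taken from the figure descriptions above, and the function name is an assumption for illustration.

```python
# Reference image lists as given in FIGS. 1B and 1C (reference image number per index).
ref_list = {
    0: [0, 1, 2],  # L0: index 0 -> image 0 (order 2), 1 -> image 1, 2 -> image 2
    1: [1, 0, 2],  # L1: index 0 -> image 1 (order 1), 1 -> image 0, 2 -> image 2
}

def image_for(ref_list_id: int, ref_idx: int) -> int:
    """Resolve a (list, index) pair to the reference image it identifies."""
    return ref_list[ref_list_id][ref_idx]

assert image_for(0, 0) == 0 and image_for(1, 0) == 1   # different images per direction
assert image_for(0, 2) == image_for(1, 2) == 2          # same image in both directions
```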
1B, a "0" value of a reference image index 0 is assigned to the reference image 0 arranged in the display order 2, a "1" value of the reference image index 0 is assigned to the image of reference 1 arranged in the display order 1, and a value "2" of the reference image index 0 is assigned to the reference image 2 arranged in the display order 0. In short, a higher value of the reference image index is assigned to an image temporarily closer to the current image in the display order. On the other hand, reference image list 1 (L1) is an example of a reference image list in forecast direction 1 (the second forecast direction) for bidirectional forecasting. In reference image list 1 (L1), a value "0" of a reference image index 1 is assigned to reference image 1 arranged in display order 1, a value "1" of the reference image index 1 is assigned to reference image 0 arranged in display order 2, and a value "2" of reference image index 1 is assigned to reference image 2 arranged in display order 0. As described above, for each one of the reference images, it is possible to assign different reference image indexes to the respective forecast directions (the reference images 0 and 1 in FIG. 1A), or to assign the same reference image index to both forecast directions (reference image 2 in FIG. 1A). In addition, in the scheme of the moving image encoding method known as H.264 (see Non-Patent Literature 1), as an interpretation coding mode for each current block in image B, there is a vector estimation mode of motion to encode (a) a difference value between the forecast image data and the image data of a current block and (b) a motion vector used in the generation of the forecast image data. In motion vector estimation mode, bidirectional forecasting or unidirectional forecasting is selected. In bidirectional forecasting, the forecast image is generated with reference to the two encoded images, located before or after the current image. On the other hand, in unidirectional forecasting, the forecast image is generated with reference to an encoded image located before or after the current image. In addition, in the moving image encoding scheme known as H.264, in the encoding of image B, when motion vectors are to be derived, it is possible to select an encoding mode called a motion vector mode. of time forecast. The method of coding interprevision in the vector mode of the motion of time prediction is described with reference to FIG. 2. FIG. 2 is an explanatory diagram showing motion vectors in the vector mode of the weather forecast motion and shows the situation where a block "a" in a B2 image is encoded in the vector mode of the weather forecast motion. In this situation, a vb motion vector is used. The motion vector vb was used to encode a "b" block in a P3 image that is a localized reference image subsequent to image B2. Block "b" (later referred to as a "colocalized block") is located, in image P3, in a position that corresponds to the position of block "a". The motion vector vb is a motion vector that was used to code the block "b", and refers to a P1 image. Using a motion vector in parallel to the motion vector vb, the "a" block obtains reference blocks from the P1 image which is a forwarded reference image and from the P3 image which is an inverse reference image. In this way, bidirectional prediction is performed to code the "a" block. 
Citation List
Non-Patent Literature
NPL 1: ITU-T Recommendation H.264, "Advanced video coding for generic audiovisual services", March 2010.

Summary of the Invention
Technical Problem

Conventionally, however, there are situations where, in encoding a current block, the selection of bi-directional prediction or uni-directional prediction causes a reduction in coding efficiency. An exemplary and non-limiting embodiment of the present disclosure provides a moving image encoding method and a moving image decoding method that are capable of improving coding efficiency.

Solution to Problem

In general, the techniques described here feature a moving image encoding method for encoding a current block by copying at least one reference image index value and at least one motion vector, the at least one reference image index value being used to identify a reference image that was used in encoding a block different from the current block, the moving image encoding method including: determining a plurality of first candidate blocks from which the at least one reference image index value and the at least one motion vector are to be copied; generating a second candidate block using bi-directional prediction, the second candidate block being generated by combining reference image index values and motion vectors that were used for at least part of the first candidate blocks; selecting, from among the first candidate blocks and the second candidate block, a block from which the at least one reference image index value and the at least one motion vector are to be copied for encoding the current block; and copying the at least one reference image index value and the at least one motion vector from the selected block, and encoding the current block using the copied reference image index value(s) and motion vector(s). In this way, it is possible to encode the current block using the motion vector(s) and reference image(s) that are most appropriate for the current block. As a result, coding efficiency can be improved.

It is also possible that the generating of the second candidate block includes: determining whether or not each of the first candidate blocks has one or more reference image index values and one or more motion vectors; and generating the second candidate block when at least one of the first candidate blocks has no reference image index value and no motion vector. It is also possible that the moving image encoding method further includes: determining whether or not the current block is to be encoded using the at least one reference image index value and the at least one motion vector copied from one of the first candidate blocks or from the second candidate block; setting a flag indicating a result of the determination; and adding the flag to a bitstream including the current block.
It is also possible that the moving image encoding method further includes: determining a block index value corresponding to the selected block from which the at least one reference image index value and the at least one motion vector are to be copied for encoding the current block, from a candidate list in which the first candidate blocks and the second candidate block are each assigned a respective block index value; and adding the determined block index value to a bitstream including the current block.

It is also possible that the generating of the second candidate block includes: determining whether or not two of the first candidate blocks have reference image index values indicating different prediction directions, or were encoded using bi-directional prediction; and generating the second candidate block when the two of the first candidate blocks have different prediction directions or were encoded using bi-directional prediction.

It is also possible that the generating of the second candidate block further includes: determining whether or not one of the two of the first candidate blocks was predicted in a first prediction direction or encoded using bi-directional prediction, and whether or not the other of the two of the first candidate blocks was predicted in a second prediction direction or encoded using bi-directional prediction; and, when it is determined that the one of the two of the first candidate blocks was predicted in the first prediction direction or encoded using bi-directional prediction and that the other of the two of the first candidate blocks was predicted in the second prediction direction or encoded using bi-directional prediction, generating the second candidate block by (i) selecting a reference image index value and a motion vector that were used in the first prediction direction for the one of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the first prediction direction for the second candidate block, and (ii) selecting a reference image index value and a motion vector that were used in the second prediction direction for the other of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the second prediction direction for the second candidate block.
It is also possible that the generating of the second candidate block further includes: determining whether or not one of the two of the first candidate blocks was predicted in a first prediction direction or encoded using bi-directional prediction, and whether or not the other of the two of the first candidate blocks was predicted in a second prediction direction or encoded using bi-directional prediction; and, when it is not determined that the one of the two of the first candidate blocks was predicted in the first prediction direction or encoded using bi-directional prediction and that the other of the two of the first candidate blocks was predicted in the second prediction direction or encoded using bi-directional prediction, generating the second candidate block by (i) selecting a reference image index value and a motion vector that were used in the first prediction direction for the other of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the first prediction direction for the second candidate block, and (ii) selecting a reference image index value and a motion vector that were used in the second prediction direction for the one of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the second prediction direction for the second candidate block.

In another aspect, the techniques described here feature a moving image decoding method for decoding a current block by copying at least one reference image index value and at least one motion vector, the at least one reference image index value being used to identify a reference image that was used in decoding a block different from the current block, the moving image decoding method including: determining a plurality of first candidate blocks from which the at least one reference image index value and the at least one motion vector are to be copied; generating a second candidate block using bi-directional prediction, the second candidate block being generated by combining reference image index values and motion vectors that were used for at least part of the first candidate blocks; selecting, from among the first candidate blocks and the second candidate block, a block from which the at least one reference image index value and the at least one motion vector are to be copied for decoding the current block; and copying the at least one reference image index value and the at least one motion vector from the selected block, and decoding the current block using the copied reference image index value(s) and motion vector(s). In this way, it is possible to decode an encoded bitstream using the most appropriate motion vector(s) and reference image(s).

It is also possible that the generating of the second candidate block includes: determining whether or not each of the first candidate blocks has a reference image index value and a motion vector; and generating the second candidate block when at least one of the first candidate blocks has no reference image index value and no motion vector.
It is also possible that the moving image decoding method further includes: obtaining, from a bitstream including the current block, a flag indicating whether or not the current block is to be decoded using the at least one reference image index value and the at least one motion vector copied from one of the first candidate blocks or from the second candidate block; and decoding the current block according to the flag.

It is also possible that the moving image decoding method further includes: obtaining a block index value from a bitstream including the current block; and selecting, using the obtained block index value, a block from which the at least one reference image index value and the at least one motion vector are to be copied for decoding the current block, from a candidate list in which the first candidate blocks and the second candidate block are each assigned a respective block index value.

It is also possible that the generating of the second candidate block includes: determining whether or not two of the first candidate blocks have reference image index values indicating different prediction directions, or were encoded using bi-directional prediction; and generating the second candidate block when the two of the first candidate blocks have different prediction directions or were encoded using bi-directional prediction.

It is also possible that the generating of the second candidate block further includes: determining whether or not one of the two of the first candidate blocks was predicted in a first prediction direction or encoded using bi-directional prediction, and whether or not the other of the two of the first candidate blocks was predicted in a second prediction direction or encoded using bi-directional prediction; and, when it is determined that the one of the two of the first candidate blocks was predicted in the first prediction direction or encoded using bi-directional prediction and that the other of the two of the first candidate blocks was predicted in the second prediction direction or encoded using bi-directional prediction, generating the second candidate block by (i) selecting a reference image index value and a motion vector that were used in the first prediction direction for the one of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the first prediction direction for the second candidate block, and (ii) selecting a reference image index value and a motion vector that were used in the second prediction direction for the other of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the second prediction direction for the second candidate block.
It is also possible that the generating of the second candidate block further includes: determining whether or not one of the two of the first candidate blocks was predicted in a first prediction direction or encoded using bi-directional prediction, and whether or not the other of the two of the first candidate blocks was predicted in a second prediction direction or encoded using bi-directional prediction; and, when it is not determined that the one of the two of the first candidate blocks was predicted in the first prediction direction or encoded using bi-directional prediction and that the other of the two of the first candidate blocks was predicted in the second prediction direction or encoded using bi-directional prediction, generating the second candidate block by (i) selecting a reference image index value and a motion vector that were used in the first prediction direction for the other of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the first prediction direction for the second candidate block, and (ii) selecting a reference image index value and a motion vector that were used in the second prediction direction for the one of the two of the first candidate blocks, as the reference image index value and the motion vector to be used in the second prediction direction for the second candidate block.

It should be noted that the present invention can be implemented not only as the moving image encoding method and the moving image decoding method described above, but also as: a moving image encoding apparatus, a moving image decoding apparatus, and a moving image encoding and decoding apparatus, which include processing units that perform the characteristic steps included in the moving image encoding method and the moving image decoding method; a program that causes a computer to execute the steps; and the like. The present invention can also be implemented as: a computer-readable recording medium, such as a Compact Disc Read-Only Memory (CD-ROM), on which the above program is recorded; information, data, or signals representing the program; and the like. The program, information, data, or signals may be distributed via a transmission medium such as the Internet.

Advantageous Effects of the Invention

In accordance with the present invention, a new bi-directionally predicted merging block candidate is calculated from the merging block candidates, thereby improving coding efficiency.

Brief Description of the Drawings

FIG. 1A is a diagram for explaining the assignment of reference image indexes to each of the reference images. FIG. 1B is a table showing an example of one of the reference image lists for a B-picture. FIG. 1C is a table showing an example of the other reference image list for a B-picture. FIG. 2 is an explanatory diagram showing motion vectors in the temporal motion vector prediction mode. FIG. 3A is a diagram showing the relationship between a current block to be encoded, its neighboring blocks, and the motion vectors of the neighboring blocks. FIG. 3B is a table showing an example of a merging block candidate list in which each value of a merging block index is assigned a motion vector and a reference image index to be used in merge mode. FIG. 4 is a block diagram showing the structure of a moving image encoding apparatus using a moving image encoding method according to an embodiment of the present disclosure.
FIG. 5 is a flowchart summarizing the processing flow of the moving image encoding method according to the embodiment of the present disclosure. FIG. 6 is a table showing an example of a merging block candidate list in which each value of a merging block index is assigned a motion vector and a reference image index to be used in merge mode according to Embodiment 1. FIG. 7 is an example of a coding table used to perform variable-length coding on the merging block index. FIG. 8 is a flowchart of a detailed processing flow for calculating a combined merging block. FIG. 9 is a flowchart of a detailed processing flow for comparing prediction errors. FIG. 10 is a block diagram showing the structure of a moving image decoding apparatus using a moving image decoding method according to an embodiment of the present disclosure. FIG. 11 is a flowchart summarizing the processing flow of a moving image decoding method according to an embodiment of the present disclosure. FIG. 12 shows an overall configuration of a content providing system for implementing content distribution services. FIG. 13 shows an overall configuration of a digital broadcasting system. FIG. 14 is a block diagram illustrating an example of the configuration of a television. FIG. 15 is a block diagram illustrating an example of the configuration of an information reproducing/recording unit that reads and writes information from and on a recording medium that is an optical disc. FIG. 16 shows an example of the configuration of a recording medium that is an optical disc. FIG. 17A shows an example of a cellular phone. FIG. 17B is a block diagram showing an example of the configuration of a cellular phone. FIG. 18 illustrates a structure of multiplexed data. FIG. 19 schematically shows how each stream is multiplexed into multiplexed data. FIG. 20 shows in more detail how a video stream is stored in a stream of PES packets. FIG. 21 shows the structure of TS packets and source packets in the multiplexed data. FIG. 22 shows a data structure of a PMT. FIG. 23 shows an internal structure of multiplexed data information. FIG. 24 shows an internal structure of stream attribute information. FIG. 25 shows steps for identifying video data. FIG. 26 shows an example of the configuration of an integrated circuit for implementing the moving image encoding method and the moving image decoding method according to each of the embodiments. FIG. 27 shows a configuration for switching between driving frequencies. FIG. 28 shows steps for identifying video data and switching between driving frequencies. FIG. 29 shows an example of a look-up table in which video data standards are associated with driving frequencies. FIG. 30A is a diagram showing an example of a configuration for sharing a module of a signal processing unit. FIG. 30B is a diagram showing another example of a configuration for sharing a module of the signal processing unit.

Description of Embodiments

In moving image encoding schemes, a coding mode called merge mode has been examined as an inter prediction mode for each block to be encoded in a B-picture or a P-picture. In this merge mode, a motion vector and a value of a reference image index (hereinafter referred to as "reference image index values") are copied from a neighboring block of a current block to be encoded, in order to encode the current block. In this case, an index value identifying the neighboring block from which they are copied, and the like, are added to a bitstream.
As a result, the motion vector and the reference image index value that were used in the encoding can be selected at decoding. A detailed example is described with reference to the corresponding figures. FIG. 3A is a diagram showing the relationship between a current block to be encoded, its neighboring blocks, and the motion vectors of the neighboring blocks. FIG. 3B is a table showing an example of a merging block candidate list in which each value of a merging block index is assigned a motion vector and a reference image index to be used in merge mode. In FIG. 3A, an encoded block immediately to the left of the current block is referred to as a neighboring block A, an encoded block immediately above the current block is referred to as a neighboring block B, an encoded block immediately above and to the right of the current block is referred to as a neighboring block C, and an encoded block immediately below and to the left of the current block is referred to as a neighboring block D.

Furthermore, in FIG. 3A, the neighboring block A was encoded by uni-directional prediction using the prediction direction 0 (the first prediction direction). The neighboring block A has a motion vector MvL0_A in the prediction direction 0 for a reference image indicated by an index value RefL0_A of the reference image index of the prediction direction 0. Here, a motion vector MvL0 is a motion vector referring to a reference image specified by the reference image list 0 (L0), and MvL1 is a motion vector referring to a reference image specified by the reference image list 1 (L1). The neighboring block B was encoded by uni-directional prediction using the prediction direction 1 (the second prediction direction). The neighboring block B has a motion vector MvL1_B in the prediction direction 1 for a reference image indicated by an index value RefL1_B of the reference image index of the prediction direction 1. The neighboring block C was encoded by intra prediction. The neighboring block D was encoded by uni-directional prediction using the prediction direction 0. The neighboring block D has a motion vector MvL0_D in the prediction direction 0 for a reference image indicated by an index value RefL0_D of the reference image index of the prediction direction 0.

In the situation shown in FIG. 3A, as the motion vector and the reference image index value for the current block, a motion vector and a reference image index value offering the highest coding efficiency are selected from, for example, (a) the motion vectors and reference image index values of the neighboring blocks A, B, C, and D, and (b) a motion vector and a reference image index value of the co-located block obtained in the temporal motion vector prediction mode. Then, a merging block index indicating the selected neighboring block or the co-located block is added to the bitstream. For example, if the neighboring block A is selected, the current block is encoded using the motion vector MvL0_A and the reference image index value RefL0_A of the prediction direction 0, and only a value "0" of the merging block index indicating that the neighboring block A is used, as shown in FIG. 3B, is added to the bitstream, so that the amount of information on motion vectors and reference image index values can be reduced.
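The merging block candidate list of FIG. 3B can be pictured as a simple table mapping each merging block index to the prediction parameters to be copied. The Python sketch below is illustrative only; the record layout, the field names, and the example motion vector values are assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MergeCandidate:
    """Prediction parameters a current block may copy in merge mode."""
    mv_l0: Optional[Tuple[int, int]] = None   # motion vector, prediction direction 0
    ref_l0: Optional[int] = None              # reference image index, direction 0
    mv_l1: Optional[Tuple[int, int]] = None   # motion vector, prediction direction 1
    ref_l1: Optional[int] = None              # reference image index, direction 1

# Candidate list mirroring FIG. 3B (motion vector values invented for illustration):
candidates = [
    MergeCandidate(mv_l0=(3, 1), ref_l0=0),            # index 0: neighboring block A
    MergeCandidate(mv_l1=(-2, 0), ref_l1=0),           # index 1: neighboring block B
    MergeCandidate(mv_l0=(4, 2), ref_l0=0,
                   mv_l1=(-4, -2), ref_l1=0),          # index 2: co-located merging block
    None,                                              # index 3: block C (intra, unavailable)
    MergeCandidate(mv_l0=(1, 1), ref_l0=0),            # index 4: neighboring block D
]
# Encoding with candidate A writes only merging block index 0 to the bitstream.
```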
However, in the merge mode described above, if a block that would be a merging block candidate has no motion vector and no reference image index value because it was encoded by intra prediction (like the neighboring block C), the block cannot be used as a merging block candidate. In such a situation the number of available merging block candidates is reduced, the selection range for a motion vector and a reference image index value offering the highest coding efficiency is narrowed, and coding efficiency may eventually decrease. In order to solve this problem, an exemplary and non-limiting embodiment provides an image encoding method and an image decoding method that are capable of improving coding efficiency without decreasing the number of merging block candidates available in merge mode.

In the following, embodiments according to the present invention are described with reference to the drawings. It should be noted that all of the embodiments described below are specific examples of the present disclosure. Numerical values, shapes, materials, constituent elements, arrangement positions and connection configurations of the constituent elements, steps, the order of steps, and the like described in the embodiments below are merely exemplary and are not intended to limit the present disclosure. The present disclosure is characterized only by the appended claims. Therefore, among the constituent elements in the embodiments below, those not described in the independent claims, which show the most generic concept of the present disclosure, are described as elements constituting more desirable configurations, although such constituent elements are not necessarily required to achieve the aim of the present disclosure.

Embodiment 1

FIG. 4 is a block diagram showing the structure of a moving image encoding apparatus using a moving image encoding method according to Embodiment 1. As shown in FIG. 4, the moving image encoding apparatus 100 includes an orthogonal transformation unit 101, a quantization unit 102, an inverse quantization unit 103, an inverse orthogonal transformation unit 104, a block memory 105, a frame memory 106, an intra prediction unit 107, an inter prediction unit 108, an inter prediction control unit 109, a picture type determination unit 110, a merging block candidate calculation unit 111, a colPic memory 112, a variable-length coding unit 113, a subtractor 114, an adder 115, and a switching unit 116.

The orthogonal transformation unit 101 transforms the prediction error data, which is the difference between prediction data generated as described below and an input image sequence, from the image domain to the frequency domain. The quantization unit 102 quantizes the prediction error data that has been transformed into the frequency domain. The inverse quantization unit 103 inversely quantizes the prediction error data that was quantized by the quantization unit 102. The inverse orthogonal transformation unit 104 transforms the inversely quantized prediction error data from the frequency domain back to the image domain. The adder 115 adds the prediction data to the inversely quantized prediction error data to generate a decoded image. The block memory 105 holds the decoded image block by block. The frame memory 106 holds the decoded image frame by frame.
The picture type determination unit 110 determines with which picture type, among I-picture, B-picture, and P-picture, each image in the input image sequence is to be encoded, and generates picture type information. The intra prediction unit 107 encodes a current block by intra prediction, using the decoded image stored block by block in the block memory 105, to generate a prediction image. The inter prediction unit 108 encodes the current block by inter prediction, using the decoded image stored frame by frame in the frame memory 106 and a motion vector derived in motion estimation, to generate a prediction image. The subtractor 114 subtracts the prediction data generated by the intra prediction unit 107 or the inter prediction unit 108 from the input image sequence, to calculate the prediction error data.

The merging block candidate calculation unit 111 specifies merging block candidates (the first candidate blocks) for merge mode, using (a) the motion vectors and reference image index values that were used to encode the neighboring blocks and (b) colPic information, such as a motion vector and the like of the co-located block of the current block, which is stored in the colPic memory 112. Here, the merging block candidates are candidates for a block from which at least one motion vector and at least one reference image index value are directly used (copied) for the current block. In addition, the merging block candidate calculation unit 111 generates a combined merging block (the second candidate block) using the method described below. It should be noted that the combined merging block is not a block that actually has pixel values, but a virtual block that has motion vectors and reference image index values. Furthermore, the merging block candidate calculation unit 111 assigns each of the specified merging blocks a corresponding value of the merging block index (block index). Then, the merging block candidate calculation unit 111 provides the merging block candidates and the merging block index values to the inter prediction control unit 109. It should be noted that, in the present Embodiment 1, the motion vectors and reference image index values used for the neighboring blocks of the current image are assumed to be stored in the merging block candidate calculation unit 111.

The inter prediction control unit 109 performs inter prediction coding in the prediction mode having the smaller prediction error between (a) the prediction mode for an inter prediction image generated using a motion vector derived in the motion estimation mode and (b) the prediction mode for an inter prediction image generated using a motion vector derived in merge mode. In addition, the inter prediction control unit 109 provides the variable-length coding unit 113 with (a) a merge flag indicating whether or not the prediction mode is merge mode, (b) a merging block index value corresponding to the determined merging block if merge mode is selected as the prediction mode, and (c) the prediction error information. Furthermore, the inter prediction control unit 109 transfers colPic information, including the motion vector and the like of the current block, to the colPic memory 112.
The variable-length coding unit 113 performs variable-length coding on the quantized prediction error data, the merge flag, the merging block index value, and the picture type information, to generate a bitstream.

FIG. 5 is a flowchart summarizing the processing flow of the moving image encoding method according to the present embodiment. The merging block candidate calculation unit 111 specifies merging block candidates from the neighboring blocks and a co-located block of a current block to be encoded (Step S11). For example, in the situation shown in FIG. 3A, the merging block candidate calculation unit 111 specifies the neighboring blocks A, B, C, D, and a co-located merging block as the merging block candidates. Here, the co-located merging block includes at least one motion vector and the like calculated in the temporal prediction mode from at least one motion vector of the co-located block. Then, the merging block candidate calculation unit 111 assigns each merging block candidate a corresponding value of the merging block index, as shown in FIG. 3B. In general, a smaller merging block index value requires a smaller amount of information, while a larger merging block index value requires a larger amount of information. Therefore, coding efficiency is increased if a smaller merging block index value is assigned to a merging block candidate having a high chance of having a more accurate motion vector and a more accurate reference image index value. For example, the number of times each merging block candidate has been selected as a merging block may be counted, and a smaller merging block index value may be assigned to a block having a larger count.

Here, if a target merging block candidate does not hold information such as a motion vector, for example if it is a block encoded by intra prediction, or if it is located outside an image boundary or a slice boundary, such a block cannot be used as a merging block candidate. In the present embodiment, a block that cannot be used as a merging block candidate is referred to as an unavailable block, and a block that can be used as a merging block candidate is referred to as an available block. In the situation shown in FIG. 3A, since the neighboring block C is a block encoded by intra prediction, the neighboring block C is considered to be an unavailable block that cannot be used as a merging block candidate.

Using the merging block candidates specified in Step S11, the merging block candidate calculation unit 111 generates a combined merging block by the method described later, to update the merging block candidate list (Step S12). For example, the merging block candidate list shown in FIG. 6 is generated from the merging block candidate list shown in FIG. 3B. In the merging block candidate list of FIG. 3B, the combined merging block generated by the method described below is used instead of the unavailable candidate having a merging block index value of "3". By using such a newly generated combined merging block instead of the unavailable candidate, it is possible to improve coding efficiency without changing the maximum number of merging block candidates.
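Continuing the illustrative sketch introduced with FIG. 3B above, Step S12 can be pictured as follows: the unavailable entry is replaced by a combined merging block built from the other candidates' existing parameters (here, block A's direction-0 parameters combined with block B's direction-1 parameters, as in FIG. 6). The helper name `make_combined` is an assumption; the full generation procedure is described with FIG. 8.

```python
def make_combined(cand_a: MergeCandidate, cand_b: MergeCandidate) -> MergeCandidate:
    """Combine one candidate's direction-0 parameters with another candidate's
    direction-1 parameters into a bi-directionally predicted (virtual) merging block."""
    return MergeCandidate(mv_l0=cand_a.mv_l0, ref_l0=cand_a.ref_l0,
                          mv_l1=cand_b.mv_l1, ref_l1=cand_b.ref_l1)

# Step S12: replace the unavailable candidate (index 3, intra-coded block C)
# with a combined merging block, as in the updated list of FIG. 6.
candidates[3] = make_combined(candidates[0], candidates[1])  # A's L0 + B's L1
```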
Then, the inter prediction control unit 109 compares (a) the prediction error of an inter prediction image generated using a motion vector derived by motion estimation with (b) the prediction error of a prediction image generated using a merging block candidate, by the method described later, to determine the prediction mode for encoding the current block. If the prediction mode is determined to be merge mode, the inter prediction control unit 109 further determines the merging block index value indicating which merging block candidate is to be used. Then, if the prediction mode is merge mode, the inter prediction control unit 109 sets the merge flag to 1; otherwise, it sets the merge flag to 0 (Step S13). The inter prediction control unit 109 determines whether or not the merge flag is 1, in other words, whether or not the prediction mode is merge mode (Step S14). If the prediction mode is merge mode (Yes in Step S14), the inter prediction control unit 109 provides the variable-length coding unit 113 with the merge flag and the merging block index value to be used in merging, to add the merge flag and the index value to a bitstream (Step S15). On the other hand, if the prediction mode is not merge mode (No in Step S14), the inter prediction control unit 109 provides the variable-length coding unit 113 with the merge flag and information on the motion vector estimation mode, to add the merge flag and the information to the bitstream (Step S16).

It should be noted that, in the present embodiment, as shown in FIG. 3B, regarding the merging block index values, a value corresponding to the neighboring block A is "0", a value corresponding to the neighboring block B is "1", a value corresponding to the co-located merging block is "2", a value corresponding to the neighboring block C is "3", and a value corresponding to the neighboring block D is "4". However, the way of assigning merging block index values is not limited to this example. For example, the largest value may be assigned to a candidate that is unavailable as a merging block candidate. It should also be noted that the merging block candidates are not limited to the neighboring blocks A, B, C, and D. For example, a neighboring block located above the immediately lower-left block D, or the like, may be selected as a merging block candidate. It is also not necessary to use all the neighboring blocks; for example, only the neighboring blocks A and B may be used as merging block candidates. Nor is it always necessary to use the co-located merging block.

It should also be noted that, although it is described at Step S15 of FIG. 5 in the present embodiment that the inter prediction control unit 109 provides a merging block index value to the variable-length coding unit 113 to add the merging block index value to the bitstream, the merging block index value need not be added if the number of merging block candidates is 1. In this way, the amount of merging block index information can be reduced. It should also be noted that it is described at Step S12 of FIG. 5 in the present embodiment that a combined merging block is used instead of the unavailable candidate having the merging block index value of "3". However, the present invention is not limited to this, and the combined merging block may instead be further added to the merging block candidate list. In this way, the selection range of merging block candidates can be increased. In this case, the unavailable candidate may also be treated as a candidate having a motion vector of 0 and a reference image index of 0.
FIG. 7 shows an example of a coding table used to perform variable-length coding on the merging block index values. In the example shown in FIG. 7, a code having a shorter length is assigned to a smaller merging block index value. Therefore, coding efficiency can be improved if a smaller merging block index value is assigned to a merging block candidate having a high prediction accuracy. It should be noted that, although variable-length coding is performed on the merging block index values in the present embodiment as shown in FIG. 7, the merging block index values may instead be coded with a fixed code length. In this way, the burden of the coding or decoding processing can be reduced.
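As an illustration of such a coding table, the following Python sketch uses truncated-unary codewords; the specific bit strings are assumptions for illustration, since the text above describes FIG. 7 only as assigning shorter codes to smaller index values.

```python
# Sketch of a variable-length coding table for merging block index values.
# Assumed truncated-unary codewords: shorter codes for smaller indexes.
MERGE_INDEX_CODE = {0: "0", 1: "10", 2: "110", 3: "1110", 4: "1111"}

def encode_merge_index(idx: int) -> str:
    return MERGE_INDEX_CODE[idx]

def decode_merge_index(bits: str) -> int:
    """Read one codeword from the front of the bit string (prefix-free table)."""
    for idx, code in MERGE_INDEX_CODE.items():
        if bits.startswith(code):
            return idx
    raise ValueError("no matching codeword")

assert decode_merge_index(encode_merge_index(3) + "01") == 3
```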
FIG. 8 is a flowchart of the detailed flow of Step S12 in FIG. 5. The method for generating a combined merging block from the merging block candidates specified in Step S11 is described below with reference to FIG. 8. The merging block candidate calculation unit 111 initializes an index value 1 (idx1) to "0" (Step S21). Then, the merging block candidate calculation unit 111 initializes an index value 2 (idx2) to "0" (Step S22). The merging block candidate calculation unit 111 determines whether or not idx1 and idx2 have different values and whether the merging block candidate list includes any unavailable candidate (Step S23). If there is an unavailable candidate (Yes in Step S23), the merging block candidate calculation unit 111 determines whether the merging block candidate [idx1], assigned the merging block index value idx1, is available and whether the merging block candidate [idx2], assigned the merging block index value idx2, is available (Step S24). If the merging block candidate [idx1] is available and the merging block candidate [idx2] is also available (Yes in Step S24), the merging block candidate calculation unit 111 determines whether or not the merging block candidate [idx1] and the merging block candidate [idx2] were predicted in different prediction directions, or whether both the merging block candidate [idx1] and the merging block candidate [idx2] were encoded using bi-directional prediction (Step S25). If the merging block candidate [idx1] and the merging block candidate [idx2] were predicted in different prediction directions, or if both the merging block candidate [idx1] and the merging block candidate [idx2] were encoded using bi-directional prediction (Yes in Step S25), the merging block candidate calculation unit 111 determines whether or not the merging block candidate [idx1] was predicted in the prediction direction 0 (the first prediction direction) or encoded using bi-directional prediction, and whether the merging block candidate [idx2] was predicted in the prediction direction 1 (the second prediction direction) or encoded using bi-directional prediction (Step S26). If the merging block candidate [idx1] was predicted in the prediction direction 0 or encoded using bi-directional prediction, and the merging block candidate [idx2] was predicted in the prediction direction 1 or encoded using bi-directional prediction (Yes in Step S26), in other words, if the merging block candidate [idx1] has at least a motion vector of the prediction direction 0 and the merging block candidate [idx2] has at least a motion vector of the prediction direction 1, the merging block candidate calculation unit 111 selects the motion vector and the reference image index value of the prediction direction 0 of the merging block candidate [idx1] for the prediction direction 0 of the combined merging block (Step S27). In addition, the merging block candidate calculation unit 111 selects the motion vector and the reference image index value of the prediction direction 1 of the merging block candidate [idx2] for the prediction direction 1 of the combined merging block, to generate the bi-directionally predicted combined merging block (Step S28). On the other hand, if it is not determined that the merging block candidate [idx1] was predicted in the prediction direction 0 or encoded using bi-directional prediction and that the merging block candidate [idx2] was predicted in the prediction direction 1 or encoded using bi-directional prediction (No in Step S26), the merging block candidate calculation unit 111 selects the motion vector and the reference image index value of the prediction direction 0 of the merging block candidate [idx2] for the prediction direction 0 of the combined merging block (Step S29). In addition, the merging block candidate calculation unit 111 selects the motion vector and the reference image index value of the prediction direction 1 of the merging block candidate [idx1] for the prediction direction 1 of the combined merging block, to generate the bi-directionally predicted combined merging block (Step S30). The merging block candidate calculation unit 111 adds the generated combined merging block to the merging block candidate list as an available candidate, in place of the unavailable candidate (Step S31). Then, the merging block candidate calculation unit 111 adds a value of "1" to idx2 (Step S32), and determines whether or not idx2 is equal to or greater than the maximum number of merging block candidates (Step S33). If idx2 is not equal to or greater than the maximum number of merging block candidates (No in Step S33), the processing returns to Step S23; the merging block candidate calculation unit 111 then checks again whether any unavailable candidate remains, and generates the next combined merging block (Steps S23 to S32). On the other hand, if idx2 is equal to or greater than the maximum number of merging block candidates (Yes in Step S33), the merging block candidate calculation unit 111 adds a value of "1" to idx1 (Step S34) and determines whether or not idx1 is equal to or greater than the maximum number of merging block candidates (Step S35). If idx1 is equal to or greater than the maximum number of merging block candidates (Yes in Step S35), in other words, if every combination of merging block candidates has been examined, the processing is finished.
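The double loop of FIG. 8 can be sketched as follows, continuing the illustrative `MergeCandidate` structure introduced earlier. This is a simplified reading of Steps S21 to S35 under the assumption that availability is modeled by `None` entries; it is not a normative implementation.

```python
def update_candidate_list(cands):
    """Steps S21-S35: fill unavailable entries with combined merging blocks."""
    n = len(cands)
    for idx1 in range(n):                                  # Steps S21, S34, S35
        for idx2 in range(n):                              # Steps S22, S32, S33
            # Step S23: indexes must differ and an unavailable entry must remain.
            if idx1 == idx2 or None not in cands:
                continue
            c1, c2 = cands[idx1], cands[idx2]
            if c1 is None or c2 is None:                   # Step S24
                continue
            # Step S25: different prediction directions, or both bi-directional.
            dir1 = (c1.mv_l0 is not None, c1.mv_l1 is not None)
            dir2 = (c2.mv_l0 is not None, c2.mv_l1 is not None)
            if dir1 != dir2 or (all(dir1) and all(dir2)):
                # Step S26: decide which candidate supplies each direction.
                if c1.mv_l0 is not None and c2.mv_l1 is not None:
                    combined = MergeCandidate(mv_l0=c1.mv_l0, ref_l0=c1.ref_l0,
                                              mv_l1=c2.mv_l1, ref_l1=c2.ref_l1)  # S27, S28
                else:
                    combined = MergeCandidate(mv_l0=c2.mv_l0, ref_l0=c2.ref_l0,
                                              mv_l1=c1.mv_l1, ref_l1=c1.ref_l1)  # S29, S30
                cands[cands.index(None)] = combined        # Step S31
    return cands
```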
It should be noted that, although it is described in the present embodiment that the processing is finished after every combination of the merging block candidates has been examined, the present invention is not limited to this. For example, the processing may be finished when no unavailable candidate remains in the merging block candidate list. As a result, the amount of processing can be reduced. It should also be noted that, although the steps of the method for generating a combined merging block from the merging block candidates are described in the present embodiment in the order shown in the flowchart of FIG. 8, the present invention is not limited to this, and the order of the steps may be changed.

It should also be noted that, in the present embodiment, when, for example, a motion vector and a reference image index value of the prediction direction 0 referring to a neighboring block are selected for the prediction direction 0 of the combined merging block, and there is a plurality of merging block candidates having a motion vector and a reference image index value of the prediction direction 0, the motion vector and the reference image index value of the prediction direction 0 referring to the merging block candidate whose merging block index value is closest to "0" are selected. However, the present invention is not limited to this. For example, a motion vector and a reference image index value of the prediction direction 0 referring to a merging block candidate whose merging block index value is closest to the maximum value may instead be selected.

It should also be noted that, although it is described at Step S31 of FIG. 8 in the present embodiment that the generated combined merging block is added to the merging block candidate list as an available candidate in place of the unavailable candidate, the present invention is not limited to this. For example, it may first be checked whether or not another merging block candidate having the same motion vector and the same reference image index value as those of the combined merging block is already included in the merging block candidate list, and, if no such candidate exists in the list, the combined merging block may be added to the merging block candidate list as an available candidate in place of the unavailable candidate. In this way, by preventing the same merging block candidate from being added again, only effective merging block candidates are added. As a result, coding efficiency can be improved.

It should also be noted that, although it is described in the present embodiment that the generated combined merging block is added to the merging block candidate list when there is an unavailable candidate in the merging block candidate list, the present invention is not limited to this. For example, at Step S23 of FIG. 8, the determination of whether or not an unavailable candidate exists in the merging block candidate list may be omitted, and the combined merging block may be calculated and newly added to the merging block candidate list. In this way, the selection range of merging block candidates can be increased. As a result, coding efficiency can be improved.
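The duplicate check mentioned above can be sketched in a few lines, again reusing the illustrative `MergeCandidate` records; the equality test below compares all four copied parameters, which is an assumption about what "the same motion vector and the same reference image index value" means across both prediction directions.

```python
def add_if_new(cands, combined):
    """Add the combined merging block only if no existing available candidate
    already carries the same motion vectors and reference image index values."""
    for c in cands:
        if c is not None and c == combined:   # dataclass equality: all fields match
            return cands                      # duplicate: leave the list unchanged
    if None in cands:
        cands[cands.index(None)] = combined   # replace an unavailable entry
    return cands
```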
FIG. 9 is a flowchart of the detailed flow of Step S13 in FIG. 5, and is described below. The inter prediction control unit 109 sets a merging block candidate index to "0", sets the minimum prediction error to the prediction error (cost) of the motion vector estimation mode, and sets the merge flag to "0" (Step S41). Here, the cost is calculated, for example, by the following Equation 1 of the R-D optimization model.

Cost = D + λ × R (Equation 1)

In Equation 1, D represents a coding distortion, for example the sum of absolute differences between (a) the pixel values obtained by encoding and decoding the current block using a prediction image generated with a certain motion vector and (b) the original pixel values of the current block. R represents a coding amount, for example the amount of code required to encode the motion vector used in generating the prediction image. λ represents the Lagrange undetermined multiplier.

Then, the inter prediction control unit 109 determines whether or not the merging block candidate index is smaller than the number of merging block candidates of the current block, in other words, whether or not there remains any block that can be a merging candidate (Step S42). If the merging block candidate index is determined to be smaller than the number of merging block candidates of the current block (Yes in Step S42), the inter prediction control unit 109 calculates the cost of the merging block candidate assigned that merging block candidate index (Step S43). Then, the inter prediction control unit 109 determines whether or not the calculated cost of the merging block candidate is smaller than the minimum prediction error (Step S44). If the calculated cost of the merging block candidate is smaller than the minimum prediction error (Yes in Step S44), the inter prediction control unit 109 updates the minimum prediction error, the merging block index value, and the merge flag value (Step S45). Then, the inter prediction control unit 109 adds a value of "1" to the merging block candidate index (Step S46), and the processing from Step S42 to Step S46 is repeated. If the calculated cost of the merging block candidate is not smaller than the minimum prediction error (No in Step S44), the updating of Step S45 is not performed; Step S46 is performed, and the processing from Step S42 to Step S46 is repeated. If, in Step S42, the merging block candidate index is not smaller than the number of merging block candidates (No in Step S42), in other words, if no merging block candidate remains, the inter prediction control unit 109 finally determines the merge flag and the merging block index value to be used (Step S47).
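A minimal sketch of this selection loop follows (Steps S41 to S47). The cost function is left abstract: `rd_cost` is an assumed callable returning D + λ·R for a given candidate, since the text above specifies only the form of Equation 1.

```python
def choose_prediction_mode(me_cost, merge_candidates, rd_cost):
    """Pick motion estimation mode or merge mode by minimum R-D cost.
    me_cost: cost of the motion vector estimation mode (Step S41).
    rd_cost(cand): assumed callable computing Cost = D + lambda * R (Equation 1).
    Returns (merge_flag, merge_index)."""
    best_cost, merge_flag, merge_index = me_cost, 0, 0         # Step S41
    for idx, cand in enumerate(merge_candidates):              # Steps S42, S46
        if cand is None:
            continue
        cost = rd_cost(cand)                                   # Step S43
        if cost < best_cost:                                   # Step S44
            best_cost, merge_flag, merge_index = cost, 1, idx  # Step S45
    return merge_flag, merge_index                             # Step S47
```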
According to the present embodiment of the present invention, a new bi-directionally predicted merging block candidate is calculated from the merging block candidates, to improve coding efficiency. More specifically, based on the merging block candidates calculated from the neighboring blocks and the co-located block, (a) a motion vector and a reference image index value of the prediction direction 0 and (b) a motion vector and a reference image index value of the prediction direction 1 are combined to generate a bi-directionally predicted combined merging block, which is then added to the merging block candidate list. As a result, coding efficiency can be improved. Furthermore, if there is an unavailable candidate in the merging block candidate list, a combined merging block is generated and the unavailable candidate is replaced by the combined merging block. In this way, coding efficiency can be improved without increasing the maximum number of merging block candidates.

It should be noted that, although the merge flag is always added to the bitstream in merge mode in the present embodiment, the present invention is not limited to this. For example, merge mode may be forcibly selected according to the shape or the like of the current block. In that case, the amount of information may be reduced by not adding the merge flag to the bitstream.

It should be noted that, although it is described in the present embodiment that, in merge mode, at least one motion vector and at least one reference image index value are copied from a neighboring block of the current block and then used to encode the current block, the present invention is not limited to this. For example, the following is also possible. In the same manner as in merge mode, using the merging block candidates generated as shown in FIG. 6, at least one motion vector and at least one reference image index value are copied from a neighboring block of the current block and then used to encode the current block. As a result, if all of the prediction error data of the current block is 0, a skip flag is set to 1 and added to the bitstream. On the other hand, if the prediction error data is not all 0, the skip flag is set to 0, and the skip flag and the prediction error data are added to the bitstream (merge skip mode).

It should also be noted that, although it is described in the present embodiment that, in merge mode, at least one motion vector and at least one reference image index value are copied from a neighboring block of the current block and then used to encode the current block, the present invention is not limited to this. For example, a motion vector of the motion vector estimation mode may be encoded using the merging block candidate list generated as shown in FIG. 6.
More specifically, it is possible for the motion vector of a fusion block candidate designated by the fusion block index value to be subtracted from the motion vector of the motion vector estimation mode to obtain a difference, and for the difference and the fusion block candidate index value to be added to the bit stream. In addition, the following is also possible. Using an index value of the reference image RefIdx_ME of the motion estimation mode and an index value of the reference image RefIdx_Merge of the fusion block candidate, scaling is performed on a motion vector MV_Merge of the fusion block candidate. Then, the scaled motion vector scaledMV_Merge of the fusion block candidate is subtracted from the motion vector of the motion estimation mode to obtain a difference. The difference and the fusion block candidate index value are added to the bit stream. This scaling can be performed using the following Equation 2.
scaledMV_Merge = MV_Merge x (POC(RefIdx_ME) - curPOC) / (POC(RefIdx_Merge) - curPOC) ... (Equation 2)
In this case, POC(RefIdx_ME) represents a location in the display order of a reference image indicated by the reference image index value RefIdx_ME, POC(RefIdx_Merge) represents a location in the display order of a reference image indicated by the reference image index value RefIdx_Merge, and curPOC represents a location in the display order of the image to be encoded.
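As an illustration of Equation 2, a minimal scaling sketch follows; the function name and the exact arithmetic are ours, since a real codec would clip and round the result to integer motion vector precision:

```python
from fractions import Fraction

def scale_merge_mv(mv_merge, poc_ref_me, poc_ref_merge, cur_poc):
    # Equation 2:
    # scaledMV_Merge = MV_Merge x (POC(RefIdx_ME) - curPOC)
    #                           / (POC(RefIdx_Merge) - curPOC)
    factor = Fraction(poc_ref_me - cur_poc, poc_ref_merge - cur_poc)
    return tuple(float(c * factor) for c in mv_merge)

# Example: a candidate MV (8, -4) pointing at the image with POC 4 is rescaled
# to a reference at POC 2, with the current image at POC 6; factor = 2.
scaled = scale_merge_mv((8.0, -4.0), poc_ref_me=2, poc_ref_merge=4, cur_poc=6)
print(scaled)  # (16.0, -8.0); the MV difference coded would be mv_me - scaled
```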
Mode 2
FIG. 10 is a block diagram showing a structure of a moving image decoding apparatus using a moving image decoding method according to embodiment 2 of the present description. As shown in FIG. 10, a moving image decoding apparatus 200 includes a variable length decoding unit 201, an inverse quantization unit 202, an inverse orthogonal transformation unit 203, a block memory 204, a frame memory 205, an intraprevision unit 206, an interprevision unit 207, an interprevision control unit 208, a fusion block candidate calculation unit 209, a colPic memory 210, an adder 211, and a switch 212. The variable length decoding unit 201 performs variable length decoding on an input bit stream to obtain the image type information, the fusion flag and the fusion block index, and a variable-length-decoded bit stream. The inverse quantization unit 202 inversely quantizes the variable-length-decoded bit stream. The inverse orthogonal transformation unit 203 transforms the inversely quantized bit stream from a frequency domain to an image domain, to generate forecast error image data. The block memory 204 holds an image sequence generated by adding the forecast error image data to the forecast image, block by block. The frame memory 205 holds the image sequence frame by frame. The intraprevision unit 206 performs intraprevision on the image sequence stored in the block memory 204, block by block, to generate forecast image data of a current block to be decoded. The interprevision unit 207 performs interprevision on the image sequence stored in the frame memory 205, frame by frame, to generate forecast image data of the current block to be decoded. The fusion block candidate calculation unit 209 derives fusion block candidates of the fusion mode, using colPic information such as the motion vectors of the neighboring blocks and of a colocalized block of the current block, which are stored in the colPic memory 210. In addition, the fusion block candidate calculation unit 209 assigns each of the derived fusion blocks a corresponding fusion block index value. Then, the fusion block candidate calculation unit 209 provides the fusion block candidates and the fusion block index values to the interprevision control unit 208. If the fusion flag decoded by the variable length decoding unit 201 is "0", in other words, if the forecast mode is not the fusion mode, the interprevision control unit 208 generates an interprevision image using the decoded information of the motion estimation mode. In addition, if the fusion flag is "1", in other words, if the forecast mode is the fusion mode, then the interprevision control unit 208 determines a motion vector and an index value of the reference image to be used in interprevision from the plurality of fusion block candidates, based on the decoded fusion block index value, to generate an interprevision image. In addition, the interprevision control unit 208 provides the colPic memory 210 with colPic information, which includes the motion vector and the like of the current block. The adder 211 adds the forecast data generated by the intraprevision unit 206 or the interprevision unit 207 to the forecast error data provided from the inverse orthogonal transformation unit 203, to generate a decoded image sequence. FIG. 11 is a flow chart of a summary of the processing flow of the moving image decoding method according to the present embodiment. The variable length decoding unit 201 decodes a fusion flag from a bit stream (Step S51). The interprevision control unit 208 determines whether or not the fusion flag is "1" (Step S52). As a result, if the fusion flag is "1" (Yes in Step S52), then the fusion block candidate calculation unit 209 specifies fusion block candidates from the neighboring blocks and a colocalized block of a current block to be decoded (Step S53). By the same method as shown in FIG. 8, the fusion block candidate calculation unit 209 generates a combined fusion block and updates the fusion block candidate list (Step S54). In the same way as in the encoding processing, for example, the fusion block candidate list shown in FIG. 6 is generated from the fusion block candidate list shown in FIG. 3B. The interprevision control unit 208 determines a fusion block from which at least one motion vector and at least one index value of the reference image are copied, according to the fusion block index value decoded by the variable length decoding unit 201, and generates an interprevision image using the determined fusion block (Step S55). On the other hand, in Step S52, if the fusion flag is "0", then the interprevision control unit 208 generates an interprevision image using the information of the motion vector estimation mode decoded by the variable length decoding unit 201 (Step S56). It should be noted that if the number of fusion block candidates specified or generated in steps S53 and S54 is one, it is possible not to decode a fusion block index value but rather to estimate the fusion block index value as 0.
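By way of illustration, the decoding flow of steps S51 to S56, including the generation of combined bidirectional candidates in step S54, might be sketched as follows; this is a minimal sketch, and the candidate structure, the bit stream reading callbacks, and the example values are all hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

MV = Tuple[int, int]

@dataclass(frozen=True)
class Candidate:
    mv_l0: Optional[MV] = None   # motion data of forecast direction 0
    ref_l0: Optional[int] = None
    mv_l1: Optional[MV] = None   # motion data of forecast direction 1
    ref_l1: Optional[int] = None

def add_combined_candidates(cands: List[Candidate]) -> List[Candidate]:
    # Step S54: combine the direction-0 part of one candidate with the
    # direction-1 part of another into a bidirectional combined fusion block.
    for a in list(cands):
        for b in list(cands):
            if a.mv_l0 is not None and b.mv_l1 is not None:
                c = Candidate(a.mv_l0, a.ref_l0, b.mv_l1, b.ref_l1)
                if c not in cands:
                    cands.append(c)
    return cands

def decode_merge_block(read_flag, read_index, neighbor_cands):
    merge_flag = read_flag()                     # S51: decode the fusion flag
    if merge_flag != 1:                          # S52 is "0": S56, estimation mode
        return None
    cands = add_combined_candidates(list(neighbor_cands))  # S53/S54
    idx = 0 if len(cands) == 1 else read_index(len(cands))
    return cands[idx]                            # S55: copy MV and reference index

# Example: one unidirectional candidate per direction; the combined
# bidirectional candidate (index 2) is selected by the decoded index.
cands = [Candidate(mv_l0=(2, 0), ref_l0=0), Candidate(mv_l1=(0, 4), ref_l1=1)]
print(decode_merge_block(lambda: 1, lambda n: 2, cands))
```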
In accordance with the present embodiment of the present invention, a new fusion block of the bidirectional forecast is calculated from the fusion block candidates, to appropriately decode a bit stream with improved coding efficiency. More specifically, based on the fusion block candidates calculated from the neighboring blocks and the colocalized block, (a) a motion vector and an index value of the reference image of the forecast direction 0 and (b) a motion vector and an index value of the reference image of the forecast direction 1 are combined to generate a combined fusion block of the bidirectional forecast, which is added to the fusion block candidate list. As a result, it is possible to decode the bit stream appropriately with improved coding efficiency. In addition, if there is an unavailable candidate in the fusion block candidate list, a combined fusion block is calculated and the unavailable candidate is replaced by the combined fusion block. In this way, it is possible to appropriately decode the bit stream with improved coding efficiency, without increasing the maximum number of fusion block candidates.
Mode 3
The processing described in each of the modalities can simply be implemented in an independent computer system, by recording, in a recording medium, a program for implementing the configurations of the moving image encoding method (image encoding method) and the moving image decoding method (image decoding method) described in each of the modalities. The recording medium can be any recording medium, as long as the program can be recorded, such as a magnetic disk, an optical disk, an optical and magnetic disk, an IC card, and a semiconductor memory. Subsequently, applications of the moving image encoding method (image encoding method) and of the moving image decoding method (image decoding method) described in each of the modalities, and systems that use them, will be described. The system is characterized by having an image encoding and decoding apparatus that includes an image encoding apparatus that uses the image encoding method and an image decoding apparatus that uses the image decoding method. Other configurations of the system can be changed as appropriate depending on the case. FIG. 12 illustrates a global configuration of a content provisioning system ex100 for the implementation of content distribution services. The area for providing communication services is divided into cells of desired size, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are arranged in each of the cells. The content provisioning system ex100 is connected to devices, such as a computer ex111, a personal digital assistant (PDA) ex112, a camera ex113, a cell phone ex114 and a gaming machine ex115, via the Internet ex101, an Internet service provider ex102, a telephone network ex104, as well as the base stations ex106 to ex110, respectively. However, the configuration of the content provisioning system ex100 is not limited to the configuration shown in FIG. 12, and a combination in which any of the elements are connected is acceptable. In addition, each device can be directly connected to the telephone network ex104, instead of being connected via the base stations ex106 to ex110, which are the fixed wireless stations. In addition, the devices can be interconnected to each other via short distance wireless communication and the like. The camera ex113, such as a digital video camera, is capable of capturing video. The camera ex116, such as a digital camera, is capable of capturing both still images and video.
In addition, the cell phone ex114 can be a device that meets any of the standards, such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and High Speed Packet Access (HSPA). Alternatively, the cell phone ex114 can be a Personal Handyphone System (PHS). In the content provisioning system ex100, a streaming server ex103 is connected to the camera ex113 and others via the telephone network ex104 and the base station ex109, which allows the distribution of images of a live show and the like. In such a distribution, a content (for example, video of a live music show) captured by the user using the camera ex113 is encoded as described above in each of the modalities (that is, the camera functions as the image encoding apparatus of the present invention), and the encoded content is transmitted to the streaming server ex103. On the other hand, the streaming server ex103 performs stream distribution of the transmitted content data to the clients upon their request. The clients include the computer ex111, the PDA ex112, the camera ex113, the cell phone ex114, and the game machine ex115, which are capable of decoding the encoded data mentioned above. Each of the devices that has received the distributed data decodes and reproduces the encoded data (that is, the devices function as the image decoding apparatus of the present invention). The captured data can be encoded by the camera ex113 or by the streaming server ex103 that transmits the data, or the encoding processes can be shared between the camera ex113 and the streaming server ex103. Similarly, the distributed data can be decoded by the clients or by the streaming server ex103, or the decoding processes can be shared between the clients and the streaming server ex103. In addition, the data of the still images and video captured not only by the camera ex113 but also by the camera ex116 can be transmitted to the streaming server ex103 through the computer ex111. The encoding processes can be performed by the camera ex116, the computer ex111, or the streaming server ex103, or shared among them. In addition, the encoding and decoding processes can be performed by an LSI ex500 generally included in each of the computer ex111 and the devices. The LSI ex500 can be configured as a single chip or a plurality of chips. Software for encoding and decoding video can be integrated into some type of recording medium (such as a CD-ROM, a flexible disk and a hard disk) that is readable by the computer ex111 and others, and the encoding and decoding can be performed using the software. In addition, when the cell phone ex114 is equipped with a camera, the image data obtained by the camera can be transmitted. The video data is data encoded by the LSI ex500 included in the cell phone ex114. In addition, the streaming server ex103 can be composed of servers and computers, and can decentralize data and process the decentralized data, record, or distribute the data. As described above, the clients can receive and reproduce the encoded data in the content provisioning system ex100. In other words, the clients can receive and decode information transmitted by the user, and reproduce the decoded data in real time in the content provisioning system ex100, so that the user who has no particular rights or equipment can implement personal broadcasting.
In addition to the example of the content provisioning system ex100, at least one of a moving image encoding apparatus (image encoding apparatus) and a moving image decoding apparatus (image decoding apparatus) described in each of the modalities can be implemented in a digital broadcasting system ex200 illustrated in FIG. 13. More specifically, a broadcast station ex201 communicates or transmits, via radio waves to a broadcast satellite ex202, multiplexed data obtained by multiplexing audio data and the like onto video data. The video data is data encoded by the moving image encoding method described in each of the modalities (that is, data encoded by the image encoding apparatus of the present invention). Upon receipt of the multiplexed data, the broadcast satellite ex202 transmits radio waves for broadcasting. Then, a home antenna ex204 with a satellite broadcast receiving function receives the radio waves. Next, a device such as a television (receiver) ex300 and a set top box (STB) ex217 decodes the received multiplexed data and reproduces the decoded data (that is, the device functions as the image decoding apparatus of the present invention). In addition, a reader/recorder ex218 (i) reads and decodes the multiplexed data recorded on a recording medium ex215, such as a DVD and a BD, or (ii) encodes video signals onto the recording medium ex215, and in some cases writes data obtained by multiplexing an audio signal onto the encoded data. The reader/recorder ex218 can include the moving image decoding apparatus or the moving image encoding apparatus as shown in each of the modalities. In this case, the reproduced video signals are displayed on a monitor ex219, and can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded. It is also possible to implement the moving image decoding apparatus in the set top box ex217 connected to the cable ex203 for cable television or to the antenna ex204 for satellite and/or terrestrial broadcasting, and to display the video signals on the monitor ex219 of the television ex300. The moving image decoding apparatus can be installed not in the set top box, but in the television ex300. FIG. 14 illustrates the television (receiver) ex300 that uses the moving image encoding method and the moving image decoding method described in each of the modalities. The television ex300 includes: a tuner ex301 that obtains or provides multiplexed data obtained by multiplexing audio data onto video data, via the antenna ex204 or the cable ex203, etc. that receives a broadcast; a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates data into multiplexed data to be supplied externally; and a multiplexing/demultiplexing unit ex303 that demultiplexes the modulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by a signal processing unit ex306 into data. The television ex300 also includes: a signal processing unit ex306 that includes an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data and encode audio data and video data, respectively (which function as the image encoding apparatus and the image decoding apparatus); and an output unit ex309 that includes a speaker ex307 that provides the decoded audio signal, and a display unit ex308, such as a screen, that displays the decoded video signal.
In addition, the television ex300 includes an interface unit ex317 that includes an operation input unit ex312 that receives an input of a user operation. In addition, the television ex300 includes a control unit ex310 that controls each constituent element of the television ex300 as a whole, and a power supply circuit unit ex311 that supplies power to each of the elements. Other than the operation input unit ex312, the interface unit ex317 can include: a bridge ex313 that is connected to an external device, such as the reader/recorder ex218; a slot unit ex314 to allow the attachment of a recording medium ex216, such as an SD card; a driver ex315 to be connected to an external recording medium, such as a hard disk; and a modem ex316 to be connected to a telephone network. In this case, the recording medium ex216 can electrically record information using a non-volatile/volatile semiconductor memory element for storage. The constituent elements of the television ex300 are connected to each other via a synchronous bus. First, the configuration in which the television ex300 decodes multiplexed data obtained externally via the antenna ex204 and others, and reproduces the decoded data, will be described. In the television ex300, upon a user operation via a remote control ex220 and others, the multiplexing/demultiplexing unit ex303 demultiplexes the multiplexed data demodulated by the modulation/demodulation unit ex302, under the control of the control unit ex310, which includes a CPU. In addition, in the television ex300, the audio signal processing unit ex304 decodes the demultiplexed audio data, and the video signal processing unit ex305 decodes the demultiplexed video data, using the decoding method described in each of the modalities. The output unit ex309 provides the decoded video signal and audio signal externally, respectively. When the output unit ex309 provides the video signal and the audio signal, the signals can be held temporarily in the temporary storages ex318 and ex319 and others, so that the signals are reproduced in synchronization with each other. In addition, instead of from a broadcast and others, the television ex300 can read multiplexed data from the recording media ex215 and ex216, such as a magnetic disk, an optical disk and an SD card. Next, the configuration in which the television ex300 encodes an audio signal and a video signal, and transmits the data externally or writes the data onto a recording medium, will be described. In the television ex300, upon a user operation via the remote control ex220 and others, the audio signal processing unit ex304 encodes an audio signal, and the video signal processing unit ex305 encodes a video signal, under the control of the control unit ex310, using the encoding method described in each of the modalities. The multiplexing/demultiplexing unit ex303 multiplexes the encoded video signal and audio signal, and provides the resulting signal externally. When the multiplexing/demultiplexing unit ex303 multiplexes the video signal and the audio signal, the signals can be temporarily stored in the temporary storages ex320 and ex321 and others, so that the signals are reproduced in synchronization with each other. In this case, the temporary storages ex318, ex319, ex320, and ex321 can be plural as illustrated, or at least one temporary storage can be shared in the television ex300.
In addition, data can be stored in a temporary storage so that overflow and underflow of the system between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, for example, can be avoided. In addition, the television ex300 can include a configuration for receiving an AV input from a microphone or a camera, other than the configuration for obtaining audio and video data from a broadcast or a recording medium, and can encode the obtained data. Although the television ex300 can encode, multiplex, and provide data externally in the description, it may also be capable of only receiving, decoding, and providing data externally, without the encoding, multiplexing, and external provision. In addition, when the reader/recorder ex218 reads or writes multiplexed data from or onto a recording medium, one of the television ex300 and the reader/recorder ex218 can decode or encode the multiplexed data, and the television ex300 and the reader/recorder ex218 can share the decoding or encoding. As an example, FIG. 15 illustrates the configuration of an information recording/reproduction unit ex400 when data is read or written from or onto an optical disk. The information recording/reproduction unit ex400 includes the constituent elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407, which will be described below. The optical head ex401 radiates a laser spot onto a recording surface of the recording medium ex215, which is an optical disk, to write information, and detects the light reflected from the recording surface of the recording medium ex215 to read the information. The modulation recording unit ex402 electrically drives a semiconductor laser included in the optical head ex401 and modulates the laser light according to the recorded data. The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting the light reflected from the recording surface, using a photodetector included in the optical head ex401, and demodulates the reproduction signal by separating a signal component recorded on the recording medium ex215, to reproduce the necessary information. The temporary storage ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from the recording medium ex215. The disk motor ex405 rotates the recording medium ex215. The servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotation drive of the disk motor ex405, in order to follow the laser spot. The system control unit ex407 controls the information recording/reproduction unit ex400 as a whole. The reading and writing processes can be implemented by the system control unit ex407 using various information stored in the temporary storage ex404, and generating and adding new information as necessary, and by the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406, which record and reproduce information through the optical head ex401 while being operated in a coordinated manner. The system control unit ex407 includes, for example, a microprocessor, and performs the processing by causing a computer to execute a reading and writing program. Although the optical head ex401 radiates a laser spot in the description, it can perform high-density recording using near-field light. FIG. 16 illustrates the recording medium ex215, which is the optical disk.
On the recording surface of the recording medium ex215, guide grooves are formed in a spiral, and an information track ex230 records, in advance, address information indicating an absolute position on the disk according to changes in the shape of the guide grooves. This address information includes information for determining the positions of recording blocks ex231 that are a unit for recording data. Reproducing the information track ex230 and reading the address information, in a device that records and reproduces data, can lead to the determination of the positions of the recording blocks. In addition, the recording medium ex215 includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The data recording area ex233 is an area for use in recording user data. The inner circumference area ex232 and the outer circumference area ex234, which are respectively inside and outside the data recording area ex233, are for specific uses, except for the recording of user data. The information recording/reproduction unit ex400 reads and writes encoded audio data, encoded video data, or multiplexed data obtained by multiplexing encoded audio and video data, from and onto the data recording area ex233 of the recording medium ex215. Although an optical disk that has a single layer, such as a DVD and a BD, is described as an example in the description, the optical disk is not limited to this type, and can be an optical disk that has a multilayer structure and that can be recorded on a part other than the surface. In addition, the optical disk can have a structure for multidimensional recording/reproduction, such as recording information using lights of colors with different wavelengths in the same part of the optical disk, and recording information having different layers from various angles. In addition, a car ex210 that has an antenna ex205 can receive data from the satellite ex202 and others, and reproduce video on a display device such as a car navigation system ex211 arranged in the car ex210, in the digital broadcasting system ex200. In this case, the configuration of the car navigation system ex211 will be a configuration, for example, that includes a GPS receiving unit in addition to the configuration illustrated in FIG. 14. The same applies to the configuration of the computer ex111, of the cell phone ex114, and others. FIG. 17A illustrates the cell phone ex114 that uses the moving image encoding method and the moving image decoding method described in the modalities. The cell phone ex114 includes: an antenna ex350 for transmitting and receiving radio waves via the base station ex110; a camera unit ex365 capable of capturing moving and still images; and a display unit ex358, such as a liquid crystal display, for displaying data such as video decoded and captured by the camera unit ex365 or received by the antenna ex350. The cell phone ex114 further includes: a main body unit that includes an operation key unit ex366; an audio output unit ex357, such as a speaker, for audio output; an audio input unit ex356, such as a microphone, for audio input; a memory unit ex367 for storing captured video or still images, recorded audio, encoded or decoded data of received video, still images, e-mails or the like; and a slot unit ex364, which is an interface unit for a recording medium that stores data in the same way as the memory unit ex367. Next, an example of a configuration of the cell phone ex114 will be described with reference to FIG. 17B.
In the cell phone ex114, a main control unit ex360, designed to control each unit of the main body as a whole, including the display unit ex358 as well as the operation key unit ex366, is mutually connected, via a synchronous bus ex370, to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, a liquid crystal display (LCD) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, a slot unit ex364, and a memory unit ex367. When a call-end key or a power key is turned on by a user operation, the power supply circuit unit ex361 supplies the respective units with power from a battery pack, in order to activate the cell phone ex114. In the cell phone ex114, the audio signal processing unit ex354 converts the audio signals collected by the audio input unit ex356 in voice conversation mode into digital audio signals, under the control of the main control unit ex360, which includes a CPU, a ROM, and a RAM. Then, the modulation/demodulation unit ex352 performs spread spectrum processing on the digital audio signals, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data, to transmit the resulting data via the antenna ex350. In addition, in the cell phone ex114, the transmitting and receiving unit ex351 amplifies the data received by the antenna ex350 in voice conversation mode and performs frequency conversion and analog-to-digital conversion on the data. Then, the modulation/demodulation unit ex352 performs inverse spread spectrum processing on the data, and the audio signal processing unit ex354 converts the data into analog audio signals, in order to output them via the audio output unit ex357. In addition, when an e-mail in data communication mode is transmitted, the text data of the e-mail, input by operating the operation key unit ex366 and others of the main body, are sent to the main control unit ex360 via the operation input control unit ex362. The main control unit ex360 causes the modulation/demodulation unit ex352 to perform spread spectrum processing on the text data, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the resulting data, to transmit the data to the base station ex110 via the antenna ex350. When an e-mail is received, processing that is approximately the inverse of the processing for transmitting an e-mail is performed on the received data, and the resulting data is supplied to the display unit ex358. When video, still images, or video and audio in data communication mode are transmitted, the video signal processing unit ex355 compresses and encodes the video signals supplied from the camera unit ex365 using the moving image encoding method shown in each of the modalities (that is, it functions as the image encoding apparatus of the present invention), and transmits the encoded video data to the multiplexing/demultiplexing unit ex353. On the other hand, when the camera unit ex365 captures video, still images and the like, the audio signal processing unit ex354 encodes the audio signals collected by the audio input unit ex356, and transmits the encoded audio data to the multiplexing/demultiplexing unit ex353.
The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354, using a predetermined method. Then, the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 performs spread spectrum processing on the multiplexed data, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data, to transmit the resulting data via the antenna ex350. When receiving data of a video file which is linked to a Web page and others in data communication mode, or when receiving an e-mail with video and/or audio attached, in order to decode the multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bit stream and an audio data bit stream, and supplies the video signal processing unit ex355 with the encoded video data and the audio signal processing unit ex354 with the encoded audio data, via the synchronous bus ex370. The video signal processing unit ex355 decodes the video signal using a moving image decoding method corresponding to the moving image encoding method shown in each of the modalities (that is, it functions as the image decoding apparatus of the present invention), and then the display unit ex358 displays, for example, the video and the still images included in the video file linked to the Web page, via the LCD control unit ex359. In addition, the audio signal processing unit ex354 decodes the audio signal, and the audio output unit ex357 provides the audio. In addition, similarly to the television ex300, a terminal such as the cell phone ex114 probably has 3 types of implementation configurations, including not only (i) a transmitting and receiving terminal that includes both an encoding apparatus and a decoding apparatus, but also (ii) a transmitting terminal that includes only an encoding apparatus and (iii) a receiving terminal that includes only a decoding apparatus. Although the digital broadcasting system ex200 receives and transmits the multiplexed data obtained by multiplexing audio data onto video data in the description, the multiplexed data can be data obtained by multiplexing, not audio data, but character data related to the video, onto the video data, and can be not multiplexed data but the video data itself. Therefore, the moving image encoding method and the moving image decoding method in each of the modalities can be used in any of the devices and systems described here. In this way, the advantages described in each of the modalities can be obtained. In addition, the present invention is not limited to the illustrated modalities, and various modifications and revisions are possible without departing from the scope of the present invention.
Mode 4
Video data can be generated by switching, when necessary, between (i) the moving image encoding method or the moving image encoding apparatus shown in each of the modalities and (ii) a moving image encoding method or a moving image encoding apparatus in conformity with a different standard, such as MPEG-2, MPEG4-AVC, and VC-1. In this case, when a plurality of video data that conforms to the different standards is generated and is then decoded, the decoding methods need to be selected to suit the different standards.
However, since it is not possible to detect which standard each of the plurality of video data to be decoded conforms to, there is the problem that an appropriate decoding method cannot be selected. In order to solve this problem, the multiplexed data obtained by multiplexing audio data and others onto video data has a structure that includes identification information indicating which standard the video data conforms to. The specific structure of the multiplexed data, which includes the video data generated by the moving image encoding method and by the moving image encoding apparatus shown in each of the modalities, will be described below. The multiplexed data is a digital stream in the MPEG-2 Transport Stream format. FIG. 18 illustrates a structure of the multiplexed data. As illustrated in FIG. 18, the multiplexed data can be obtained by multiplexing at least one of a video stream, an audio stream, a presentation graphics (PG) stream, and an interactive graphics (IG) stream. The video stream represents the primary video and the secondary video of a movie, the audio stream represents a primary audio part and a secondary audio part to be mixed with the primary audio part, and the presentation graphics stream represents the subtitles of the movie. In this case, the primary video is a normal video to be displayed on a screen, and the secondary video is a video to be displayed in a smaller window within the primary video. In addition, the interactive graphics stream represents an interactive screen to be generated by arranging GUI components on a screen. The video stream is encoded by the moving image encoding method or by the moving image encoding apparatus shown in each of the modalities, or by a moving image encoding method or by a moving image encoding apparatus in conformity with a conventional standard, such as MPEG-2, MPEG4-AVC, and VC-1. The audio stream is encoded according to a standard, such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, and linear PCM. Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is allocated to the video stream to be used in the video of a movie, 0x1100 to 0x111F are allocated to the audio streams, 0x1200 to 0x121F are allocated to the presentation graphics streams, 0x1400 to 0x141F are allocated to the interactive graphics streams, 0x1B00 to 0x1B1F are allocated to the video streams to be used for the secondary video of the movie, and 0x1A00 to 0x1A1F are allocated to the audio streams to be used for the secondary audio to be mixed with the primary audio.
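As an illustration only, the PID allocation above can be summarized as a small lookup helper; the function name and the fallback label are ours:

```python
def stream_category_of(pid: int) -> str:
    # Maps a PID to the stream category it addresses, following the
    # allocation described in the text.
    if pid == 0x1011:
        return "primary video"
    if 0x1100 <= pid <= 0x111F:
        return "audio"
    if 0x1200 <= pid <= 0x121F:
        return "presentation graphics"
    if 0x1400 <= pid <= 0x141F:
        return "interactive graphics"
    if 0x1B00 <= pid <= 0x1B1F:
        return "secondary video"
    if 0x1A00 <= pid <= 0x1A1F:
        return "secondary audio"
    return "other (e.g. PAT, PMT, PCR)"

print(stream_category_of(0x1011))  # primary video
print(stream_category_of(0x1A05))  # secondary audio
```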
FIG. 19 schematically illustrates how data is multiplexed. First, a video stream ex235 composed of video frames and an audio stream ex238 composed of audio frames are transformed into a stream of PES packets ex236 and a stream of PES packets ex239, and further into TS packets ex237 and TS packets ex240, respectively. Similarly, the data of a presentation graphics stream ex241 and the data of an interactive graphics stream ex244 are transformed into a stream of PES packets ex242 and a stream of PES packets ex245, and further into TS packets ex243 and TS packets ex246, respectively. These TS packets are multiplexed into a stream to obtain the multiplexed data ex247. FIG. 20 illustrates in more detail how a video stream is stored in a stream of PES packets. The first bar in FIG. 20 shows a stream of video frames in a video stream. The second bar shows the stream of PES packets. As indicated by the arrows denoted yy1, yy2, yy3, and yy4 in FIG. 20, the video stream is divided into images, such as I images, B images and P images, each of which is a video presentation unit, and the images are stored in a payload of each of the PES packets. Each of the PES packets has a PES header, and the PES header stores a Presentation Time Stamp (PTS) indicating the display time of the image and a Decoding Time Stamp (DTS) indicating the decoding time of the image. FIG. 21 illustrates a format of the TS packets to be finally written into the multiplexed data. Each TS packet is a 188-byte fixed-length packet that includes a 4-byte TS header, which has information such as a PID for identifying a stream, and a 184-byte TS payload for storing data. The PES packets are divided and stored in the TS payloads, respectively. When a BD-ROM is used, each TS packet receives a 4-byte TP_Extra_Header, thus resulting in 192-byte source packets. The source packets are written into the multiplexed data. The TP_Extra_Header stores information such as an Arrival_Time_Stamp (ATS). The ATS shows a transfer start time at which each of the TS packets must be transferred to a PID filter. The source packets are arranged in the multiplexed data as shown at the bottom of FIG. 21. The numbers that increase from the head of the multiplexed data are called source packet numbers (SPNs). Each of the TS packets included in the multiplexed data includes not only streams of audio, video, subtitles and others, but also a Program Association Table (PAT), a Program Map Table (PMT), and a Program Clock Reference (PCR). The PAT shows what a PID in a PMT used in the multiplexed data indicates, and the PID of the PAT itself is registered as zero. The PMT stores the PIDs of the streams of video, audio, subtitles and others included in the multiplexed data, and the attribute information of the streams corresponding to the PIDs. The PMT also has several descriptors relating to the multiplexed data. The descriptors have information such as copy control information, which shows whether or not the copying of the multiplexed data is permitted. The PCR stores STC time information corresponding to an ATS that shows when the PCR packet is transferred to a decoder, in order to obtain synchronization between an Arrival Time Clock (ATC), which is a time axis of the ATSs, and a System Time Clock (STC), which is a time axis of the PTSs and DTSs. FIG. 22 illustrates in detail the data structure of the PMT. A PMT header is arranged at the top of the PMT. The PMT header describes the length of the data included in the PMT and others. A plurality of descriptors relating to the multiplexed data is arranged after the PMT header. Information such as the copy control information is described in the descriptors. After the descriptors, a plurality of stream information relating to the streams included in the multiplexed data is arranged. Each stream information includes stream descriptors, which describe information such as a stream type for identifying the compression codec of a stream, a stream PID, and stream attribute information (such as a frame rate or an aspect ratio). The stream descriptors are equal in number to the number of streams in the multiplexed data. When the multiplexed data is recorded on a recording medium and others, it is recorded together with multiplexed data information files. Each of the multiplexed data information files is management information of the multiplexed data, as shown in FIG. 23. The multiplexed data information files are in one-to-one correspondence with the multiplexed data, and each file includes the multiplexed data information, stream attribute information, and an entry map.
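As an illustration of the TS packet and source packet layout of FIG. 21 described above, a minimal parsing sketch follows; the 30-bit ATS extraction is an assumption about the TP_Extra_Header layout, and the adaptation field that may precede a TS payload is ignored for brevity:

```python
import struct

def parse_source_packet(pkt: bytes):
    # 192-byte source packet = 4-byte TP_Extra_Header + 188-byte TS packet.
    assert len(pkt) == 192
    (extra,) = struct.unpack(">I", pkt[0:4])
    ats = extra & 0x3FFFFFFF            # Arrival_Time_Stamp (assumed 30-bit field)
    ts = pkt[4:]                        # the 188-byte TS packet
    assert ts[0] == 0x47                # TS sync byte
    pid = ((ts[1] & 0x1F) << 8) | ts[2] # 13-bit PID from the 4-byte TS header
    payload = ts[4:]                    # 184-byte TS payload
    return ats, pid, payload

# Example: a synthetic source packet carrying PID 0x1011 (primary video).
ts_header = bytes([0x47, 0x10, 0x11, 0x10])
packet = struct.pack(">I", 123456) + ts_header + bytes(184)
print(parse_source_packet(packet)[:2])  # (123456, 4113), i.e. PID 0x1011
```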
As illustrated in FIG. 23, the multiplexed data information includes a system rate, a playback start time, and a playback end time. The system rate indicates the maximum transfer rate at which a system target decoder, which will be described later, transfers the multiplexed data to a PID filter. The intervals of the ATSs included in the multiplexed data are set at values not higher than the system rate. The playback start time indicates the PTS of a video frame at the head of the multiplexed data. An interval of one frame is added to the PTS of a video frame at the end of the multiplexed data, and the PTS is set as the playback end time. As shown in FIG. 24, attribute information is recorded in the stream attribute information for each PID of each stream included in the multiplexed data. Each piece of attribute information carries different information, depending on whether the corresponding stream is a video stream, an audio stream, a presentation graphics stream, or an interactive graphics stream. Each piece of video stream attribute information carries information that includes which type of compression codec is used to compress the video stream, and the resolution, aspect ratio and frame rate of the image data included in the video stream. Each piece of audio stream attribute information carries information that includes which type of compression codec is used to compress the audio stream, how many channels are included in the audio stream, which language the audio stream supports, and how high the sampling frequency is. The video stream attribute information and the audio stream attribute information are used to initialize a decoder before the player plays back the information. In the present modality, of the multiplexed data, the stream type included in the PMT is used. In addition, when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving image encoding method or the moving image encoding apparatus described in each of the modalities includes a step or a unit for allocating unique information, which indicates video data generated by the moving image encoding method or by the moving image encoding apparatus in each of the modalities, to the stream type included in the PMT or to the video stream attribute information. With this configuration, the video data generated by the moving image encoding method or by the moving image encoding apparatus described in each of the modalities can be distinguished from video data that conforms to another standard. In addition, FIG. 25 illustrates the steps of the moving image decoding method according to the present modality. In Step exS100, the stream type included in the PMT or the video stream attribute information is obtained from the multiplexed data. Then, in Step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving image encoding method or by the moving image encoding apparatus in each of the modalities. When it is determined that the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving image encoding method or by the moving image encoding apparatus in each of the modalities, in Step exS102, the decoding is performed by the moving image decoding method in each of the modalities. In addition, when the stream type or the video stream attribute information indicates conformance with the conventional standards, such as MPEG-2, MPEG4-AVC, and VC-1, in Step exS103, the decoding is performed by a moving image decoding method in conformity with the conventional standards. Therefore, the allocation of a new unique value to the stream type or to the video stream attribute information makes it possible to determine whether or not the moving image decoding method or the moving image decoding apparatus described in each of the modalities can perform the decoding. Even when multiplexed data that conforms to a different standard is input, an appropriate decoding method or apparatus can be selected. In this way, it becomes possible to decode the information without any error. In addition, the moving image encoding method or apparatus, or the moving image decoding method or apparatus in the present modality can be used in the devices and systems described above.
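A minimal sketch of the selection of steps exS100 to exS103 follows; the stream type value for the video data of the modalities and the decoder stubs are hypothetical placeholders:

```python
NEW_STANDARD_STREAM_TYPE = 0xA0   # hypothetical unique value allocated in the PMT

def decode_with_present_method(data):
    return "decoded by the moving image decoding method of the modalities"

def decode_with_conventional_method(data):
    return "decoded by a conventional-standard decoding method"

def decode_video(stream_type, data):
    # exS100: the stream type has been read from the PMT of the multiplexed data.
    if stream_type == NEW_STANDARD_STREAM_TYPE:        # exS101
        return decode_with_present_method(data)        # exS102
    return decode_with_conventional_method(data)       # exS103: MPEG-2, MPEG4-AVC, VC-1, ...

print(decode_video(0xA0, b""))
print(decode_video(0x1B, b""))   # e.g. a conventional AVC stream type
```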
Mode 5
Each of the moving image encoding method, the moving image encoding apparatus, the moving image decoding method, and the moving image decoding apparatus in each of the modalities is typically achieved in the form of an integrated circuit or a Large Scale Integrated (LSI) circuit. As an example of the LSI, FIG. 26 illustrates a configuration of the LSI ex500 that is made into one chip. The LSI ex500 includes the elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509, which will be described below, and the elements are connected to each other via a bus ex510. The power supply circuit unit ex505 is activated by supplying each of the elements with power when the power supply circuit unit ex505 is turned on. For example, when encoding is performed, the LSI ex500 receives an AV signal from a microphone ex117, the camera ex113, and others via an AV IO ex509, under the control of a control unit ex501 that includes a CPU ex502, a memory controller ex503, a stream controller ex504, and a driving frequency control unit ex512. The received AV signal is temporarily stored in an external memory ex511, such as an SDRAM. Under the control of the control unit ex501, the stored data is segmented into data portions according to the processing amount and speed, to be transmitted to a signal processing unit ex507. Then, the signal processing unit ex507 encodes an audio signal and/or a video signal. In this case, the encoding of the video signal is the encoding described in each of the modalities. In addition, the signal processing unit ex507 sometimes multiplexes the encoded audio data and the encoded video data, and a stream IO ex506 provides the multiplexed data externally. The provided multiplexed data is transmitted to the base station ex107, or written onto the recording medium ex215. When data sets are multiplexed, the data must be temporarily stored in the temporary storage ex508, so that the data sets are synchronized with each other. Although the memory ex511 is an element located outside the LSI ex500, it can be included in the LSI ex500. The temporary storage ex508 is not limited to one temporary storage, but can be composed of a plurality of temporary storages. In addition, the LSI ex500 can be made into one chip or a plurality of chips. In addition, although the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, and the driving frequency control unit ex512, the configuration of the control unit ex501 is not limited to this. For example, the signal processing unit ex507 can also include a CPU.
The inclusion of another CPU in the signal processing unit ex507 can improve the processing speed. In addition, as another example, the CPU ex502 can serve as or be a part of the signal processing unit ex507 and, for example, can include an audio signal processing unit. In such a case, the control unit ex501 includes the signal processing unit ex507 or the CPU ex502 that includes a part of the signal processing unit ex507. The name used here is LSI, but it can also be called IC, system LSI, super LSI, or ultra LSI, depending on the degree of integration. In addition, the ways of achieving integration are not limited to the LSI, and a special circuit or a general purpose processor and so on can also achieve the integration. A Field Programmable Gate Array (FPGA) that can be programmed after the manufacture of the LSI, or a reconfigurable processor that allows reconfiguration of the connection or configuration of an LSI, can be used for the same purpose. In the future, with the advancement of semiconductor technology, an entirely new technology may replace the LSI. The functional blocks can be integrated using such technology. There is the possibility that the present invention will be applied to that technology.
Mode 6
When the video data generated by the moving image encoding method or by the moving image encoding apparatus described in each of the modalities is decoded, in comparison with when video data that conforms to a conventional standard, such as MPEG-2, MPEG4-AVC, and VC-1, is decoded, the amount of processing is likely to increase. As such, the LSI ex500 needs to be set at a driving frequency higher than that of the CPU ex502 to be used when video data that conforms to the conventional standard is decoded. However, when the driving frequency is set higher, there is the problem of increased energy consumption. In order to solve this problem, a moving image decoding apparatus, such as the television ex300 and the LSI ex500, is configured to determine which standard the video data conforms to, and to switch between the driving frequencies according to the determined standard. FIG. 27 illustrates a configuration ex800 in the present modality. A driving frequency switching unit ex803 sets a driving frequency to a higher driving frequency when the video data is generated by the moving image encoding method or by the moving image encoding apparatus described in each of the modalities. Then, the driving frequency switching unit ex803 instructs a decoding processing unit ex801, which executes the moving image decoding method described in each of the modalities, to decode the video data. When the video data conforms to the conventional standard, the driving frequency switching unit ex803 sets a driving frequency to a driving frequency lower than that of the video data generated by the moving image encoding method or by the moving image encoding apparatus described in each of the modalities. Then, the driving frequency switching unit ex803 instructs the decoding processing unit ex802, which conforms to the conventional standard, to decode the video data. More specifically, the driving frequency switching unit ex803 includes the CPU ex502 and the driving frequency control unit ex512 in FIG. 26. In this case, the decoding processing unit ex801, which executes the moving image decoding method described in each of the modalities, and the decoding processing unit ex802, which conforms to the conventional standard, correspond to the signal processing unit ex507 in FIG. 26. The CPU ex502 determines which standard the video data conforms to.
Then, the driving frequency control unit ex512 determines a driving frequency based on a signal from the CPU ex502. In addition, the signal processing unit ex507 decodes the video data based on the signal from the CPU ex502. For example, the identification information described in modality 4 is probably used to identify the video data. The identification information is not limited to the one described in modality 4, and can be any information, as long as it indicates which standard the video data conforms to. For example, when it is possible to determine which standard the video data conforms to based on an external signal for determining whether the video data is used for a television or a disk, etc., the determination can be made based on such an external signal. In addition, the CPU ex502 selects a driving frequency based, for example, on a lookup table in which the standards of the video data are associated with the driving frequencies, as shown in FIG. 29. The driving frequency can be selected by storing the lookup table in the temporary storage ex508 and in an internal memory of an LSI, and with reference to the lookup table by the CPU ex502. FIG. 28 illustrates the steps for executing a method in the present modality. First, in Step exS200, the signal processing unit ex507 obtains the identification information from the multiplexed data. Then, in Step exS201, the CPU ex502 determines whether or not the video data is generated by the encoding method and by the encoding apparatus described in each of the modalities, based on the identification information. When the video data is generated by the moving image encoding method and by the moving image encoding apparatus described in each of the modalities, in Step exS202, the CPU ex502 transmits a signal for setting the driving frequency to a higher driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512 sets the driving frequency to the higher driving frequency. On the other hand, when the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG4-AVC, and VC-1, in Step exS203, the CPU ex502 transmits a signal for setting the driving frequency to a lower driving frequency to the driving frequency control unit ex512. Then, the driving frequency control unit ex512 sets the driving frequency to a driving frequency lower than that of the case where the video data is generated by the moving image encoding method and by the moving image encoding apparatus described in each of the modalities. In addition, together with the switching of the driving frequencies, the energy conservation effect can be improved by changing the voltage to be applied to the LSI ex500 or to an apparatus that includes the LSI ex500. For example, when the driving frequency is set lower, the voltage to be applied to the LSI ex500 or to the apparatus that includes the LSI ex500 is probably set at a voltage lower than that of the case where the driving frequency is set higher.
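A minimal sketch of the driving frequency switching of steps exS200 to exS203 above follows; the lookup table values and the control unit interface are hypothetical, chosen only to illustrate the flow:

```python
# Hypothetical lookup table associating the standard of the video data with
# a driving frequency (cf. FIG. 29); the values are illustrative only.
DRIVING_FREQUENCY_HZ = {
    "modalities": 500_000_000,    # higher driving frequency
    "conventional": 350_000_000,  # lower driving frequency
}

class DrivingFrequencyControlUnit:
    def set_frequency(self, hz):
        print(f"driving frequency set to {hz} Hz")

def switch_driving_frequency(identification_information, ex512):
    # exS200/exS201: decide from the identification information whether the
    # video data was generated by the encoding method of the modalities.
    if identification_information == "modalities":
        ex512.set_frequency(DRIVING_FREQUENCY_HZ["modalities"])    # exS202
    else:
        ex512.set_frequency(DRIVING_FREQUENCY_HZ["conventional"])  # exS203

switch_driving_frequency("modalities", DrivingFrequencyControlUnit())
```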
In addition, when the amount of processing for decoding is larger, the driving frequency can be set higher, and when the amount of processing for decoding is smaller, the driving frequency can be set lower, as the method for setting the driving frequency. Thus, the setting method is not limited to the ones described above. For example, when the amount of processing for decoding video data in conformity with MPEG4-AVC is larger than the amount of processing for decoding video data generated by the moving image encoding method and by the moving image encoding apparatus described in each of the modalities, the driving frequency is probably set in reverse order to the setting described above. In addition, the method for setting the driving frequency is not limited to the method for setting the driving frequency lower. For example, when the identification information indicates that the video data is generated by the moving image encoding method and by the moving image encoding apparatus described in each of the modalities, the voltage to be applied to the LSI ex500 or to the apparatus that includes the LSI ex500 is probably set higher. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG4-AVC, and VC-1, the voltage to be applied to the LSI ex500 or to the apparatus that includes the LSI ex500 is probably set lower. As another example, when the identification information indicates that the video data is generated by the moving image encoding method and by the moving image encoding apparatus described in each of the modalities, the driving of the CPU ex502 probably does not have to be suspended. When the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG4-AVC, and VC-1, the CPU ex502 is probably suspended at a given time, because the CPU ex502 has extra processing capacity. Even when the identification information indicates that the video data is generated by the moving image encoding method and by the moving image encoding apparatus described in each of the modalities, in the case where the CPU ex502 has extra processing capacity, the CPU ex502 is probably suspended at a given time. In such a case, the suspension time is probably set shorter than that of the case where the identification information indicates that the video data conforms to the conventional standard, such as MPEG-2, MPEG4-AVC, and VC-1. Consequently, the energy conservation effect can be improved by switching between the driving frequencies according to the standard to which the video data conforms. In addition, when the LSI ex500 or the apparatus that includes the LSI ex500 is driven using a battery, the life of the battery can be extended with the energy conservation effect.
For example, the moving image decoding method described in each of the modalities and the moving image decoding method that conforms to MPEG4-AVC have, partly in common, the details of processing, such as entropy decoding, inverse quantization, deblocking filtering and motion-compensated prediction. The details of processing to be shared probably include the use of a decoding processing unit ex902 that conforms to MPEG4-AVC. In contrast, a dedicated decoding processing unit ex901 is probably used for the other processing that is unique to the present invention. Since the present invention is characterized by the prediction processing in particular, for example, the dedicated decoding processing unit ex901 is used for the prediction processing. Otherwise, the decoding processing unit is probably shared for one of the entropy decoding, inverse quantization, deblocking filtering and motion-compensated prediction, or for all of the processing. The decoding processing unit for implementing the moving image decoding method described in each of the modalities may be shared for the processing to be shared, and a dedicated decoding processing unit may be used for the processing exclusive to MPEG4-AVC. Furthermore, ex1000 in FIG. 30B shows another example in which the processing is partially shared. This example uses a configuration that includes a dedicated decoding processing unit ex1001 that supports the processing unique to the present invention, a dedicated decoding processing unit ex1002 that supports the processing unique to another conventional standard, and a decoding processing unit ex1003 that supports the processing to be shared between the moving image decoding method in the present invention and the conventional moving image decoding method. In this case, the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for the processing of the present invention and the processing of the conventional standard, respectively, and they may be units capable of implementing general processing. Furthermore, the configuration of the present modality can be implemented by the LSI ex500. Therefore, reducing the scale of the circuit of an LSI and reducing the cost are possible by sharing the decoding processing unit for the processing to be shared between the moving image decoding method in the present invention and the moving image decoding method in accordance with the conventional standard.

Industrial Applicability

The moving image encoding method and the moving image decoding method according to the present description can be applied to any multimedia data and can improve the compression rate. For example, they are suitable as the moving image encoding method and the moving image decoding method for accumulation, transmission, communications and the like using mobile phones, DVD players, personal computers and the like.
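The partial sharing described in Mode 7 above can be pictured, under assumptions, as a dispatcher that sends the stages common to both decoding methods to a shared unit (ex902 in FIG. 30A, or ex1003 in FIG. 30B) and routes only the prediction stage to a dedicated unit (ex901 or ex1001). The function names and stub bodies below are hypothetical stand-ins; only the routing idea is taken from the text.

```c
#include <stdio.h>
#include <stdbool.h>

/* Decoding stages named in the text as candidates for sharing. */
typedef enum {
    ENTROPY_DECODING,
    INVERSE_QUANTIZATION,
    DEBLOCKING_FILTERING,
    MOTION_COMPENSATION,
    PREDICTION            /* processing unique to the present invention */
} stage_t;

/* Stub stand-ins for the processing units of FIGS. 30A and 30B. */
static void shared_unit(stage_t s)    { printf("shared unit handles stage %d\n", s); }
static void dedicated_unit(stage_t s) { printf("dedicated unit handles stage %d\n", s); }

/* Route each stage: common stages go to the shared unit regardless of
 * which standard produced the bitstream; prediction goes to a dedicated
 * unit when the stream was coded by the method of the modalities. */
void decode_stage(stage_t s, bool coded_by_modalities)
{
    if (s != PREDICTION) {
        shared_unit(s);
    } else if (coded_by_modalities) {
        dedicated_unit(s);   /* e.g., ex901 in FIG. 30A */
    } else {
        shared_unit(s);      /* conventional-standard prediction, as in
                              * the ex902 configuration of FIG. 30A */
    }
}

int main(void)
{
    decode_stage(ENTROPY_DECODING, true);   /* shared stage    */
    decode_stage(PREDICTION, true);         /* dedicated stage */
    decode_stage(PREDICTION, false);        /* shared stage    */
    return 0;
}
```

The design point is that only the stage that differentiates the two methods needs dedicated hardware; everything else can reuse one circuit, which is exactly the scale and cost reduction the modality claims.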
Reference List
100 moving image encoding apparatus
101 orthogonal transformation unit
102 quantization unit
103 inverse quantization unit
104 inverse orthogonal transformation unit
105 block memory
106 frame memory
107 intraprevision unit
108 interprevision unit
109 interprevision control unit
110 image type determination unit
111 fusion block candidate calculation unit
112 colPic memory
113 variable length encoding unit
114 subtractor
115 adder
116 switching unit
200 moving image decoding apparatus
201 variable length decoding unit
202 inverse quantization unit
203 inverse orthogonal transformation unit
204 block memory
205 frame memory
206 intraprevision unit
207 interprevision unit
208 interprevision control unit
209 fusion block candidate calculation unit
210 colPic memory
211 adder
212 switching unit
Claims (4)

[0001] 1. Moving image decoding method for decoding a current block, characterized by the fact that it comprises: determining a first fusion block candidate in a list of fusion block candidates and a second fusion block candidate in the list of fusion block candidates, the first fusion block candidate having at least (i) a first motion vector that was used to decode a first block neighboring the current block, (ii) a first forecast direction corresponding to the first motion vector and (iii) a first reference image index value to identify a first reference image corresponding to the first motion vector, and the second fusion block candidate having at least (i) a second motion vector that was used to decode a second block neighboring the current block and different from the first block, (ii) a second forecast direction corresponding to the second motion vector and (iii) a second reference image index value to identify a second reference image corresponding to the second motion vector, where the second forecast direction is different from the first forecast direction and the list of fusion block candidates includes a plurality of fusion block candidates, one of which is selected to be used to decode the current block; generating a bidirectionally predicted combined fusion block candidate by assigning the first motion vector and the first reference image index to the first forecast direction of the combined fusion block candidate and assigning the second motion vector and the second reference image index to the second forecast direction of the combined fusion block candidate; and decoding the current block using a fusion block candidate selected from the plurality of fusion block candidates, including the first fusion block candidate, the second fusion block candidate and the combined fusion block candidate.

[0002] 2. Moving image decoding method according to claim 1, characterized by the fact that, when the fusion block candidate that is selected to be used to decode the current block is the combined fusion block candidate, the combined fusion block candidate is used for the first forecast direction and the second forecast direction.

[0003] 3. Moving image decoding method according to claim 1, characterized by the fact that, when the fusion block candidate that is selected to be used to decode the current block is the combined fusion block candidate, the first motion vector and the second motion vector of the combined fusion block candidate are used in a direction corresponding to the first forecast direction and a direction corresponding to the second forecast direction.

[0004] 4. Moving image decoding apparatus that decodes a current block, characterized by the fact that it comprises: a determination unit configured to determine a first fusion block candidate in a list of fusion block candidates and a second fusion block candidate in the list of fusion block candidates, the first fusion block candidate having at least (i) a first motion vector that was used to decode a first block neighboring the current block, (ii) a first forecast direction corresponding to the first motion vector and (iii) a first reference image index value to identify a first reference image corresponding to the first motion vector, and the second fusion block candidate having at least (i) a second motion vector that was used to decode a second block neighboring the current block and different from the first block, (ii) a second forecast direction corresponding to the second motion vector and (iii) a second reference image index value to identify a second reference image corresponding to the second motion vector, where the second forecast direction is different from the first forecast direction and the list of fusion block candidates includes a plurality of fusion block candidates, one of which is selected to be used to decode the current block; a generation unit configured to generate a bidirectionally predicted combined fusion block candidate by assigning the first motion vector and the first reference image index to the first forecast direction of the combined fusion block candidate and assigning the second motion vector and the second reference image index to the second forecast direction of the combined fusion block candidate; and a decoding unit configured to decode the current block using a fusion block candidate selected from the plurality of fusion block candidates, including the first fusion block candidate, the second fusion block candidate and the combined fusion block candidate.
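As a non-authoritative sketch of the combination step recited in these claims, the following C fragment fuses the direction-0 motion data of a first fusion block candidate with the direction-1 motion data of a second candidate into a single bidirectionally predicted combined candidate. The struct layout and field names are assumptions for illustration, not claim language.

```c
#include <stdbool.h>

typedef struct { int x, y; } mv_t;  /* a motion vector */

/* Hypothetical representation of a fusion block candidate; a real
 * decoder would carry more state than this. */
typedef struct {
    bool has_dir0, has_dir1;   /* which forecast directions are present */
    mv_t mv0, mv1;             /* motion vector per forecast direction  */
    int  ref_idx0, ref_idx1;   /* reference image index per direction   */
} fusion_cand_t;

/* Builds the bidirectionally predicted combined fusion block candidate:
 * the first motion vector and first reference image index are assigned
 * to forecast direction 0, and the second motion vector and second
 * reference image index to forecast direction 1. Returns false when the
 * two candidates do not supply complementary forecast directions. */
bool make_combined_candidate(const fusion_cand_t *first,
                             const fusion_cand_t *second,
                             fusion_cand_t *combined)
{
    if (!first->has_dir0 || !second->has_dir1)
        return false;

    combined->has_dir0 = true;
    combined->mv0      = first->mv0;
    combined->ref_idx0 = first->ref_idx0;

    combined->has_dir1 = true;
    combined->mv1      = second->mv1;
    combined->ref_idx1 = second->ref_idx1;
    return true;
}
```

Per the claims, the combined candidate is then placed in the fusion block candidate list alongside the first and second candidates, and one candidate from that list is selected to decode the current block.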
Legal status:
2017-06-27 | B25A | Requested transfer of rights approved | Owner name: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
2017-07-11 | B25A | Requested transfer of rights approved | Owner name: SUN PATENT TRUST (US)
2018-12-18 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-04-22 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2020-12-08 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-01-19 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: term of validity: 20 (twenty) years counted from 28/02/2012, subject to the legal conditions.
Priority:
Application number | Filing date | Patent title
US201161474507P | 2011-04-12 |
US61/474,507 | 2011-04-12 |
PCT/JP2012/001351 (WO2012140821A1) | 2011-04-12 | Motion-video encoding method, motion-video encoding apparatus, motion-video decoding method, motion-video decoding apparatus, and motion-video encoding/decoding apparatus
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|