Patent Abstract:
Context state and probability initialization for context adaptive entropy encoding. In one example, a context adaptive entropy encoding apparatus may include an encoder configured to determine one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values. The encoder can be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the initialization parameters. The encoder can be further configured to initialize the contexts based on the initial context states. In some examples, the initialization parameters can be included in one or more tables, where, to determine the initialization parameters, the encoder can be configured to map the initialization parameter index values to the initialization parameters in the tables. Alternatively, the encoder can be configured to calculate the initialization parameters using the initialization parameter index values and one or more formulas.
Publication Number: BR112014010052B1
Application Number: R112014010052-7
Filing Date: 2012-11-01
Publication Date: 2021-07-20
Inventors: Marta Karczewicz; Xianglin Wang; Liwei GUO; Joel Sole Rojals
Applicant: Qualcomm Incorporated
Primary IPC:
Patent Specification:

[0001] This application claims the benefit of US Provisional Application No. 61/555,469, filed November 3, 2011, US Provisional Application No. 61/556,808, filed November 7, 2011, US Provisional Application No. 61/557,785, filed November 9, 2011, and US Provisional Application No. 61/560,107, filed November 15, 2011, the entire contents of each of which are incorporated herein by reference.
Technical Field
[0002] This description relates to entropy coding of video or similar data, and more particularly to context-adaptive entropy coding.
Background
[0003] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop and desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radio telephones, so-called "smartphones", video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), the High Efficiency Video Coding (HEVC) standard currently under development, and extensions to such standards. Video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
[0004] Video compression techniques perform spatial prediction (intra-picture) and/or temporal prediction (inter-picture) to reduce or remove the redundancy inherent in video sequences. For block-based video encoding, a video slice (i.e., a video frame or a portion of a video frame) can be divided into video blocks, which may also be referred to as tree blocks, coding units (CUs), and/or coding nodes. Video blocks in an intracoded (I) slice of an image are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same image. Video blocks in an intercoded (P or B) slice of an image can use spatial prediction with respect to reference samples in neighboring blocks in the same image, or temporal prediction with respect to reference samples in other reference images. Images can be referred to as frames, and reference images can be referred to as reference frames.
[0005] Spatial or temporal prediction results in a prediction block for a block to be encoded. Residual data represents pixel differences between the original block to be encoded and the prediction block. An intercoded block is encoded according to a motion vector that points to a block of reference samples forming the prediction block, and residual data indicating the difference between the encoded block and the prediction block. An intracoded block is encoded according to an intracoding mode and the residual data. For additional compression, the residual data can be transformed from the pixel domain into a transform domain, resulting in residual transform coefficients, which can then be quantized. The quantized transform coefficients, initially arranged in a two-dimensional array, can be scanned to produce a one-dimensional vector of transform coefficients, and entropy coding can be applied to achieve even more compression.
Summary
[0006] This description describes techniques for encoding data such as video data. For example, the techniques can be used to encode video data, such as residual transform coefficients and/or other syntax elements, generated by video encoding processes. In particular, the description describes techniques that can promote efficient encoding of video data using context-adaptive entropy encoding processes. The description discusses video encoding for illustration purposes only. As such, the techniques described in this description may be applicable to encoding other types of data.
[0007] As an example, the techniques in this description may allow an encoding system or device to encode various types of data, such as, for example, video data, more efficiently than when using other techniques. In particular, the techniques described here may allow the encoding system or device to have less complexity relative to other systems or devices when encoding data using a context-adaptive entropy encoding process, such as, for example, a context-adaptive binary arithmetic coding (CABAC) process. For example, the techniques can reduce an amount of information stored within the encoding system or device and/or transmitted to or from the encoding system or device for purposes of initializing one or more contexts of the context adaptive entropy encoding process. As an example, the amount of information can be reduced by storing and/or transmitting initialization parameter index values that indicate the initialization parameters used to initialize the contexts, rather than storing and/or transmitting the initialization parameters directly.
[0008] Additionally, as another example, the techniques can improve data compression when the encoding system or device is configured to encode the data using the context adaptive entropy encoding process. For example, the techniques can improve data compression by allowing the encoding system or device to initialize one or more contexts of the context-adaptive entropy encoding process so that the contexts include relatively more accurate initial probabilities compared to initial probabilities determined using other context initialization techniques. In particular, the contexts can be initialized based on temporal layer information associated with the data, using quantization parameter information and reference context states and various relationships, or using one or more probability offsets. Additionally, the techniques can further improve data compression by allowing the encoding system or device to subsequently update the context probabilities, using the same or similar techniques as described above, so that the updated probabilities are relatively more accurate compared to probabilities updated using other context probability updating techniques.
[0009] In one example, a context adaptive entropy encoding method may include determining one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values. The method may also include determining one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. Additionally, the method may include initializing the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0010] In another example, an apparatus for context adaptive entropy encoding may include an encoder. In this example, the encoder can be configured to determine one or more initialization parameters for a context-adaptive entropy encoding process based on one or more initialization parameter index values. The encoder can be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. The encoder can be further configured to initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0011] In another example, a device for context adaptive entropy encoding may include means for determining one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values. The device may further include means for determining one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. The device may further include means for initializing the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0012] The techniques described in this description can be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an apparatus can be realized as one or more integrated circuits, one or more processors, discrete logic, or any combination thereof. If implemented in software, the software can run on one or more processors, such as one or more microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or digital signal processors (DSPs). Software that performs the techniques may be initially stored on a non-transient or tangible computer-readable storage medium and loaded and run on one or more processors.
[0013] Accordingly, this description also contemplates a non-transient computer-readable storage medium having stored thereon instructions that, upon execution, can cause one or more processors to perform context-adaptive entropy encoding. In this example, the instructions can cause the one or more processors to determine one or more initialization parameters for a context-adaptive entropy encoding process based on one or more initialization parameter index values. The instructions may further cause the one or more processors to determine one or more initial context states to initialize one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. The instructions can further cause the one or more processors to initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0014] Details of one or more examples are presented in the attached drawings and in the description below. Other features, objects, and advantages will be apparent from the description, drawings, and claims.
Brief Description of Drawings
[0015] Figure 1 is a block diagram illustrating an example of a video encoding and decoding system that can implement the techniques for context state and probability initialization for context adaptive entropy encoding, consistent with the techniques of this description;
[0016] Figure 2 is a block diagram illustrating an example of a video encoder that can implement the techniques for context state and probability initialization for context adaptive entropy encoding, consistent with the techniques of this description;
[0017] Figure 3 is a block diagram illustrating an example of a video decoder that can implement the techniques for context state and probability initialization for context adaptive entropy encoding, consistent with the techniques of this description;
[0018] Figure 4 is a conceptual diagram illustrating an example of a temporal hierarchy of a video sequence encoded using scalable video encoding, consistent with the techniques of this description;
[0019] Figures 5 to 8 are flowcharts illustrating example methods of initializing one or more contexts and probabilities of a context adaptive entropy encoding process, consistent with the techniques of this description.
Detailed Description
[0020] This description describes techniques for encoding data such as video data. For example, the techniques can be used to encode video data, such as residual transform coefficients and/or other syntax elements, generated by video encoding processes. In particular, the description describes techniques that can promote efficient encoding of video data using context-adaptive entropy encoding processes. The description describes video encoding for illustration purposes only. As such, the techniques described in this description can be applicable to encoding other types of data.
[0021] In this description, the term "coding" refers to encoding that takes place in an encoder or decoding that takes place in a decoder. Similarly, the term "coder" refers to an encoder, a decoder, or a combined encoder/decoder (e.g., "CODEC"). The terms coder, encoder, decoder, and CODEC all refer to specific machines designed to code (i.e., encode and/or decode) data, such as video data, consistent with this description.
[0022] As an example, the techniques in this description may allow an encoding system or device to encode various types of data, such as, for example, video data, more efficiently than when using other techniques. In particular, the techniques described here may allow the encoding system or device to be less complex than other systems or devices when encoding data using a context-adaptive entropy encoding process, such as, for example, a context adaptive binary arithmetic encoding (CABAC) process. For example, the techniques can reduce an amount of information stored within the encoding system or device and/or transmitted to or from the encoding system or device for purposes of initializing one or more contexts of the context adaptive entropy encoding process. As an example, the amount of information can be reduced by storing and/or transmitting initialization parameter index values that indicate the initialization parameters used to initialize the contexts, rather than storing and/or transmitting the initialization parameters directly.
[0023] Additionally, as another example, the techniques can improve data compression when the encoding system or device is configured to encode the data using the context-adaptive entropy encoding process. For example, the techniques can improve data compression by allowing the encoding system or device to initialize one or more contexts of the context-adaptive entropy encoding process so that the contexts include relatively more accurate initial probabilities compared to initial probabilities determined using other context initialization techniques. In particular, the contexts can be initialized based on temporal layer information associated with the data, using reference context state and quantization parameter information and various relationships, or using one or more probability offsets. Additionally, the techniques can further improve data compression by allowing the encoding system or device to subsequently update the context probabilities, using the same or similar techniques as described above, so that the updated probabilities are relatively more accurate compared to probabilities updated using other context probability updating techniques.
[0024] Accordingly, when using the techniques in this description, there may be a relative bit savings for an encoded bitstream that includes the encoded data, plus other syntax information (for example, the initialization parameter index values), transmitted to or from the encoding system or device, and a relative reduction in complexity of the encoding system or device used to encode the data.
[0025] The techniques of this description can, in some examples, be used with any context adaptive entropy coding methodology, including context adaptive variable length coding (CAVLC), CABAC, syntax-based context adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy coding (PIPE), or another context-adaptive entropy coding methodology. CABAC is described here for illustrative purposes only and without limitation to the techniques broadly described in this description. Furthermore, the techniques described here can be applied generally to encoding other types of data, for example, in addition to video data.
[0026] Figure 1 is a block diagram illustrating an example of a video encoding and decoding system that can implement the techniques for context state and probability initialization for context adaptive entropy encoding, consistent with the techniques of this description. As illustrated in Figure 1, the system 10 includes a source device 12 that generates encoded video data to be decoded at a future time by a destination device 14. The source device 12 and the destination device 14 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone devices such as so-called "smartphones", so-called "smartpads", televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, or the like. In some cases, source device 12 and destination device 14 may be equipped for wireless communication.
[0027] The destination device 14 may receive the encoded video data to be decoded via a connection 16. The connection 16 may comprise any type of medium or device capable of moving the encoded video data from the source device 12 to the destination device 14. In one example, the connection 16 may comprise a communication medium to allow the source device 12 to transmit the encoded video data directly to the destination device 14 in real time. The encoded video data can be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to the destination device 14. The communication medium can comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium can form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 14.
[0028] Alternatively, the encoded video data can be sent from the output interface 22 to a storage device 24. Similarly, the encoded video data can be accessed from the storage device 24 by the input interface 26. Storage device 24 may include any of a variety of locally accessed or distributed data storage media, such as a hard disk, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other digital storage medium suitable for storing encoded video data. In a further example, the storage device 24 may correspond to a file server or other intermediate storage device that can hold the encoded video generated by the source device 12. The destination device 14 may access the video data stored on the storage device 24 via streaming or download. The file server can be any type of server capable of storing the encoded video data and transmitting that encoded video data to the destination device 14. Illustrative file servers include a web server (e.g., for a website), an FTP server, network attached storage (NAS) devices, or a local disk drive. The destination device 14 can access the encoded video data over any standard data connection, including an Internet connection. This can include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both that is suitable for accessing the encoded video data stored on a file server. The transmission of encoded video data from storage device 24 may be a streaming transmission, a download transmission, or a combination of both.
[0029] The techniques in this description are not necessarily limited to wireless applications or configurations. The techniques can be applied to video encoding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions, for example, via the Internet, encoding digital video for storage on a data storage medium, decoding digital video stored on a data storage medium, or other applications. In some examples, system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
[0030] In the example of Figure 1, the source device 12 includes a video source 18, a video encoder 20, and an output interface 22. In some cases, the output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. In source device 12, video source 18 may include a source such as a video capture device, e.g., a video camera, a video archive containing previously captured video, a video feed interface for receiving video from a video content provider, and/or a computer graphics system for generating computer graphics data as the source video, or a combination of such sources. As an example, if the video source 18 is a video camera, the source device 12 and the destination device 14 can form so-called camera phones or videophones. However, the techniques described in this description can be applicable to video encoding in general, and can be applied to wireless and/or wired applications.
[0031] The captured, pre-captured, or computer-generated video can be encoded by the video encoder 20. The encoded video data can be transmitted directly to the destination device 14 through the output interface 22 of the source device 12. The encoded video data may also (or alternatively) be stored on storage device 24 for later access by the destination device 14 or other devices, for decoding and/or playback.
[0032] The destination device 14 includes an input interface 26, a video decoder 30, and a display device 28. In some cases, the input interface 26 may include a receiver and/or a modem. Input interface 26 of destination device 14 receives the encoded video data via connection 16, or from storage device 24. The encoded video data communicated via connection 16, or provided to storage device 24, may include a variety of syntax elements generated by video encoder 20 for use by a video decoder, such as video decoder 30, in decoding the video data. Such syntax elements may be included with encoded video data transmitted on a communication medium, stored on a storage medium, or stored on a file server.
[0033] The display device 28 may be integrated with, or may be external to, the destination device 14. In some examples, the destination device 14 may include an integrated display device, such as, for example, display device 28, and may also be configured to interface with an external display device. In other examples, the destination device 14 may itself be a display device. In general, the display device 28 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display device.
[0034] The video encoder 20 and the video decoder 30 can operate in accordance with a video compression standard, such as the High Efficiency Video Coding (HEVC) standard currently under development by the Joint Collaborative Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Motion Picture Experts Group (MPEG), and can conform to the HEVC Test Model (HM). Alternatively, video encoder 20 and video decoder 30 may operate in accordance with other proprietary or industry standards, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC) (hereinafter H.264/AVC), or extensions of such standards. The techniques in this description, however, are not limited to any particular encoding standard. Other examples of video compression standards include MPEG-2 and ITU-T H.263. A recent draft of the HEVC standard, referred to as "HEVC Working Draft 8" or "WD8", is described in JCTVC-J1003_d7, Bross et al., "High Efficiency Video Coding (HEVC) Text Specification Draft 8", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 10th Meeting: Stockholm, SE, 11-20 July 2012.
[0035] Although not illustrated in Figure 1, in some respects the video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include suitable MUX-DEMUX units, or other hardware and software, to handle encoding both audio and video into a common data stream or separate data streams. If applicable, in some examples the MUX-DEMUX units can conform to the ITU H.223 multiplexer protocol, or other protocols such as the User Datagram Protocol (UDP).
[0036] Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable encoder or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software on a suitable non-transient computer-readable storage medium and execute the instructions in hardware using one or more processors to perform the techniques of this description. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (e.g., CODEC) in a respective device.
[0037] HEVC standardization efforts are based on an evolving model of a video encoding device referred to as the HEVC Test Model (HM). HM assumes several additional capabilities of video encoding devices over existing devices according to, for example, ITU-T H.264/AVC. For example, whereas H.264 provides nine intra-prediction encoding modes, HM can provide as many as thirty-five intra-prediction encoding modes.
[0038] In general, the HM working model describes that a video frame or image can be divided into a sequence of tree blocks or largest coding units (LCUs) that include both luma and chroma samples. A tree block has a similar purpose to a macroblock of the H.264 standard. A slice includes a number of consecutive tree blocks in coding order. A video frame or image can be divided into one or more slices. Each tree block can be split into coding units (CUs) according to a quadtree. For example, a tree block, as a root node of the quadtree, can be split into four child nodes, and each child node can in turn be a parent node and be split into another four child nodes. A final, unsplit child node, as a leaf node of the quadtree, comprises an encoding node, that is, an encoded block of video. Syntax data associated with an encoded bit stream can define a maximum number of times a tree block can be split, and can also define a minimum size of the encoding nodes.
[0039] A CU includes an encoding node and prediction units (PUs) and transformation units (TUs) associated with the encoding node. A CU size corresponds to an encoding node size and can be square in shape. CU size can range from 8 x 8 pixels up to the tree block size, with a maximum of 64 x 64 pixels or larger. Each CU can contain one or more PUs and one or more TUs. Syntax data associated with a CU can describe, for example, the partitioning of the CU into one or more PUs. Partitioning modes can differ according to whether the CU is coded in skip or direct mode, coded in intraprediction mode, or coded in interprediction mode. PUs can be partitioned to be non-square in shape. Syntax data associated with a CU can also describe, for example, partitioning of the CU into one or more TUs according to a quadtree. A TU can be square or non-square in shape.
[0040] The HEVC standard allows for transformations according to the TUs, which can be different for different CUs. TUs are typically sized based on the size of the PUs within a given CU defined for a partitioned LCU, although this may not always be the case. TUs are typically the same size as or smaller than the PUs. In some examples, residual samples corresponding to a CU can be subdivided into smaller units using a quadtree structure known as a "residual quadtree" (RQT). RQT leaf nodes can be referred to as transformation units (TUs). Pixel difference values associated with the TUs can be transformed to produce transformation coefficients, which can be quantized.
[0041] In general, a PU includes data related to the prediction process. For example, when the PU is intramode encoded, the PU can include data describing an intraprediction mode for the PU. As another example, when the PU is intermode encoded, the PU can include data defining a motion vector for the PU. The data defining the motion vector for a PU can describe, for example, a horizontal component of the motion vector, a vertical component of the motion vector, a resolution for the motion vector (for example, one-quarter pixel precision or one-eighth pixel precision), a reference image to which the motion vector points, and/or a reference image list (for example, List 0, List 1, or List C) for the motion vector.
[0042] In general, a TU is used for the transformation and quantization processes. A given CU having one or more PUs can also include one or more TUs. Following prediction, the video encoder 20 can calculate residual values corresponding to the PU. The residual values comprise pixel difference values that can be transformed into transformation coefficients, quantized, and scanned using the TUs to produce serialized transformation coefficients for entropy encoding. This description typically uses the term "video block" to refer to an encoding node of a CU. In some specific cases, this description may also use the term "video block" to refer to a tree block, that is, an LCU, or a CU that includes an encoding node and PUs and TUs.
[0043] A video sequence typically includes a series of video frames or images. A group of images (GOP) generally comprises a series of one or more of the video images. A GOP can include syntax data in a GOP header, in a header of one or more of the images, or elsewhere, that describes a number of images included in the GOP. Each slice of an image can include slice syntax data that describes an encoding mode for the respective slice. Video encoder 20 typically operates on video blocks within individual video slices in order to encode the video data. A video block can correspond to an encoding node within a CU. Video blocks can have fixed or variable sizes and can differ in size according to a specified encoding standard.
[0044] As an example, HM supports prediction in various PU sizes. Assuming the size of a particular CU is 2N x 2N, HM supports intraprediction on PU sizes of 2N x 2N or N x N, and interprediction on symmetric PU sizes of 2N x 2N, 2N x N, N x 2N, or N x N. HM also supports asymmetric partitioning for interprediction on PU sizes of 2N x nU, 2N x nD, nL x 2N, and nR x 2N. In asymmetric partitioning, one direction of a CU is not split, while the other direction is split into 25% and 75%. The portion of the CU corresponding to the 25% partition is indicated by an "n" followed by an indication of "Up", "Down", "Left", or "Right". So, for example, "2N x nU" refers to a 2N x 2N CU that is split horizontally with a 2N x 0.5N PU on top and a 2N x 1.5N PU on the bottom.
[0045] In this description, "N x N" and "N by N" may be used interchangeably to refer to the pixel dimensions of a video block in terms of vertical and horizontal dimensions, e.g., 16 x 16 pixels or 16 by 16 pixels. In general, a 16 x 16 block will have 16 pixels in a vertical direction (y = 16) and 16 pixels in a horizontal direction (x = 16). Likewise, an N x N block will typically have N pixels in a vertical direction and N pixels in a horizontal direction, where N represents a non-negative integer value. The pixels in a block can be arranged in rows and columns. Furthermore, blocks do not necessarily need to have the same number of pixels in the horizontal direction as in the vertical direction. For example, blocks can comprise N x M pixels, where M is not necessarily equal to N.
[0046] Following intrapredictive or interpredictive coding using the PUs of a CU, the video encoder 20 can calculate residual data for the TUs of the CU. The PUs can comprise pixel data in the spatial domain (also referred to as the "pixel domain") and the TUs can comprise coefficients in the transform domain following application of a transform, e.g., a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform, to the residual video data. The residual data can correspond to pixel differences between pixels of the unencoded image and prediction values corresponding to the PUs. Video encoder 20 can form the TUs including the residual data for the CU, and then transform the TUs to produce the transform coefficients for the CU.
[0047] Following any transforms to produce transform coefficients, the video encoder 20 can perform quantization of the transform coefficients. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients, providing additional compression. The quantization process can reduce the bit depth associated with some or all of the coefficients. For example, an n-bit value can be rounded down to an m-bit value during quantization, where n is greater than m.
[0048] In some examples, the video encoder 20 may use one or more predefined scan orders to scan the quantized transform coefficients to produce a serialized vector that can be entropy encoded. The predefined scan orders can vary based on factors such as the encoding mode, or the size or shape of the transform used in the encoding process. Additionally, in other examples, the video encoder 20 may perform an adaptive scan, for example, using a scan order that is periodically adapted. The scan order may adapt differently for different blocks, e.g., based on encoding mode or other factors. In either case, after scanning the quantized transform coefficients to form the serialized "one-dimensional" vector, the video encoder 20 can entropy encode the one-dimensional vector, for example, according to CAVLC, CABAC, SBAC, PIPE, or another context-adaptive entropy coding methodology. Video encoder 20 may also entropy encode other syntax elements associated with the encoded video data for use by video decoder 30 in decoding the video data. Additionally, video decoder 30 may perform the same or similar context adaptive entropy encoding techniques as video encoder 20 to decode the encoded video data and any additional syntax elements associated with the video data.
[0049] As an example, to perform CABAC, the video encoder 20 can assign a context within a context model to a symbol to be transmitted. The context can refer to, for example, whether neighboring values of the symbol are non-zero or not. As another example, to perform CAVLC, video encoder 20 can select a variable length code for a symbol to be transmitted. Codewords in CAVLC, and in variable-length coding generally, can be constructed so that relatively shorter codes correspond to more likely symbols, while relatively longer codes correspond to less likely symbols. In this way, the use of CAVLC can achieve a bit savings over, for example, using equal-length codewords for each symbol to be transmitted. The probability determination can be based on a context assigned to the symbol. Additionally, the techniques described above are equally applicable to the video decoder 30 used to decode one or more symbols encoded by the video encoder 20 in the manner described above.
[0050] In general, according to the techniques of H.264/AVC and certain draft versions of HEVC described above, encoding a data symbol (for example, a syntax element, or a part of a syntax element, for an encoded block of video data) using CABAC may involve the following steps: (1) Binarization: If a symbol to be encoded has a non-binary value, it is mapped to a sequence of so-called "bins". Each bin can have a value of "0" or "1". (2) Context Assignment: Each bin (e.g., in a so-called "regular" coding mode) is assigned to a context. A context model determines how a context is calculated for (e.g., assigned to) a particular bin based on information available for the bin, such as values of previously encoded symbols, or a bin number (e.g., a position of the bin within a sequence of bins that includes the bin). (3) Bin Encoding: Bins are encoded with an arithmetic encoder. To encode a given bin, the arithmetic encoder requires as an input a probability (e.g., an estimated probability) of a value of the bin, that is, the probability that the value of the bin equals "0", and the probability that the value of the bin equals "1". For example, a context assigned to the bin, as described above in step (2), may indicate this bin value probability. As an example, a probability of each context (for example, an estimated probability indicated by each context) can be represented by an integer value associated with the context, called the "state" of the context. Each context has a context state (for example, a particular context state at any given time). As such, a context state (that is, an estimated probability) is the same for bins assigned to a given context, and differs between contexts (e.g., varies among different contexts, and, in some cases, for a given context over time). Additionally, to encode the bin, the arithmetic encoder also requires as an input the value of the bin, as described above. (4) State Update: The probability (e.g., context state) for a selected context is updated based on the actual encoded value of the bin. For example, if the bin value was "1", the probability of "1s" is increased, and if the bin value was "0", the probability of "0s" is increased, for the selected context.
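As a concrete illustration of steps (2) through (4), the following C++ sketch shows a minimal context representation and a simplified state update. It assumes the H.264/AVC-style 64-state, MPS/LPS design discussed below; the transition rule is a deliberately simplified placeholder rather than the standard's transition tables, and all names are illustrative:

```cpp
#include <cstdint>

// A minimal context following the 64-state, MPS/LPS design described above.
struct Context {
    uint8_t state;  // 0..62: index of the estimated LPS probability
    uint8_t valMPS; // value of the most probable symbol, 0 or 1
};

// Step (4), "State Update": adjust the context after encoding one bin.
void updateContext(Context& ctx, int binVal) {
    if (binVal == ctx.valMPS) {
        // MPS observed: move toward higher confidence in the MPS.
        if (ctx.state < 62) ctx.state++;
    } else {
        // LPS observed: at the lowest state the MPS flips; otherwise move
        // toward lower confidence (real codecs use a transIdxLPS table here).
        if (ctx.state == 0) ctx.valMPS = 1 - ctx.valMPS;
        else ctx.state--;
    }
}
```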
[0051] Many aspects of this description are specifically described in the context of CABAC. Additionally, PIPE, CAVLC, SBAC or other context-adaptive entropy coding techniques may use principles similar to those described herein with reference to CABAC. In particular, these and other context-adaptive entropy coding techniques can use context-state initialization, and can therefore also benefit from the techniques in this description.
[0052] Additionally, as described above, the CABAC techniques of H.264/AVC include the use of context states, where each context state is implicitly related to a probability. There are variations of CABAC where a probability (e.g., of a "0" or "1") of a given symbol being encoded is used directly, that is, where the probability (or an integer version of the probability) is the context state itself, as will be described in more detail below.
[0053] Prior to initiation of a CABAC encoding or decoding process, an initial context state may need to be assigned for each context of the CABAC process. In H.264/AVC, and in certain draft versions of HEVC, a linear relationship, or "model," is used to designate the initial context states for each context. Specifically, each context requires predefined initialization parameters, a slope ("m") and an intersection ("n"), used to determine the initial context state for the context. For example, according to H.264/AVC and certain draft versions of HEVC, an initial context state for a given context can be derived using the following relationships:
iInitState = ((m * iQP) >> 4) + n (1)
iInitState = min(max(1, iInitState), 126) (2)
[0054] In equation 1, "m" and "n" correspond to the initialization parameters for the context being initialized (that is, for the initial context state "iInitState" being determined for the context). Additionally, "iQP", which may be referred to as an initialization quantization parameter (QP), may correspond to a QP for the data (e.g., a block of video data) being encoded. The QP value for the data, and thus the iQP value, can be set, for example, frame by frame, slice by slice, or block by block. Additionally, the values of the initialization parameters "m" and "n" may vary for different contexts. Additionally, equation 2 can be referred to as a "clipping" function, which can be used to ensure that the value of "iInitState" varies between "1" and "126", thus allowing the value to be represented using 7 bits of data.
[0055] In some examples, "iInitState" can be further converted into an actual context state of the context in CABAC, plus a "most probable symbol (MPS)/least probable symbol (LPS)" decision, using the following expressions:
if iInitState <= 63: valMPS = 0 and pStateIdx = 63 - iInitState
otherwise: valMPS = 1 and pStateIdx = iInitState - 64
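A minimal C++ sketch of this initialization path, combining equations (1) and (2) with the state/MPS conversion of paragraph [0055]; the Clip3 helper mirrors the clipping function described for equation (2), and the function name is illustrative:

```cpp
#include <algorithm>
#include <cstdint>

// Clip3 mirrors the "clipping" function of equation (2).
inline int Clip3(int lo, int hi, int v) { return std::min(std::max(v, lo), hi); }

// Derive the initial context state from the slope "m", intersection "n",
// and initialization QP, then split it into a 6-bit state and an MPS flag.
void initContext(int m, int n, int iQP, uint8_t& pStateIdx, uint8_t& valMPS) {
    int iInitState = ((m * iQP) >> 4) + n;   // equation (1): linear model
    iInitState = Clip3(1, 126, iInitState);  // equation (2): 7-bit range
    if (iInitState <= 63) { valMPS = 0; pStateIdx = 63 - iInitState; }
    else                  { valMPS = 1; pStateIdx = iInitState - 64; }
}
```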
[0056] In some CABAC examples, in cases where a context state for a context directly corresponds to a context probability, as described above, the following relationships can be used to initialize a particular context:
iP0 = asCtxInit[0] + asCtxInit[1] * (iQP - iQPreper) (3)
iP0 = min(max(1, iP0), 32767) (4)
[0057] The value "iP0" may indicate a probability of a symbol being encoded, as indicated directly by the context state "c" for a given context. Accordingly, in this example, there is no need to convert the symbol probability "iP0" into MPS and LPS symbols and an actual context state as described above. Furthermore, as illustrated, the relationship, or "model," of equation 3 is also linear, and is based on two initialization parameters, that is, "asCtxInit[0]" and "asCtxInit[1]". For example, "iQP" can once again correspond to the QP for the data being encoded. Additionally, "iQPreper" can correspond to a constant, such as an offset, used to modify iQP in some examples.
[0058] In the example described above, the symbol probability "iP0" is expressed as an integer using 15 bits of data, where a non-zero minimum probability is "1" and a maximum probability is "32767". In this example, a "real" probability is derived using the expression "iP0/32768". Additionally, equation 4 can also be referred to as a "clipping" function, and can be used to ensure that the value of "iP0" varies between "1" and "32767", thus allowing the value to be represented using 15 bits of data.
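A sketch of the probability-based initialization of equations (3) and (4), assuming the linear form implied by paragraph [0057] (the exact arithmetic of equation (3) is an assumption); the function name is illustrative:

```cpp
#include <algorithm>

inline int Clip3(int lo, int hi, int v) { return std::min(std::max(v, lo), hi); }

// Initialize a context whose state is the 15-bit probability itself.
int initProbability(const int asCtxInit[2], int iQP, int iQPreper) {
    int iP0 = asCtxInit[0] + asCtxInit[1] * (iQP - iQPreper); // equation (3), assumed form
    iP0 = Clip3(1, 32767, iP0);                               // equation (4): 15-bit range
    // The "real" probability of the encoded symbol is iP0 / 32768.
    return iP0;
}
```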
[0059] The approach described above has several disadvantages. As an example, since the CABAC process described above with reference to H.264/AVC and certain draft versions of HEVC includes a significant number of contexts (e.g., as many as 369 contexts), each context can be initialized using a particular set, or "pair," of initialization parameters "m" and "n". As a result, a significant number of initialization parameters "m" and "n" (e.g., as many as 369 different pairs of initialization parameters "m" and "n") can be used to determine the initial context states for the contexts. Furthermore, since each of the initialization parameters "m" and "n" can be represented using as many as 8 bits of data, a significant amount of information (e.g., a number of data bits) may be needed to store and/or transmit the initialization parameters "m" and "n" for the purpose of determining the initial context states for the contexts. For example, as many as 5,904 bits of data may be needed to store and/or transmit 369 different pairs of initialization parameters "m" and "n", each pair comprising 16 bits of data (that is, each of the initialization parameters "m" and "n" of a particular pair comprising 8 data bits).
[0060] Additionally, as another example, the linear relationship used to determine the initial context states for the contexts, as also described above with reference to H.264/AVC and certain draft versions of HEVC, can result in the determination of initial context probabilities, as indicated by the initial context states, that are relatively less accurate than initial probabilities determined using other techniques. As an example, using the linear relationship described above can result in initial probabilities that are relatively less accurate than initial probabilities determined using a linear relationship that additionally takes into account a temporal layer associated with the data (e.g., video data) being encoded. As another example, using the linear relationship described above may result in initial probabilities that are relatively less accurate than initial probabilities determined using a non-linear relationship, a partially non-linear relationship, or a bilinear relationship. As another example, in cases where the initial probabilities of the contexts are determined directly (that is, instead of determining initial context states that indicate the initial probabilities of the contexts), the initial probabilities may be relatively less accurate (e.g., biased) compared to initial probabilities that are further adjusted based on their proximity to one or more of an upper bound and a lower bound of a probability range that includes the initial probabilities.
[0061] This description describes various techniques that can, in some cases, reduce or eliminate some of the disadvantages described above with reference to context state initialization (that is, determining the initial context states for the contexts, where the initial context states indicate the initial probabilities of the contexts) and probability initialization (that is, directly determining the initial probabilities of the contexts) of a context adaptive entropy encoding process. In particular, the techniques described here may allow context adaptive entropy encoding systems or devices (e.g., CABAC, CAVLC, SBAC, PIPE, etc.) used to encode data, such as, for example, video data, to have less complexity than other systems or devices. As an example, the techniques in this description may allow systems or devices to store and/or transmit initialization parameter index values that indicate the initialization parameters "m" and "n" described above, which are used to determine initial context states for the contexts of a context-adaptive entropy encoding process, rather than storing and/or transmitting the initialization parameters directly. In this example, the initialization parameter index values can be represented using less information (e.g., fewer bits of data) than the initialization parameters themselves, possibly resulting in a reduced amount of information stored within the systems or devices and, in some cases, transmitted from the systems or devices to other systems or devices.
[0062] Additionally, the techniques described here can allow more efficient context adaptive entropy encoding of data, such as, for example, video data, by initializing one or more contexts of a context adaptive entropy encoding process so that the initial probabilities of the contexts are more accurate relative to initial probabilities derived using other techniques. In one example, the techniques of this description may allow initialization of the contexts with relatively more accurate initial probabilities by determining initial context states, which are indicative of the initial probabilities, for the contexts based on a temporal layer associated with the data. In another example, the techniques can allow initialization of one or more contexts by determining the initial context states for the contexts using corresponding reference context states and reference quantization parameter values. In another example, in cases where the initial probabilities of the contexts are directly determined, the techniques may allow determination of the initial probabilities based on one or more probability offsets.
[0063] As an example, this description describes techniques for determining one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values, determining one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters, and initializing the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0064] For example, the development of the techniques in this description has demonstrated that, in some cases, the use of the linear relationship between the initial context states and the QP of the data (e.g., video data) being encoded, as described above with reference to H.264/AVC and HEVC, may result in relatively less accurate initial probabilities of the contexts compared to using other techniques. As a result, the initial probabilities indicated by the initial context states can be substantially different from the actual probabilities of the data being encoded. Accordingly, as will be described in more detail below, this description proposes various methods of generating, or determining, initialization values (i.e., initial context states, or initial probabilities directly) for the contexts to improve the accuracy of the so-called "probability/state estimates" (i.e., probabilities) of the contexts. Additionally, this description also proposes techniques for reducing a bit width of the linear model initialization parameters (i.e., "m" and "n") described above, such that storage for an initialization table (e.g., a table size) that includes the initialization information for the contexts (for example, the initialization parameters "m" and "n") can be reduced.
[0065] For example, in HM, the initialization parameters "m" and "n" described above are stored using 16-bit signed integers. As such, 16 bits of data are used for storing each initialization parameter "m" and "n". Given that the techniques of H.264/AVC and certain HEVC draft versions can include as many as 369 contexts, as many as 369 sets, or "pairs," of initialization parameters "m" and "n" (e.g., 369 pairs "(m,n)") can be stored within a particular encoder, thus consuming a substantially large amount of memory, or storage.
[0066] In some examples, this description describes the use of 4 data bits for each of the initialization parameters "m" and "n". To cover a sufficiently large range of slope ("m") and intersection ("n") values, instead of directly using "m" to represent the slope and "n" to represent the intersection, the techniques described propose using "m" to represent an index of the slope and "n" to represent an index of the intersection. In this example, an actual slope value can be derived from the initialization parameter index value "m" using the following relationship (that is, using a slope table): Slope = SlopeTable[m]
[0067] Similarly, an actual intersection value can be derived from the initialization parameter index value "n" using the following relationship (that is, using an intersection table): Intersection = IntersectionTable[n]
[0068] In other words, according to the techniques described, the initialization parameters "m" and "n" described above with reference to H.264/AVC and HEVC can be redefined as initialization parameter index values "m" and "n" that, in turn, indicate the initialization parameters (which can be referred to simply as the "slope" and "intersection" initialization parameters). In other examples, however, the initialization parameters "m" and "n" of H.264/AVC and HEVC may retain their original meaning, while the initialization parameter index values of the techniques described here may be referred to as initialization parameter index values "idx_m" and "idx_n". In the following examples, the initialization parameter index values are referred to as initialization parameter index values "m" and "n".
[0069] An example of a slope table and an intersection table that include the slope and intersection values, respectively, determined using the initialization parameter index values "m" and "n" is illustrated below:
SlopeTable[16] = {-46, -39, -33, -28, -21, -14, -11, -6, 0, 7, 12, 17, 21, 26, 34, 40}
IntersectionTable[16] = {-42, -25, -11, 0, 9, 20, 32, 43, 54, 63, 71, 79, 92, 104, 114, 132}
[0070] In some examples, the initialization parameter index values can be combined into an 8-bit parameter "x", where "m = x >> 4 and n = x & 15", or, vice versa, "n = x >> 4 and m = x & 15", can be used to derive "m" and "n" from "x". In this example, ">>" indicates a right shift operation and "&" indicates a bitwise AND operation.
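The following sketch puts paragraphs [0069] and [0070] together: each context stores a single 8-bit value "x" instead of two 16-bit parameters, and the shared 16-entry tables recover the actual slope and intersection. The function name is illustrative:

```cpp
#include <cstdint>

static const int SlopeTable[16] =
    { -46, -39, -33, -28, -21, -14, -11, -6, 0, 7, 12, 17, 21, 26, 34, 40 };
static const int IntersectionTable[16] =
    { -42, -25, -11, 0, 9, 20, 32, 43, 54, 63, 71, 79, 92, 104, 114, 132 };

// Unpack the combined 8-bit parameter "x" and look up the actual values.
void unpackInitParams(uint8_t x, int& slope, int& intersection) {
    int m = x >> 4;   // upper 4 bits: slope index
    int n = x & 15;   // lower 4 bits: intersection index
    slope = SlopeTable[m];
    intersection = IntersectionTable[n];
}
```

With this layout, each context consumes 8 bits instead of two 16-bit integers, so 369 contexts need 2,952 bits plus the two shared 16-entry tables.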
[0071] In other examples, it is also possible to use different numbers of bits to represent the initialization parameter index values "m" and "n". For example, 5 data bits can be used to represent "m" and 3 data bits can be used to represent "n", or vice versa.
[0072] In still other examples, instead of storing tables of slope and intersection values as described above, the slope and intersection values can be calculated from the corresponding slope or intersection index value using one or more formulas, or functions, such as the following slope and/or intersection functions: Slope = functionA(m) and/or Intersection = functionB(n)
[0073] Using the slope as an example, the slope function can be a linear function, such as, for example, the following expression: Slope = c0*m + c1, where "c0" and "c1" are parameters of the linear function.
[0074] In another example, the slope function can include only shift and add operations, such as, for example, the following expression:
Slope = (m << k) + c1
where "k" is a shift parameter and "c1" is a constant.
[0075] As an additional example, for the following tables:
SlopeTable[16] = {-45, -40, -35, -30, -25, -20, -15, -10, -5, 0, 5, 10, 15, 20, 25, 30}
IntersectionTable[16] = {-16, -8, 0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104}
the following relationships, including "x", can be used to determine the initialization parameter index values "m" and "n", which in turn can be used to determine the respective slope and intersection values using the tables:
m = x >> 4
n = x & 15
[0076] Again, in this example, instead of storing the SlopeTable and IntersectionTable as described above, the slope and intersection values can be calculated using the following expressions:
Slope = (m * 5) - 45
Intersection = (n << 3) - 16
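A short sketch verifying that, for the second table pair above, these formulas reproduce the table entries exactly, so each lookup can be replaced by a multiply (or shift) and an add; the function names are illustrative:

```cpp
#include <cassert>

int slopeFromIndex(int m)        { return m * 5 - 45; }    // matches SlopeTable
int intersectionFromIndex(int n) { return (n << 3) - 16; } // matches IntersectionTable

void checkFormulas() {
    static const int SlopeTable[16] =
        { -45, -40, -35, -30, -25, -20, -15, -10, -5, 0, 5, 10, 15, 20, 25, 30 };
    static const int IntersectionTable[16] =
        { -16, -8, 0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104 };
    for (int i = 0; i < 16; ++i) {
        assert(slopeFromIndex(i) == SlopeTable[i]);
        assert(intersectionFromIndex(i) == IntersectionTable[i]);
    }
}
```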
[0077] In other examples, the 4 bits of parameter "m" and the 4 bits of parameter "n" can be combined into an 8-bit parameter "idx", where, instead of using two separate index values for determining the slope and intersection values, a single index value (i.e., "idx") can be used, as illustrated in the following expressions:
Slope = SlopeTable[idx >> 4]
Intersection = IntersectionTable[idx & 15]
[0078] As another example, this description also describes techniques for determining one or more initial context states for initializing one or more contexts of a context adaptive entropy encoding process used to encode video data based on one or more initialization parameters and a temporal layer parameter associated with the video data, and initializing one or more contexts of the context adaptive entropy encoding process based on one or more initial context states.
[0079] As an example, as illustrated in Figure 4, and as described in more detail below, frames of video data can be encoded in a hierarchical structure. For example, as shown in Figure 4, frames "0", "4", and "8" are encoded in temporal layer "0", frames "2" and "6" are encoded in temporal layer "1", and the remaining frames (i.e., frames "1", "3", "5", and "7") are encoded in temporal layer "2". In this hierarchical structure, coding dependencies between frames of video data can be asymmetric. For example, frames of video data located in lower temporal layers can be reference frames for frames of video data located in higher temporal layers (for example, as illustrated by the arrows shown in Figure 4). However, as also illustrated in Figure 4, such dependencies in directions that are reversed from those illustrated in Figure 4 may not be allowed. As a result, the initial probabilities of the contexts of a context adaptive entropy encoding process used to encode one or more frames of video data may vary depending on a temporal layer associated with the frames of video data.
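As an illustration only (the mapping is not mandated by the text), the temporal layer of a frame in the dyadic hierarchy of Figure 4 could be derived from its position within the group of images:

```cpp
// For the three-layer hierarchy of Figure 4 (frames 0..8):
// frames 0, 4, 8 -> layer 0; frames 2, 6 -> layer 1; odd frames -> layer 2.
int temporalLayer(int frameNum) {
    if (frameNum % 4 == 0) return 0;
    if (frameNum % 2 == 0) return 1;
    return 2;
}
```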
[0080] Accordingly, this description describes techniques for adding an offset to an initial context state derived using the linear relationship described above with reference to H.264/AVC and HEVC, for example, using the following relationships:
iInitState = ((m * iQP) >> 4) + n + offset
iInitState = min(max(1, iInitState), 126)
[0081] In one example, the "offset" value may be fixed and dependent on a temporal layer of a current slice associated with the data being encoded (e.g., a frame of video data). For example, the value of "offset" can be set to "-2" for temporal layer "0", set to "2" for temporal layer "1", set to "3" for temporal layer "2", and set to "4" for temporal layer "3". In another example, "offset" might be a function of the temporal layer, for example, as illustrated in the following expression:
offset = offset_base*(temporal_layer - c0) + c1
where "offset_base" corresponds to a base offset value, "temporal_layer" corresponds to the temporal layer associated with the encoded data, "c0" and "c1" correspond to constants, and "offset" corresponds to the resulting offset used in the linear context initialization relationship described above.
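A sketch of both variants of paragraph [0081]; the fixed per-layer values (-2, 2, 3, 4) come from the text, while "offset_base", "c0", and "c1" remain free parameters, and the function names are illustrative:

```cpp
// Fixed, per-temporal-layer offset (values from paragraph [0081]).
int fixedOffset(int temporal_layer) {
    static const int offsets[4] = { -2, 2, 3, 4 };
    return offsets[temporal_layer];
}

// Offset as a linear function of the temporal layer.
int functionOffset(int temporal_layer, int offset_base, int c0, int c1) {
    return offset_base * (temporal_layer - c0) + c1;
}
```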
[0082] In another example, the "offset" can be used as illustrated in the following relationships:

[0083] In some cases, the "offset" value may also be derived from other "side" information associated with the encoded data, such as, for example, slice type, frame resolution, reference frame list size, etc.
[0084] In another example, the value of "offset" can be signaled in high-level syntax, such as, for example, a picture parameter set (PPS), a sequence parameter set (SPS), an adaptation parameter set (APS), or other syntax information associated with the data, for example, another parameter set or high-level syntax location.
[0085] In the examples above, there may be a single "offset" value for all contexts, or there may be multiple "offset" values where each value is applied to a particular subset of contexts. In one example, the contexts can be divided into three groups (i.e., G1, G2 and G3), with a single "offset" value used for each group, as illustrated by the expressions below (see also the sketch after this list):
[0086] For G1,

[0087] For G2,

[0088] For G3,

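The expressions for G1, G2 and G3 are not reproduced in this text. Purely as an assumed sketch of paragraphs [0085] through [0088], the following fragment adds a per-group offset to the linear state derivation quoted elsewhere in this description; the group offsets and the group membership scheme are illustrative assumptions.

    #include <algorithm>

    // One offset per context group (assumed values for G1, G2, G3).
    inline int InitStateForGroup(int group, const short asCtxInit[2],
                                 int iQp, int iQpRef) {
      static const int kGroupOffset[3] = {-2, 0, 2};  // illustrative only
      int c = asCtxInit[0] + asCtxInit[1] * (iQp - iQpRef) + kGroupOffset[group];
      return std::min(std::max(1, c), 126);  // keep within the allowed context states
    }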
[0089] As another example, an adjustment of the initial context state can also be achieved by adjusting a QP, e.g., iQp, in the linear relationship described above with reference to H.264/AVC and HEVC. For example, a new parameter "iQp_new" can be used to calculate the initial context state, where "iQp_new" can be different from the QP used to encode the video data of a particular frame (e.g., the frame for which the initial context state is determined), as illustrated in the following relationships:
Int c = asCtxInit[0] + asCtxInit[1]*(iQp_new - iQPreper)
iInitState = min(max(1, c), 126)
[0090] In another example, a new parameter "Qp_offset" can be used to modify a QP, for example, iQp, in the linear relationship described above with reference to H.264/AVC and HEVC, as illustrated in the following relationships:
iQp_new = iQp + Qp_offset
Int c = asCtxInit[0] + asCtxInit[1]*(iQp_new - iQPreper)
[0091] In another example, the value of "iQp_new" or "Qp_offset" may be signaled in high-level syntax, such as, for example, PPS, SPS, APS, or another parameter set or high-level syntax location.
[0092] In the example described above, there can be a single value of "iQp_new" or "Qp_offset" for all contexts, or multiple values of "iQp_new" or "Qp_offset", where each value is applied to a particular subset of contexts.
[0093] In one example, the value of "Qp_offset" and/or "iQp_new" can be fixed and dependent on the temporal layer of the current slice associated with the encoded data. For example, "Qp_offset" can be set to "-3" for temporal layer "0", set to "0" for temporal layer "1", set to "3" for temporal layer "2", and set to "6" for temporal layer "3". In another example, "Qp_offset" might be a function of the temporal layer, for example, as illustrated in the following relationship: Qp_offset = Qp_offset_base*(temporal_layer - c0) + c1, where "Qp_offset_base", "c0", and "c1" are constants serving as parameters of the relationship. Similarly, the value of "Qp_offset" and/or "iQp_new" can also be derived from other "side" information such as, for example, slice type, frame resolution, reference frame list size, etc.
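As a minimal sketch of paragraphs [0090] and [0093], and not a mandated implementation, the following fragment derives "Qp_offset" from the temporal layer using the fixed example values quoted above, and applies the assumed relation iQp_new = iQp + Qp_offset in the linear derivation; the helper names are inventions of this sketch.

    #include <algorithm>

    // Fixed per-layer QP offsets from the example in paragraph [0093] (layers 0..3).
    inline int QpOffsetForLayer(int temporalLayer) {
      static const int kQpOffsets[4] = {-3, 0, 3, 6};
      return kQpOffsets[temporalLayer];
    }

    inline int InitStateWithQpOffset(const short asCtxInit[2], int iQp,
                                     int temporalLayer, int iQpRef) {
      int iQpNew = iQp + QpOffsetForLayer(temporalLayer);  // iQp_new = iQp + Qp_offset (assumed)
      int c = asCtxInit[0] + asCtxInit[1] * (iQpNew - iQpRef);
      return std::min(std::max(1, c), 126);
    }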
[0094] As another example, the techniques described above, including techniques that refer to the initialization of one or more contexts of a context adaptive entropy encoding process used to encode data based on one or more initialization parameter index values, can be used for all contexts of the context adaptive entropy encoding process, or for only some (for example, a subset) of the contexts. For example, the techniques can be used for contexts related to certain syntax element types, e.g., syntax elements for selected color components (e.g., "luminance" or "chrominance" components), selected block sizes, selected transform sizes, motion information, or transform coefficient information.
[0095] As another example, this description describes techniques for determining a first value; in the case that the first value is within a range of values defined by a lower limit, an upper limit, and one or more offsets relative to one or more of the lower limit and the upper limit, selecting the first value; in the case that the first value is outside the range of values, selecting a second value, where the second value is different from the first value; and initializing a probability of a context of a context adaptive entropy coding process based on the selected first or second value.
[0096] As an example, when determining initial probabilities of contexts using the techniques described above with reference to versions of CABAC in which a context state for a context directly corresponds to a context probability, highly skewed distributions of the initial context probabilities can occur. For example, highly skewed probabilities can result from determined initial probabilities being close to either an upper bound or a lower bound of a probability range that includes the initial probabilities. As a result, the techniques of this description propose to introduce one or more offsets that reduce, or avoid, such skewed probabilities. For example, the proposed techniques can be performed using the following relationship: iP0 = min(max(1 + offset, c), 32767 - offset) where "offset" is an integer value. As an example, an "offset" value of "256" can avoid highly skewed initial probability values.
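As a minimal sketch of the clipping relationship in paragraph [0096], together with the complementary probability described further below in paragraph [0100], the following fragment clamps a derived probability "c" into [1 + offset, 32767 - offset]; the function names are inventions of this sketch.

    #include <algorithm>

    // iP0 = min(max(1 + offset, c), 32767 - offset), e.g., with offset = 256.
    inline int ClampInitialProbability(int c, int offset) {
      return std::min(std::max(1 + offset, c), 32767 - offset);
    }

    // Paragraph [0100]: probability of the symbol being "1" is the complement
    // with respect to the maximum probability 32768.
    inline int InitialProbabilityOfOne(int c, int offset) {
      return 32768 - ClampInitialProbability(c, offset);
    }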
[0097] As another example, the value of "offset" can be "combined" with a probability update process. In other words, in some examples, the same or a similar offset can be used for the purpose of subsequently updating the initialized probabilities of the contexts. Accordingly, this update process can also avoid "extreme" probabilities (e.g., close to 0% or 100%) for the contexts (i.e., highly skewed probabilities). As a result, both the initialization and the subsequent update of the probabilities (i.e., the probability initialization process and the probability update process described above) can impose the same limits on the extreme probabilities of the contexts, possibly thus avoiding highly skewed context probabilities. As an example, the probability update process can be performed using the following relationships:
where ALPHA0 is a constant.
[0098] In these examples, the functions, or relations, illustrated above can be referred to as exponential "decaying memory" functions. For example, an asymptotic value of a particular exponential function (i.e., a lowest or highest possible value) is governed by the "offset" value. For example, the value of "offset" can be the same for both the initialization and update processes described above.
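The update relationships themselves are not reproduced in this text. Purely as an assumed sketch consistent with paragraphs [0097] and [0098], the following fragment applies an exponentially decaying update whose asymptotes are bounded by the same "offset" used at initialization; the form of the update and the value of ALPHA0 are assumptions, not the mandated relations.

    // iP0 is the probability of the symbol being "0"; ALPHA0 is the constant
    // named in paragraph [0097] (its value is assumed here).
    inline int UpdateProbability(int iP0, bool binWasZero, int offset) {
      const int ALPHA0 = 4;  // assumed value
      if (binWasZero) {
        iP0 += (32768 - offset - iP0) >> ALPHA0;  // decay toward 32768 - offset
      } else {
        iP0 -= (iP0 - offset) >> ALPHA0;          // decay toward "offset"
      }
      return iP0;
    }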
[0099] As another example, the relationship described previously, Int c = asCtxInit[0] + asCtxInit[1]*(iQp - iQPreper), can give the probability of a symbol value (e.g., a bin) being "0", without giving the probability that the symbol value is "1".
[0100] The following initialization function, or relation, can be used to obtain the probability that the symbol value is "1", as an example: iP0 = 32768 - min(max(1 + offset, c), 32767 - offset) where the meaning of the probability is inverted. In other words, the above relationship gives a value of "1" (that is, a probability of "1", or 100%) minus the probability derived using the relationship described above: iP0 = min(max(1 + offset, c), 32767 - offset)
[0101] Additionally, in this example, "32768" may be a maximum probability, which may be equivalent to a probability of "1" (that is, a probability of 100%).
[0102] Additionally, this description also describes techniques for determining an initial context state for initializing a context of a context adaptive entropy encoding process used to encode video data, based on an initialization parameter that defines three or more reference context states, each corresponding to a respective one of three or more reference QP values, and a QP value associated with the video data, and initializing the context of the context adaptive entropy encoding process based on the initial context state.
[0103] For example, as explained above, in H.264/AVC and certain draft versions of HEVC, an initial context state for a context is determined based on a linear relationship derivation method. The method takes two initialization parameters (that is, "m" and "n"), each of which is represented using at least 8 bits of data. The linear relationship, or equation, uses these two initialization parameters to derive one of, for example, the 126 context states allowed in H.264/AVC as the initial context state for the context.
[0104] The development of the techniques of this description has demonstrated that nonlinear models, or relationships, can be more efficient than linear relationships, such as the linear relationship described above with reference to H.264/AVC and HEVC, to initialize contexts. In particular, non-linear relationships can result in relatively more accurate initial probabilities of contexts compared to initial probabilities determined using linear relationships. Accordingly, this description proposes the use of a non-linear, or partially non-linear, method or relationship to determine an initial context state for a context, for example, using a limited number of data bits. In some examples, the techniques propose to use the same number of data bits, or fewer data bits, compared to the number of data bits used in H.264/AVC and HEVC techniques described above, that is, 16 data bits or less.
[0105] As an example, 16 bits of data can be used to determine an initial context state for a context. These 16 bits can be divided into three parts. A first part can include 6 bits, providing the context state at a given QP value (for example, QP = "26"). It should be noted that this context state value is quantized, so that 2 contiguous context states share the same quantized context state (for example, since a bit depth of 6 bits provides 64 indices with which to signal one of the 126 context states). A second part can include 5 bits, providing the context state at a second QP value (for example, the first QP minus "8"). Again, this can be a quantized context state, as a bit depth of 5 bits provides 32 indices with which to signal one of the 126 context states. In this example, 4 contiguous context states share the same quantized context state. Finally, a third part can include the remaining 5 bits, indicating the context state at a third QP value (e.g., the first QP plus "8").
[0106] As a result of this, the initialization parameter of this example can include 16 bits of data, for example, InitParam = [x1 x2 x3]. In this example, "x3" can be obtained using the operation "x3 = (InitParam & 31)". Similarly, "x2" can be obtained using the operation "x2 = ((InitParam >> 5) & 31)", and "x1" can be obtained using the operation "x1 = (InitParam >> 10)". Thus, the parameter "InitParam" contains the parameters necessary for the derivation of the initial context state. Again, in this example, ">>" indicates a right shift operation, and "&" indicates a logical AND operation.
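For illustration only, the following fragment unpacks the 16-bit initialization parameter into the three quantized context states using the shift and logical AND operations given in paragraph [0106], with the bit widths of paragraph [0105]; the function name is an invention of this sketch.

    #include <cstdint>

    inline void UnpackInitParam(uint16_t initParam, int& x1, int& x2, int& x3) {
      x1 = initParam >> 10;        // 6 bits: quantized state at QP = 26
      x2 = (initParam >> 5) & 31;  // 5 bits: quantized state at QP = 18
      x3 = initParam & 31;         // 5 bits: quantized state at QP = 34
    }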
[0107] These three values (i.e., "x1", "x2" and "x3"), using a total of 16 bits of data, give three points of quantized context states (e.g., "pairs" of values, each pair including one of "x1", "x2" and "x3" and a corresponding QP value) that can be used for interpolation of the context state for the remaining QP values. In other words, the reference context state values "x1", "x2" and "x3", and the corresponding reference QP values, can be used to determine an initial context state for a context by interpolating between the reference values, using an actual QP associated with the data being encoded to determine the initial context state for the context.
[0108] As an example, the determination described above can be performed using a two-piece linear approximation (i.e., two joined linear segments). For example, the following relationships can be used:

[0109] In this example, "x1", "x2", and "x3" contain the context state values at three different QPs (i.e., "26", "18", and "34", respectively). Additionally, if the variables "x" (that is, "x1", "x2" and "x3") do not contain values at the correct bit depths, as explained above, performing some left bit-shift operations may be necessary.
[0110] Additionally, a division by "8" can be performed as a right bit-shift operation. In such cases, the techniques of this example can be implemented using the following expressions:

[0111] The above expressions can be based on "x1" having a precision of 6 bits, and "x2" and "x3" each having a precision of 5 bits. In some examples, an addition of "4" can also be included before the right shift operation, for the purpose of rounding to the nearest integer when dividing by "8" (e.g., instead of simply rounding down to the nearest lower integer). Accordingly, slight modifications to these expressions can be used if the values are defined to support other bit depths.
[0112] Using the techniques described above, the two-piece linear interpolation to determine the initial context state can be performed without multiplications or divisions. This direct implementation is possible since the difference between the QP values employed is a power of "2".
[0113] In other examples, other QP values can also be used. Additionally, another bit depth distribution for each of the three quantized context state values can also be used. Additionally, more than 3 points (e.g., 4 or more points) can be used, so that the function is multilinear (i.e., several linear segments joined together).
[0114] In other additional examples, the three points can be used to fit a parabola (e.g., a second-order polynomial) to determine the context state at the other QPs. Similarly, in other examples, four points can be used to fit a third-order polynomial.
[0115] Additionally, a clipping operation, for example, performed using the expression illustrated below, can also be included following the non-linear context state derivation process described above, in order to avoid disallowed context state values (for example, context state values that require more than 7 bits of data to represent each value): iInitState = min(max(1, iInitState), 126)
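The exact interpolation expressions of paragraphs [0108] and [0110] are not reproduced in this text. Purely as an assumed sketch combining paragraphs [0107] through [0115], the following fragment dequantizes the three states (left shifts restoring the bit depths of paragraph [0111]), interpolates linearly on either side of QP 26, rounds with "+ 4" before the right shift by 3 (the division by 8), and applies the clipping of paragraph [0115]; it assumes an arithmetic right shift for negative intermediate values.

    #include <algorithm>

    inline int DeriveInitState(int x1, int x2, int x3, int iQp) {
      const int s26 = x1 << 1;  // dequantized state at QP 26 (stored with 6 bits)
      const int s18 = x2 << 2;  // dequantized state at QP 18 (stored with 5 bits)
      const int s34 = x3 << 2;  // dequantized state at QP 34 (stored with 5 bits)
      int iInitState;
      if (iQp <= 26) {
        iInitState = s18 + (((s26 - s18) * (iQp - 18) + 4) >> 3);
      } else {
        iInitState = s26 + (((s34 - s26) * (iQp - 26) + 4) >> 3);
      }
      return std::min(std::max(1, iInitState), 126);  // clipping from paragraph [0115]
    }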
[0116] According to some examples consistent with the techniques of this description, the video encoder 20 of the source device 12 may be configured to encode data, such as one or more blocks of video data (for example, one or more TUs of a CU), and the video decoder 30 of the target device 14 may be configured to receive the encoded data, e.g., the one or more encoded blocks of video data, from the video encoder 20. In other examples, as described above, the techniques of this description can be applicable to using context adaptive entropy coding to encode any of a wide variety of data, for example, in addition to video data. As such, in some examples consistent with the described techniques, video encoder 20 and/or video decoder 30 may be encoding and decoding devices other than video encoding and decoding devices, as illustrated in this example.
[0117] As an example, the video encoder 20 and/or the video decoder 30 can be configured for context adaptive entropy encoding. In this example, one or more of video encoder 20 and video decoder 30 may include an encoder (e.g., entropy encoding unit 56 or entropy decoding unit 80) configured to determine one or more initialization parameters for a context adaptive entropy encoding process (for example, a CABAC, SBAC, PIPE, or other process) based on one or more initialization parameter index values. For example, as will be described in more detail below, video encoder 20 and/or video decoder 30 can be configured to determine the one or more initialization parameters by mapping the one or more initialization parameter index values to the one or more initialization parameters in one or more tables (that is, identifying the one or more initialization parameters in the one or more tables based on the one or more initialization parameter index values), or by calculating the one or more initialization parameters using the one or more initialization parameter index values and one or more formulas.
[0118] The video encoder 20 and/or the video decoder 30 can be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. For example, video encoder 20 and/or video decoder 30 can be configured to determine the one or more initial context states using the one or more initialization parameters and a relationship, such as the linear relationship described above with reference to H.264/AVC and certain draft versions of HEVC. Additionally, the video encoder 20 and/or the video decoder 30 can be further configured to initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states. For example, video encoder 20 and/or video decoder 30 can be configured to initialize each of the one or more contexts by designating one of the corresponding one or more initial context states as the current context state of the respective context.
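As a minimal end-to-end sketch of paragraphs [0117] and [0118], and not a mandated implementation, the following fragment takes already-determined slope and intersection parameters "m" and "n", derives a state, and designates it as the current state of a context; the "Context" type and the assumed mapping of "n" and "m" onto the roles of asCtxInit[0] and asCtxInit[1] in the quoted linear relation are inventions of this sketch.

    #include <algorithm>

    struct Context { int state; };  // placeholder context representation

    inline void InitializeContext(Context& ctx, int m, int n, int iQp, int iQpRef) {
      int c = n + m * (iQp - iQpRef);               // assumed linear state derivation
      ctx.state = std::min(std::max(1, c), 126);    // designate as current context state
    }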
[0119] Accordingly, the techniques of this description may allow the video encoder 20 and/or the video decoder 30 to have relatively lower complexity when used to encode data, such as, for example, the video data described above, compared to other systems used to encode similar data. In particular, the techniques can reduce an amount of information that is stored within and/or transmitted to or from video encoder 20 and/or video decoder 30. Additionally, as described in greater detail below with reference to Figures 6 to 8, the techniques of this description may also allow the video encoder 20 and/or the video decoder 30 to encode the data more efficiently relative to other techniques. As such, there may be a relative reduction in complexity for the video encoder 20 and/or video decoder 30 used to encode the data, and a relative bit savings for an encoded bit stream that includes the encoded data, when using the techniques of this description.
[0120] Video encoder 20 and video decoder 30 may each be implemented as any of a wide variety of suitable encoder or decoder circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic circuitry, software, hardware, firmware, or any combination thereof. Each of the video encoder 20 and video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined video encoder/decoder (e.g., a video "CODEC"). An apparatus including video encoder 20 and/or video decoder 30 may comprise an integrated circuit (IC), a microprocessor, and/or a wireless communication device, such as a cell phone.
[0121] Figure 2 is a block diagram illustrating an example of a video encoder that can implement the techniques for context state and probability initialization for context adaptive entropy encoding, consistent with the techniques of this description. Video encoder 20 can perform intracoding and intercoding of video blocks within video slices. Intracoding relies on spatial prediction to reduce or remove spatial redundancy in video within a given video frame or picture. Intercoding relies on temporal prediction to reduce or remove temporal redundancy in video within adjacent frames or pictures of a video sequence. Intramode (I mode) can refer to any of several spatially based compression modes. Intermodes, such as unidirectional prediction (P mode) or bidirectional prediction (B mode), can refer to any of several time-based compression modes.
[0122] In the example of Figure 2, the video encoder 20 includes a partition unit 35, a prediction module 41, a reference picture memory 64, an adder 50, a transform module 52, a quantization unit 54, and an entropy coding unit 56. The prediction module 41 includes the motion estimation unit 42, the motion compensation unit 44, and an intraprediction module 46. For video block reconstruction, the video encoder 20 also includes an inverse quantization unit 58, an inverse transform module 60, and an adder 62. A deblocking filter (not shown in Figure 2) may also be included to filter block boundaries to remove blockiness artifacts from the reconstructed video. If desired, the deblocking filter typically filters the output of the adder 62. Additional loop filters (in-loop or post-loop) can also be used in addition to the deblocking filter.
[0123] As illustrated in Figure 2, the video encoder 20 receives video data, and the partition unit 35 divides the data into video blocks. This partitioning can also include partitioning into slices, tiles, or other larger units, in addition to video block partitioning, for example, according to a quadtree structure of LCUs and CUs. Video encoder 20 generally illustrates the components that encode video blocks within a video slice to be encoded. The slice can be divided into multiple video blocks (and possibly into sets of video blocks referred to as tiles). The prediction module 41 can select one of a plurality of possible coding modes, such as one of a plurality of intracoding modes or one of a plurality of intercoding modes, for the current video block based on error results (e.g., coding rate and level of distortion). The prediction module 41 can provide the resulting intra- or intercoded block to the adder 50 to generate residual block data, and to the adder 62 to reconstruct the encoded block for use as a reference picture.
[0124] The intraprediction module 46 within the prediction module 41 can perform intrapredictive coding of the current video block with respect to one or more neighboring blocks in the same frame or slice as the current block to be encoded, to provide spatial compression. The motion estimation unit 42 and the motion compensation unit 44 within the prediction module 41 perform interpredictive coding of the current video block with respect to one or more prediction blocks in one or more reference pictures, to provide temporal compression.
[0125] The motion estimation unit 42 can be configured to determine the interprediction mode for a video slice according to a predetermined pattern for a video sequence. The predetermined pattern can designate video slices in the sequence as P slices, B slices, or GPB slices. The motion estimation unit 42 and the motion compensation unit 44 can be highly integrated, but are illustrated separately for conceptual purposes. Motion estimation, performed by the motion estimation unit 42, is the process of generating motion vectors, which estimate motion for video blocks. A motion vector, for example, can indicate the displacement of a PU of a video block within a current video frame or picture relative to a prediction block within a reference picture.
[0126] A prediction block is a block that is considered to closely match the PU of the video block to be encoded in terms of pixel difference, which can be determined by sum of absolute difference (SAD), sum of square difference (SSD), or other difference metrics. In some examples, the video encoder 20 can calculate values for sub-integer pixel positions of reference pictures stored in the reference picture memory 64. For example, the video encoder 20 can interpolate values of one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference picture. Therefore, the motion estimation unit 42 can perform a motion search with respect to full pixel positions and fractional pixel positions, and output a motion vector with fractional pixel precision.
[0127] The motion estimation unit 42 calculates a motion vector for a PU of a video block in an intercoded slice by comparing the position of the PU with the position of a prediction block of a reference picture. The reference picture can be selected from a first reference picture list (List 0) or a second reference picture list (List 1), each of which identifies one or more reference pictures stored in the reference picture memory 64. The motion estimation unit 42 sends the calculated motion vector to the entropy coding unit 56 and the motion compensation unit 44.
[0128] Motion compensation, performed by the motion compensation unit 44, may involve fetching or generating the prediction block based on the motion vector determined by motion estimation, possibly performing interpolations with sub-pixel precision. Upon receiving the motion vector for the PU of the current video block, the motion compensation unit 44 can locate the prediction block to which the motion vector points in one of the reference picture lists. The video encoder 20 forms a residual video block by subtracting the pixel values of the prediction block from the pixel values of the current video block being encoded, forming pixel difference values. The pixel difference values form residual data for the block, and can include both luminance and chrominance difference components. The adder 50 represents the component or components that perform this subtraction operation. The motion compensation unit 44 can also generate syntax elements associated with the video blocks and the video slice for use by the video decoder 30 in decoding the video blocks of the video slice.
[0129] The intraprediction module 46 can intrapredict a current block, as an alternative to the interprediction performed by the motion estimation unit 42 and the motion compensation unit 44, as described above. In particular, the intraprediction module 46 can determine an intraprediction mode to use to encode a current block. In some examples, the intraprediction module 46 may encode a current block using various intraprediction modes, for example, during separate encoding passes, and the intraprediction module 46 (or mode selection unit 40, in some examples) may select an appropriate intraprediction mode to use from the tested modes. For example, the intraprediction module 46 can calculate rate-distortion values using a rate-distortion analysis for the various tested intraprediction modes, and select the intraprediction mode having the best rate-distortion characteristics among the tested modes. Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bit rate (that is, a number of bits) used to produce the encoded block. The intraprediction module 46 can calculate ratios from the distortions and rates for the various encoded blocks to determine which intraprediction mode exhibits the best rate-distortion value for the block.
[0130] In any case, after selecting an intraprediction mode for a block, the intraprediction module 46 can provide information indicative of the selected intraprediction mode for the block to the entropy encoding unit 56. The entropy encoding unit 56 can encode the information indicating the selected intraprediction mode into a transmitted bit stream.
[0131] After the prediction module 41 generates the prediction block for the current video block through interprediction or intraprediction, the video encoder 20 forms a residual video block by subtracting the prediction block from the current video block. The residual video data in the residual block can be included in one or more TUs and applied to the transform module 52. The transform module 52 transforms the residual video data into residual transform coefficients using a transform, such as a discrete cosine transform (DCT) or a conceptually similar transform. The transform module 52 can convert the residual video data from a pixel domain to a transform domain, such as a frequency domain.
[0132] The transform module 52 can send the resulting transform coefficients to the quantization unit 54. The quantization unit 54 quantizes the transform coefficients to further reduce the bit rate. The quantization process can reduce the bit depth associated with some or all of the coefficients. The degree of quantization can be modified by adjusting a quantization parameter. In some examples, the quantization unit 54 can then perform a scan of the matrix including the quantized transform coefficients. Alternatively, the entropy encoding unit 56 can perform the scan.
[0133] Following the quantization, the entropy coding unit 56 entropy encodes the quantized transform coefficients. For example, the entropy encoding unit 56 can perform CAVLC, CABAC, SBAC, PIPE, or any other entropy encoding methodology or technique. Following the entropy coding by the entropy coding unit 56, the encoded bit stream can be transmitted to the video decoder 30, or archived for later transmission or retrieval by the video decoder 30. The entropy coding unit 56 can also entropy encode the motion vectors and the other syntax elements for the current video slice being encoded.
[0134] The inverse quantization unit 58 and the inverse transform module 60 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain for later use as a reference block of a reference picture. The motion compensation unit 44 can calculate a reference block by adding the residual block to a prediction block of one of the reference pictures within one of the reference picture lists. The motion compensation unit 44 may also apply one or more interpolation filters to the reconstructed residual block to calculate sub-integer pixel values for use in motion estimation. The adder 62 adds the reconstructed residual block to the motion-compensated prediction block produced by the motion compensation unit 44 to produce a reference block for storage in the reference picture memory 64. The reference block can be used by the motion estimation unit 42 and the motion compensation unit 44 as a reference block for interpredicting a block in a subsequent video frame or picture.
[0135] As an example, an apparatus including the entropy encoding unit 56 (for example, the video encoder 20 of the source device 12 of Figure 1) can be configured for context adaptive entropy encoding. For example, the apparatus can be configured to perform any of the CABAC, SBAC, or PIPE processes described above, as well as any other context adaptive entropy encoding processes. In this example, the entropy encoding unit 56 can be configured to determine one or more initialization parameters (for example, one or more of the "m" and "n" parameters described above with reference to Figures 1 and 2) for a context adaptive entropy encoding process (e.g., a CABAC process) based on one or more initialization parameter index values (e.g., one or more of the "idx_m" and "idx_n" values also described above with reference to Figure 1). Additionally, the entropy encoding unit 56 may be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. The entropy encoding unit 56 may further be configured to initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0136] In some examples, the one or more initialization parameters may be included in one or more tables. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 can be configured to map the one or more initialization parameter index values to the one or more initialization parameters in the one or more tables. In other words, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 can be configured to identify the one or more initialization parameters in the one or more tables based on the one or more initialization parameter index values.
[0137] Alternatively, in other examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 can be configured to calculate the one or more initialization parameters using the one or more initialization parameter index values and one or more formulas. In these examples, each of the one or more formulas can be implemented using only one or more operations, each selected from a group consisting of a bit-shift operation, an addition operation, a subtraction operation, a multiplication operation, and a division operation.
[0138] In further examples, the one or more initialization parameters may include one or more slope values and one or more intersection values, and the one or more initialization parameter index values may include one or more slope index values and one or more intersection index values. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy coding unit 56 can be configured to determine the one or more slope values based on the one or more slope index values, and to determine the one or more intersection values based on the one or more intersection index values.
[0139] Alternatively, in some examples, the one or more initialization parameters may include one or more slope values and one or more intersection values. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 can be configured to determine at least one of the one or more slope values and at least one of the one or more intersection values based on a single one of the one or more initialization parameter index values.
[0140] In the examples described above, the single one of the one or more initialization parameter index values may include one or more slope index value components and one or more intersection index value components. In these examples, to determine the at least one of the one or more slope values and the at least one of the one or more intersection values based on the single one of the one or more initialization parameter index values, the entropy coding unit 56 can be configured to determine the at least one of the one or more slope values based on the one or more slope index value components, and to determine the at least one of the one or more intersection values based on the one or more intersection index value components.
[0141] Additionally, in these examples, to determine the at least one of the one or more slope values based on the one or more slope index value components, and to determine the at least one of the one or more intersection values based on the one or more intersection index value components, the entropy coding unit 56 can be configured to determine one of the one or more slope index value components and the one or more intersection index value components of the single one of the one or more initialization parameter index values using one or more bit-shift operations, and to determine another of the one or more slope index value components and the one or more intersection index value components of the single one of the one or more initialization parameter index values using one or more logical AND operations.
[0142] In other additional examples, the single one of the one or more initialization parameter index values may include a predetermined number of bits. In these examples, each of the one or more slope index value components and the one or more intersection index value components may include a respective subset of the predetermined number of bits. Furthermore, in these examples, each of the subsets that correspond to the one or more slope index value components may include a different number of the predetermined number of bits than each of the subsets that correspond to the one or more intersection index value components.
[0143] Additionally, in some examples, the one or more contexts of the context adaptive entropy encoding process may include a subset of the contexts of the context adaptive entropy encoding process. For example, the subset may correspond to a syntax type associated with the video data encoded using the context adaptive entropy encoding process. In some examples, the syntax type may include one or more of a component type, a block size, a transform size, a prediction mode, motion information, and transform coefficient information associated with the video data.
[0144] In other examples, the apparatus (for example, the video encoder 20 of the source device 12 of Figure 1) that includes the entropy encoding unit 56 may be configured as a video encoder. In these examples, the video encoder can be configured to encode one or more syntax elements associated with a block of video data based on the one or more initialized contexts of the context adaptive entropy encoding process, and to output the one or more encoded syntax elements in a bit stream. In some examples, as previously described, the apparatus (e.g., the video encoder 20 of the source device 12 of Figure 1) that includes the entropy encoding unit 56 may include at least one of an integrated circuit, a microprocessor, and a wireless communication device that includes the entropy encoding unit 56.
[0145] As described in more detail below with reference to Figures 5 to 8, in other examples, the video encoder 20, or various components thereof (for example, the entropy coding unit 56), can be configured to perform other techniques that refer to context state and probability initialization for context adaptive entropy encoding. For example, the techniques described below with reference to Figure 5, which are similar to the techniques of this example, and the additional techniques described below with reference to Figures 6 to 8, can be performed by the video encoder 20, or any components thereof, individually, or in any combination. As an example, one or more of the additional techniques can be performed in combination with the techniques of this example (and the example of Figure 5) that refer to initializing contexts of a context adaptive entropy encoding process used to encode data based on one or more initialization parameter index values. In particular, the techniques described below with reference to Figures 6 to 8 refer to initializing one or more contexts of a context adaptive entropy encoding process by determining initial context states for the contexts that indicate the initial probabilities of the contexts, or by directly determining the initial probabilities of the contexts, such that the initial probabilities are more accurate relative to initial probabilities determined using other techniques.
[0146] Accordingly, as illustrated by the examples above, and as will be illustrated by the examples of Figures 5 to 8, the techniques of this description may allow the video encoder 20, or any components thereof, to encode various types of data, such as, for example, the video data described above, more efficiently than when using other methods. As an example, as illustrated by the examples above (and as will be illustrated by the example of Figure 5), the techniques can allow the video encoder 20 to be less complex than other systems when encoding data using the context adaptive entropy encoding process. For example, the techniques can reduce an amount of information (e.g., a number of bits of data) stored within the video encoder 20 and/or transmitted from the video encoder 20 to a video decoder (e.g., the video decoder 30) for the purpose of initializing one or more contexts of the context adaptive entropy encoding process. In particular, the amount of information can be reduced by storing and/or transmitting initialization parameter index values that indicate the initialization parameters used to initialize the contexts, rather than storing and/or transmitting the initialization parameters directly.
[0147] In some examples, the amount of stored information can be reduced by defining the initialization parameter index values so that the initialization parameter index values are represented using less information (e.g., fewer bits of data) than the initialization parameters. As a result, the initialization parameter index values may correspond to only a subset of the initialization parameters. In this way, fewer than all of the initialization parameters, as indicated by the initialization parameter index values, can be used to initialize the contexts. For example, some of the contexts can be initialized using common initialization parameters. Nonetheless, any adverse effects associated with using the subset of initialization parameters rather than all of the initialization parameters (for example, the initial probabilities of the contexts being relatively less accurate compared to initial probabilities determined using all of the initialization parameters, where each context is initialized using a single one of the one or more initialization parameters) can be outweighed by the reduced amount of information stored within the video encoder 20 and, in some cases, transmitted from the video encoder 20 to the video decoder, as described above.
[0148] Thus, in some examples, the initialization parameter index values indicating the subset of initialization parameters, and the subset of initialization parameters itself, can be stored within the video encoder 20, possibly thereby reducing the amount of information stored within the video encoder 20. For example, in some cases, since the initialization parameter index values can be represented using less information than the initialization parameters, and since the initialization parameter index values can correspond to only a subset of the initialization parameters, a total amount of information (for example, a total number of bits of data) used to store the initialization parameter index values and the subset of initialization parameters within the video encoder 20 can be reduced relative to an amount of information that would be necessary to store all of the initialization parameters within the video encoder 20. Additionally, in some cases, the initialization parameter index values, rather than the initialization parameters, can be transmitted from the video encoder 20 to the video decoder, thus reducing a total amount of information transmitted from the video encoder 20 to the video decoder.
[0149] As another example, as will be illustrated by the examples of Figures 6 to 8, the techniques of this description can improve data compression when the video encoder 20 is configured to encode the data using the context adaptive entropy encoding process. For example, the techniques can improve data compression by allowing the video encoder 20, or any components thereof, to initialize one or more contexts of the context adaptive entropy encoding process so that the one or more contexts include initial probabilities that are relatively more accurate compared to initial probabilities determined using other context initialization techniques. Additionally, in some examples, the techniques can further improve data compression by allowing the video encoder 20, or any components thereof, to subsequently update the context probabilities so that the updated probabilities are more accurate compared to probabilities updated using other context probability update techniques.
[0150] Accordingly, there can be significant bit savings for an encoded bit stream that includes the encoded data and, in some cases, the initialization parameter index values transmitted from the video encoder 20 to the video decoder (e.g., the video decoder 30), and a relative reduction in the complexity of the video encoder 20 used to encode the data, when using the techniques of this description.
[0151] Thus, the video encoder 20 represents an example of an apparatus for context adaptive entropy encoding, the apparatus comprising an encoder configured to determine one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values, determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters, and initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0152] Figure 3 is a block diagram illustrating an example of a video decoder that can implement the techniques for context state and probability initialization for context adaptive entropy encoding, consistent with the techniques of this description. In the example of Figure 3, the video decoder 30 includes an entropy decoding unit 80, a prediction module 81, an inverse quantization unit 86, an inverse transform module 88, an adder 90, and a reference picture memory 92. The prediction module 81 includes the motion compensation unit 82 and the intraprediction module 84. The video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to the video encoder 20 of Figure 2.
[0153] During the decoding process, the video decoder 30 receives an encoded video bit stream representing the video blocks of an encoded video slice and associated syntax elements from the video encoder 20. The entropy decoding unit 80 of the video decoder 30 entropy decodes the bit stream to generate quantized coefficients, motion vectors, and other syntax elements. The entropy decoding unit 80 sends the motion vectors and other syntax elements to the prediction module 81. The video decoder 30 can receive the syntax elements at the video slice level and/or the video block level.
[0154] When the video slice is encoded as an intracoded (I) slice, the intraprediction module 84 of the prediction module 81 can generate prediction data for a video block of the current video slice based on a signaled intraprediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is encoded as an intercoded (i.e., B, P, or GPB) slice, the motion compensation unit 82 of the prediction module 81 produces prediction blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 80. The prediction blocks can be produced from one of the reference pictures within one of the reference picture lists. The video decoder 30 can construct the reference frame lists, List 0 and List 1, using standard construction techniques based on the reference pictures stored in the reference picture memory 92.
[0155] The motion compensation unit 82 determines the prediction information for a video block of the current video slice by analyzing the motion vectors and other syntax elements, and uses the prediction information to produce the prediction blocks for the current video block being decoded. For example, the motion compensation unit 82 uses some of the received syntax elements to determine a prediction mode (e.g., intra- or interprediction) used to encode the video blocks of the video slice, an interprediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists for the slice, motion vectors for each intercoded video block of the slice, the interprediction status for each intercoded video block of the slice, and other information to decode the video blocks in the current video slice.
[0156] The motion compensation unit 82 can also perform interpolation based on the interpolation filters. The motion compensation unit 82 can use interpolation filters as used by the video encoder 20 when encoding the video blocks to calculate the interpolated values for the subinteger pixels of the reference blocks. In that case, the motion compensation unit 82 can determine the interpolation filters used by the video encoder 20 from the received syntax elements and use the interpolation filters to produce the prediction blocks.
[0157] The inverse quantization unit 86 inversely quantizes, that is, dequantizes, the quantized transformation coefficients provided in the bit stream and decoded by the entropy decoding unit 80. The inverse quantization process may include the use of a quantization parameter calculated by video encoder 20 for each video block in the video slice to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied. The inverse transform module 88 applies an inverse transform, for example, an inverse DCT, an inverse integer transform, or an inverse transform process conceptually similar to the transform coefficients, in order to produce residual blocks in the pixel domain.
[0158] After the motion compensation unit 82 generates the prediction block for the current video block based on the motion vectors and other syntax elements, the video decoder 30 forms a decoded video block by summing the residual blocks from the inverse transform module 88 with the corresponding prediction blocks generated by the motion compensation unit 82. The adder 90 represents the component or components that perform this summing operation. If desired, a deblocking filter can also be applied to filter the decoded blocks in order to remove blockiness artifacts. Other loop filters (in the coding loop or after the coding loop) can also be used to smooth pixel transitions, or otherwise improve the video quality. The decoded video blocks in a given frame or picture are then stored in the reference picture memory 92, which stores the reference pictures used for subsequent motion compensation. The reference picture memory 92 also stores the decoded video for later presentation on a display device, such as the display device 28 of Figure 1.
[0159] As an example, an apparatus including the entropy decoding unit 80 (for example, the video decoder 30 of the target device 14 of Figure 1) can be configured for context adaptive entropy encoding. For example, the apparatus can be configured to perform any of the CABAC, SBAC, or PIPE processes described above. In this example, the entropy decoding unit 80 can be configured to determine one or more initialization parameters (for example, one or more of the "m" and "n" parameters described above with reference to Figures 1 and 2) for a context adaptive entropy encoding process (e.g., a CABAC process) based on one or more initialization parameter index values (e.g., one or more of the "idx_m" and "idx_n" values also described above with reference to Figure 1). Additionally, the entropy decoding unit 80 can be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters. The entropy decoding unit 80 can further be configured to initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0160] In some examples, the one or more initialization parameters may be included in one or more tables. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy decoding unit 80 can be configured to map the one or more initialization parameter index values to the one or more initialization parameters in the one or more tables. In other words, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy decoding unit 80 can be configured to identify the one or more initialization parameters within the one or more tables based on the one or more initialization parameter index values.
[0161] Alternatively, in other examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy decoding unit 80 can be configured to calculate the one or more initialization parameters using the one or more initialization parameter index values and one or more formulas. In these examples, each of the one or more formulas can be implemented using only one or more operations, each selected from a group consisting of a bit-shift operation, an addition operation, a subtraction operation, a multiplication operation, and a division operation.
[0162] In other examples, the one or more initialization parameters may include one or more slope values and one or more intersection values, and the one or more initialization parameter index values may include one or more slope index values and one or more intersection index values. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy decoding unit 80 can be configured to determine the one or more slope values based on the one or more slope index values, and to determine the one or more intersection values based on the one or more intersection index values.
[0163] Alternatively, in some examples, the one or more initialization parameters may include one or more slope values and one or more intersection values. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy decoding unit 80 can be configured to determine at least one of the one or more slope values and at least one of the one or more intersection values based on a single one of the one or more initialization parameter index values.
[0164] In the examples described above, the single one of the one or more initialization parameter index values may include one or more slope index value components and one or more intersection index value components. In these examples, to determine the at least one of the one or more slope values and the at least one of the one or more intersection values based on the single one of the one or more initialization parameter index values, the entropy decoding unit 80 can be configured to determine the at least one of the one or more slope values based on the one or more slope index value components, and to determine the at least one of the one or more intersection values based on the one or more intersection index value components.
[0165] Additionally, in these examples, to determine the at least one of the one or more slope values based on the one or more slope index value components, and to determine the at least one of the one or more intersection values based on the one or more intersection index value components, the entropy decoding unit 80 can be configured to determine one of the one or more slope index value components and the one or more intersection index value components of the single one of the one or more initialization parameter index values using one or more bit-shift operations, and to determine another of the one or more slope index value components and the one or more intersection index value components of the single one of the one or more initialization parameter index values using one or more logical AND operations.
[0166] In other examples, the single one of the one or more initialization parameter index values may include a predetermined number of bits. In these examples, each of the one or more slope index value components and the one or more intersection index value components may include a respective subset of the predetermined number of bits. Furthermore, in these examples, each of the subsets that correspond to the one or more slope index value components may include a different number of the predetermined number of bits than the subsets that correspond to the one or more intersection index value components.
[0167] Additionally, in some examples, the one or more contexts of the context adaptive entropy encoding process may include a subset of the contexts of the context adaptive entropy encoding process. For example, the subset may correspond to a syntax type associated with the video data encoded using the context adaptive entropy encoding process. In some examples, the syntax type may include one or more of a component type, a block size, a transform size, a prediction mode, motion information, and transform coefficient information associated with the video data.
[0168] In other examples, the apparatus (for example, the video decoder 30 of the target device 14 of Figure 1) that includes the entropy decoding unit 80 can be configured as a video decoder. In these examples, the video decoder can be configured to receive one or more encoded syntax elements associated with a block of video data in a bit stream, and to decode the one or more encoded syntax elements based on the one or more initialized contexts of the context adaptive entropy encoding process.
[0169] In some examples, as previously described, the apparatus (for example, the video decoder 30 of the target device 14 of Figure 1) that includes the entropy decoding unit 80 may include at least one of an integrated circuit, a microprocessor, and a wireless communication device including the entropy decoding unit 80.
[0170] In a similar way as described above with reference to Figure 2, and as described below with reference to Figures 5 to 8, in other examples, the video decoder 30, or various components thereof (e.g., the entropy decoding unit 80), can be configured to perform other techniques that refer to context state and probability initialization for context adaptive entropy encoding. For example, the techniques described below with reference to Figure 5, which are similar to the techniques in this example, and the additional techniques described below with reference to Figures 6 to 8, can be performed by the video decoder 30, or any components thereof, individually or in any combination. As an example, one or more of the additional techniques can be performed in combination with the techniques in this example (and the example of Figure 5), which refer to initializing contexts of a context adaptive entropy encoding process used to encode the data based on one or more initialization parameter index values. In particular, the techniques described below with reference to Figures 6 to 8 refer to initializing one or more contexts of a context adaptive entropy encoding process, either by determining initial context states for the contexts that indicate the initial probabilities of the contexts, or by directly determining the initial probabilities of the contexts, so that the initial probabilities are more accurate relative to initial probabilities determined using other techniques.
[0171] Accordingly, as illustrated by the examples above, and as will be illustrated by the examples of Figures 5 to 8, the techniques of this description may allow the video decoder 30, or any components thereof, to decode various types of encoded data, such as, for example, the encoded video data described above, more efficiently than when using other methods. As an example, as illustrated by the examples above (and as will be illustrated by the example of Figure 5), the techniques can allow the video decoder 30 to have less complexity relative to other systems when decoding encoded data using the context adaptive entropy encoding process. For example, the techniques can reduce an amount of information stored within the video decoder 30 and/or transmitted from a video encoder (e.g., video encoder 20) to the video decoder 30 for purposes of initializing one or more contexts of the context adaptive entropy encoding process. In particular, the amount of information can be reduced by storing and/or transmitting initialization parameter index values that indicate the initialization parameters used to initialize the contexts, rather than storing and/or transmitting the initialization parameters directly.
[0172] In a manner similar to that described above with reference to Figure 2, in some examples, the amount of information can be reduced by defining the initialization parameter index values so that the initialization parameter index values are represented using less information than the initialization parameters. As a result, the initialization parameter index values may correspond to only a subset of the initialization parameters. In this way, fewer than all of the initialization parameters, as indicated by the initialization parameter index values, can be used to initialize the contexts. For example, some of the contexts can be initialized using common initialization parameters. Notwithstanding, any adverse effects associated with using the subset of initialization parameters rather than all of the initialization parameters (for example, the initial probabilities of the contexts being relatively less accurate compared to initial probabilities determined using all of the initialization parameters, where each context is initialized using a unique one of the one or more initialization parameters) can be outweighed by the reduced amount of information stored within the video decoder 30 and, in some cases, transmitted from the video encoder to the video decoder 30, as described above.
[0173] Thus, in some examples, the initialization parameter index values indicating the subset of initialization parameters, and the subset of initialization parameters itself, can be stored within the video decoder 30, thereby possibly reducing the amount of information stored within the video decoder 30. For example, in some cases, since the initialization parameter index values can be represented using less information than the initialization parameters, and since the initialization parameter index values can correspond to only a subset of the initialization parameters, a total amount of information used to store the initialization parameter index values and the subset of initialization parameters within the video decoder 30 can be reduced relative to the amount of information that would be needed to store all of the initialization parameters within the video decoder 30. Additionally, in other cases, the initialization parameter index values, instead of the initialization parameters, can be transmitted from the video encoder to the video decoder 30, thereby reducing a total amount of information transmitted from the video encoder to the video decoder 30.
[0174] As another example, as will be illustrated by the examples of Figures 6 to 8, the techniques of this description can improve data compression when a video encoder (e.g., video encoder 20) is configured to encode data, and the video decoder 30 is configured to decode the encoded data, using a context adaptive entropy encoding process. For example, the techniques can improve data compression by allowing the video decoder 30, or any components thereof, to initialize one or more contexts of the context adaptive entropy encoding process so that the one or more contexts include initial probabilities that are relatively more accurate compared to initial probabilities determined using other context initialization techniques. Additionally, in some examples, the techniques can further improve data compression by allowing the video decoder 30, or any components thereof, to subsequently update the probabilities of the one or more contexts so that the updated probabilities are more accurate compared to probabilities updated using other context probability updating techniques.
[0175] Accordingly, there may be a relative bit saving for an encoded bit stream that includes the encoded data decoded by the video decoder 30, or any components thereof, and, in some cases, the initialization parameter index values transmitted from a video encoder (e.g., video encoder 20) to the video decoder 30, and a relative reduction in the complexity of the video decoder 30 used to decode the encoded data, when using the techniques of this description.
[0176] Thus, the video decoder 30 represents an example of an apparatus for context adaptive entropy encoding, the apparatus comprising an encoder configured to determine one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values, determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters, and initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0177] Figure 4 is a conceptual diagram illustrating an example of a temporal hierarchy of a coded video sequence (CVS) coded using scalable video coding (SVC), consistent with the techniques of this description. As illustrated in Figure 4, a CVS may include a plurality of video frames, that is, frames 0 to 8, arranged in a temporal order, which may be referred to as an output or "display" order. When the CVS is encoded using SVC, as shown in Figure 4, some of the frames of the CVS, that is, frames 0, 4 and 8, can be encoded in a subset of the frames, which can be referred to as a "base layer" of the CVS, while other frames, that is, frames 1-3 and 5-7, can be encoded in one or more additional subsets of the frames of the CVS, each of which can be referred to as an "enhancement layer" of the CVS. For example, the base layer of the CVS can be transmitted and displayed on a display device. Additionally, one or more enhancement layers of the CVS can be selectively transmitted and displayed on the same display device along with the base layer. Thus, the CVS of Figure 4, comprising the base layer and the one or more enhancement layers described above, can be referred to as a CVS encoded using SVC.
[0178] As illustrated by the example of Figure 4, a particular video frame of a CVS that is encoded using SVC can be encoded in a hierarchical structure. As illustrated in Figure 4, frames 0, 4 and 8 can be encoded in a particular temporal layer (e.g., layer "0"), frames 2 and 6 can be encoded in another temporal layer (e.g., layer "1"), and the remaining frames, that is, frames 1, 3, 5 and 7, can be encoded in yet another temporal layer (e.g., layer "2"). In the example of Figure 4, layer 0 can be referred to as a base layer, and each of layers 1 and 2 can be referred to as an enhancement layer. Additionally, the dependence between the frames of Figure 4 may not be symmetrical. In other words, frames encoded in the lower temporal layers (e.g., layer 0) can serve as reference frames for frames encoded in the higher temporal layers (e.g., layers 1 and 2), as indicated by the arrows illustrated in Figure 4. Conversely, frames encoded in the higher temporal layers may not serve as reference frames for frames encoded in the lower temporal layers.
[0179] According to the techniques of this description, a temporal layer associated with video data, such as, for example, a video frame of a CVS encoded using SVC, can be used to initialize one or more contexts of a context adaptive entropy encoding process (e.g., a CABAC process) used to encode the video data. For example, the temporal layer associated with the video data, which can be represented using a temporal layer parameter, can be used as part of determining the initial context states for the one or more contexts of the context adaptive entropy encoding process, as described above with reference to Figures 1 to 3, and as will be described in more detail below with reference to the illustrative methods of Figures 5 to 8. Thus, the techniques of this description may, in some cases, allow the initialization of the one or more contexts so that the initial probabilities indicated by the initial context states for the one or more contexts are relatively more accurate compared to initial probabilities determined using other techniques (e.g., techniques that do not take into account a temporal layer associated with the video data when encoding the video data using a context adaptive entropy encoding process).
[0180] Figures 5 to 8 are flowcharts illustrating illustrative methods of initializing one or more contexts and probabilities of a context adaptive entropy encoding process, consistent with the techniques of this description. In particular, the techniques of the illustrative methods of Figures 5, 6 and 8 include determining initial context states for one or more contexts of a context adaptive entropy encoding process (e.g., a CABAC process) used to encode data (e.g., video data). Additionally, the techniques of the illustrative method of Figure 7 include determining initial probability values of one or more contexts of a context adaptive entropy encoding process used to encode data, in addition to updating the initial probability values based on the data.
[0181] The techniques of Figures 5 to 8 can generally be performed by any processing unit or processor, implemented in hardware, software, firmware, or a combination thereof, and when implemented in software or firmware, the corresponding hardware can be provided to execute the instructions for the software or firmware. For purposes of example, the techniques of Figures 5 to 8 are described with respect to the entropy encoding unit 56 of the video encoder 20 (Figure 2) and/or the entropy decoding unit 80 of the video decoder 30 (Figure 3), although it is understood that other devices can be configured to perform similar techniques. Furthermore, the steps illustrated in Figures 5 to 8 can be performed in a different order or in parallel, and additional steps can be added and certain steps omitted, without departing from the techniques of this description. Additionally, consistent with the techniques of this description, the techniques of the illustrative methods of Figures 5 to 8 may be performed individually or in combination with one another, including performing one or more of the techniques of the illustrative methods of Figures 5 to 8 sequentially or in parallel with one or more others of those techniques.
[0182] In the example of each of Figures 5 to 8, initially, the entropy encoding unit 56 and/or the entropy decoding unit 80 can receive a block of video data. For example, the block may comprise a macroblock, or a TU of a CU, as described above. In some examples, the entropy encoding unit 56 can encode the video data block using a context adaptive entropy encoding process (e.g., a CABAC process). Similarly, in other examples, in cases where the block is an encoded block of video data, the entropy decoding unit 80 can decode the block using the same or a similar context adaptive entropy encoding process as described above with reference to the entropy encoding unit 56. In other examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can encode or decode other types of data, e.g., data other than video data, using a context adaptive entropy encoding process. Thus, the illustrative methods of Figures 5 to 8 can be applicable to any encoding system that includes a video encoder, a video decoder, or any combination thereof, configured to encode video data using a context adaptive entropy encoding process. Additionally, the illustrative methods of Figures 5 to 8 can be applicable to techniques for encoding any of a wide variety of data, including data other than video data, using a context adaptive entropy encoding process.
[0183] Figure 5 is a flowchart illustrating an illustrative method of initializing one or more contexts of a context adaptive entropy encoding process used to encode data based on one or more initialization parameter index values, consistent with the techniques of this description. In other words, the techniques of the illustrative method of Figure 5 include determining an initial context state for each of one or more contexts of a context adaptive entropy encoding process used to encode data using one or more initialization parameters, where the one or more initialization parameters are determined using one or more initialization parameter index values.
[0184] As an example, to encode a block of video data, or other types of data, using a context adaptive entropy encoding process (e.g., a CABAC process) as described above, the entropy encoding unit 56 and/or the entropy decoding unit 80 may determine one or more initialization parameters for the context adaptive entropy encoding process based on one or more initialization parameter index values (500). For example, the one or more initialization parameters can correspond to the one or more "m" and "n" parameters described above. As also described above, the entropy encoding unit 56 and/or the entropy decoding unit 80 may use the values of the one or more "m" and "n" parameters to determine the initial context states for the contexts of the context adaptive entropy encoding process, for example, using the linear relationship described above with reference to H.264/AVC and certain draft versions of HEVC. Additionally, in accordance with the techniques of this description, the one or more initialization parameter index values may be represented using less information (e.g., fewer bits of data) than an amount of information used to represent the values of the one or more initialization parameters.
[0185] In one example, in cases where the one or more initialization parameters correspond to the one or more "m" and "n" parameters, the values of each of the one or more "m" and "n" parameters can be represented using 8 bits of data. As a result, in this example, 16 bits of data are used to represent each "pair" of "m" and "n" parameter values used to initialize a particular context. As an example, in cases where each initialization parameter index value is used to determine a value of a particular parameter among the one or more "m" and "n" parameters, each initialization parameter index value can be represented using 4 bits of data, resulting in the use of 8 bits of data to determine each pair of "m" and "n" parameter values. As another example, in cases where each initialization parameter index value is used to determine a particular pair of "m" and "n" parameter values, each initialization parameter index value can be represented using 8 bits of data, again resulting in the use of 8 bits of data to determine each pair of "m" and "n" parameter values.
[0186] Thus, instead of storing and/or transmitting 16 bits of data in order to initialize a particular context, only 8 bits of data are stored and/or transmitted. Additionally, since the one or more initialization parameter index values may correspond to only a subset of all possible initialization parameters, fewer than all possible initialization parameters can be used to initialize the contexts. For example, some of the contexts can be initialized using common initialization parameters. Nevertheless, any adverse effects associated with using the subset of initialization parameters, rather than all possible initialization parameters, can be outweighed by the reduced amount of information stored and/or transmitted, as described above.
[0187] The entropy encoding unit 56 and/or the entropy decoding unit 80 can additionally determine one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters (502). For example, as described above, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine the one or more initial context states based on the one or more initialization parameters using one or more relationships, such as, for example, the linear relationship described above with reference to H.264/AVC and certain draft versions of HEVC.
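As a concrete sketch, the linear relationship referenced here can be written in C as follows. The Clip3 bounds and the split of the 7-bit pre-state into a 6-bit probability state plus a most probable symbol (MPS) flag mirror the H.264/AVC CABAC initialization rule that this description cites; the exact constants should be read as illustrative of that family of rules rather than as limiting.

    #include <stdint.h>

    static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

    typedef struct { uint8_t state; uint8_t mps; } CabacContext;

    /* Derive an initial context state from a (slope, intersection) pair
     * (m, n) and the slice quantization parameter, following the
     * H.264/AVC-style linear rule referenced in the text:
     *   preCtxState = Clip3(1, 126, ((m * QP) >> 4) + n). */
    static CabacContext init_context(int m, int n, int slice_qp)
    {
        int pre = clip3(1, 126, ((m * clip3(0, 51, slice_qp)) >> 4) + n);
        CabacContext ctx;
        if (pre <= 63) { ctx.state = (uint8_t)(63 - pre); ctx.mps = 0; }
        else           { ctx.state = (uint8_t)(pre - 64); ctx.mps = 1; }
        return ctx;
    }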
[0188] The entropy encoding unit 56 and/or the entropy decoding unit 80 can further initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states (504). For example, as also previously described, the entropy encoding unit 56 and/or the entropy decoding unit 80 can set a context state of a particular context among the one or more contexts to a corresponding one of the one or more initial context states. As also described previously, the initialized context state of a particular context among the one or more contexts can, in turn, indicate an initial probability of the context.
[0189] In some examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 may further entropy encode data (e.g., the video data block, or other types of data) based on the one or more initialized contexts of the context adaptive entropy encoding process (506). For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can encode the data by performing the context adaptive entropy encoding process based on the one or more initialized contexts described above. As previously described, the data may include video data, such as, for example, a block of video data, and/or any other type of data. Additionally, in other examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can further update the context states of the one or more initialized contexts of the context adaptive entropy encoding process based on the data (508). For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can update the initial probabilities of the one or more initialized contexts, as indicated by the one or more initial context states described above, based on the data (for example, based on one or more data values).
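The per-bin update of step (508) can be sketched as a table-driven state transition in the manner of CABAC, reusing the CabacContext type from the sketch above. The transition tables here are placeholders standing in for the standardized 64-entry CABAC tables, whose contents are not reproduced in this description.

    /* Sketch of the context update of step (508): after each coded bin, the
     * context's probability state moves along a precomputed transition table,
     * and the most probable symbol is swapped when an LPS occurs at the most
     * uncertain state. trans_mps[] and trans_lps[] are placeholders for the
     * standardized 64-entry tables. */
    extern const uint8_t trans_mps[64]; /* next state after coding the MPS */
    extern const uint8_t trans_lps[64]; /* next state after coding the LPS */

    static void update_context(CabacContext *ctx, int bin)
    {
        if (bin == ctx->mps) {
            ctx->state = trans_mps[ctx->state];
        } else {
            if (ctx->state == 0)
                ctx->mps = !ctx->mps; /* swap the most probable symbol */
            ctx->state = trans_lps[ctx->state];
        }
    }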
[0190] In some examples, the one or more initialization parameters may be included in one or more tables. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 and/or the entropy decoding unit 80 can map the one or more initialization parameter index values to the one or more initialization parameters in the one or more tables.
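A minimal sketch of this table-based mapping follows, assuming the 4-bit slope and intersection index components described earlier; the table contents are placeholders rather than values taken from this description or from any standard.

    /* Table-based mapping of paragraph [0190]: each 4-bit index component
     * selects one of at most 16 stored values, so only the two 16-entry
     * tables plus one 8-bit index per context need to be stored, instead of
     * a full 16-bit (m, n) pair per context. The entries are placeholders. */
    static const int8_t slope_table[16]        = { 0 /* placeholder entries */ };
    static const int8_t intersection_table[16] = { 0 /* placeholder entries */ };

    static void lookup_init_params(uint8_t init_idx, int *m, int *n)
    {
        *m = slope_table[init_idx >> 4];
        *n = intersection_table[init_idx & 0x0F];
    }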
[0191] In other examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 and/or the entropy decoding unit 80 can calculate the one or more initialization parameters using the one or more initialization parameter index values and one or more formulas. For example, each of the one or more formulas can be implemented using only one or more operations, each selected from a group consisting of a bit-shift operation, an addition operation, a subtraction operation, a multiplication operation, and a division operation.
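As an alternative to the table lookup, the formula-based derivation can be sketched as below using only shifts, multiplications, and subtractions. The particular constants shown correspond to the derivation ultimately adopted in the published HEVC specification and are offered here as one concrete instance of such formulas, not as the only possibility contemplated by this description.

    /* Formula-based mapping of paragraph [0191]: the upper 4 bits of an
     * 8-bit initialization value select the slope index and the lower 4
     * bits select the intersection index, and each index is expanded by a
     * formula built only from a multiplication, a shift, and a subtraction
     * (constants as in the published HEVC specification). */
    static void compute_init_params(uint8_t init_value, int *m, int *n)
    {
        int slope_idx        = init_value >> 4;   /* upper 4 bits */
        int intersection_idx = init_value & 0x0F; /* lower 4 bits */
        *m = slope_idx * 5 - 45;
        *n = (intersection_idx << 3) - 16;
    }

The resulting (m, n) pair would then feed a state derivation such as the init_context() sketch above.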
[0192] In further examples, the one or more initialization parameters may include one or more slope values and one or more intersection values, and the one or more initialization parameter index values may include one or more slope index values and one or more intersection index values. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine the one or more slope values based on the one or more slope index values, and determine the one or more intersection values based on the one or more intersection index values.
[0193] In some examples, the one or more initialization parameters may again include one or more slope values and one or more intersection values. In these examples, to determine the one or more initialization parameters based on the one or more initialization parameter index values, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine at least one of the one or more slope values and at least one of the one or more intersection values based on a single value among the one or more initialization parameter index values. In other words, in some examples, each initialization parameter index value can be used to determine one or more slope values and one or more intersection values. As an example, each initialization parameter index value can be mapped to one or more slope values and one or more intersection values in one or more tables. As another example, each initialization parameter index value can be used to calculate one or more slope values and one or more intersection values based on one or more formulas. In other examples, however, each initialization parameter index value may include one or more components, or "subsets", which can be used to determine the one or more slope values and the one or more intersection values, as described in more detail below.
[0194] For example, in some examples, the single one of the one or more initialization parameter index values may include one or more slope index value components and one or more intersection index value components. In these examples, to determine the at least one of the one or more slope values and the at least one of the one or more intersection values based on the single one of the one or more initialization parameter index values, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine the at least one of the one or more slope values based on the one or more slope index value components, and determine the at least one of the one or more intersection values based on the one or more intersection index value components.
[0195] Additionally, in other examples, to determine the at least one of the one or more slope values based on the one or more slope index value components, and determine the at least one of the one or more intersection values based on the one or more intersection index value components, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine one of the one or more slope index value components and the one or more intersection index value components from the single one of the one or more initialization parameter index values using one or more bit-shift operations, and determine the other of the one or more slope index value components and the one or more intersection index value components from the single one of the one or more initialization parameter index values using one or more logical AND operations, as sketched above.
[0196] In still other examples, the single one of the one or more initialization parameter index values may include a predetermined number of bits, where each of the one or more slope index value components and the one or more intersection index value components includes a respective subset of the predetermined number of bits. In some examples, each of the subsets corresponding to the one or more slope index value components may include a different number of the predetermined number of bits than each of the subsets corresponding to the one or more intersection index value components. In other examples, each of the subsets corresponding to the one or more slope index value components may include the same number of the predetermined number of bits as each of the subsets corresponding to the one or more intersection index value components.
[0197] Additionally, in some examples, the one or more contexts of the context adaptive entropy encoding process may include a subset of the contexts of the context adaptive entropy encoding process. For example, the subset may correspond to a type of syntax associated with the video data encoded using the context adaptive entropy encoding process. In some examples, the syntax type may include one or more of a component type, a block size, a transform size, a prediction mode, motion information, and transform coefficient information associated with the video data.
[0198] Thus, the method of Figure 5 represents an example of a context adaptive entropy encoding method, the method comprising determining one or more initialization parameters for a context adaptive entropy encoding process based on one or more initialization parameter index values, determining one or more initial context states for initializing one or more contexts of the context adaptive entropy encoding process based on the one or more initialization parameters, and initializing the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0199] Figure 6 is a flowchart illustrating an illustrative method of initializing one or more contexts of a context adaptive entropy encoding process used to encode video data based on a temporal layer associated with the video data, consistent with the techniques of this description. In other words, the techniques of the illustrative method of Figure 6 include determining an initial context state for each of one or more contexts of a context adaptive entropy encoding process used to encode the video data using one or more initialization parameters and a temporal layer parameter that is indicative of a temporal layer associated with the video data.
[0200] As an example, to encode a block of video data, or other types of data, using the context adaptive entropy encoding process (e.g., a CABAC process) as described above, the entropy encoding unit 56 and/or the entropy decoding unit 80 may determine one or more initial context states for initializing one or more contexts of a context adaptive entropy encoding process used to encode the video data based on one or more initialization parameters and a temporal layer parameter associated with the video data (600). For example, the temporal layer parameter can correspond to a syntax element for the video data, where a value of the syntax element indicates a temporal layer associated with the video data. As described above with reference to Figure 4, the temporal layer associated with the video data may correspond to a location of the video data (e.g., of a particular video frame of the video data) within a temporal hierarchy associated with the video data.
[0201] In this way, the initial probabilities of one or more contexts, as indicated by the one or more initial context states, can be relatively more accurate compared to the initial probabilities of the contexts determined using other techniques, for example, techniques which determine the initial context states for contexts of a context adaptive entropy encoding process used to encode the video data, without taking into account a temporal layer associated with the video data.
[0202] Subsequently, the entropy encoding unit 56 and/or the entropy decoding unit 80 may further initialize the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states (602), as described above with reference to Figure 5. Additionally, in some examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can further entropy encode the video data based on the one or more initialized contexts of the context adaptive entropy encoding process (604), and in some cases update the context states of the one or more initialized contexts of the context adaptive entropy encoding process based on the video data (606), in a manner similar to that described above with reference to Figure 5.
[0203] In some examples, the video data may include one of a video data frame and a slice of a video data frame. In these examples, the temporal layer parameter may include a temporal layer of the respective video data frame.
[0204] In other examples, to determine the one or more initial context states based on the one or more initialization parameters and the temporal layer parameter, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine the one or more initial context states based on the one or more initialization parameters and one of an offset parameter that varies based on the temporal layer parameter and an initialization quantization parameter value that varies based on the temporal layer parameter.
[0205] In some examples, to determine the one or more initial context states based on the one or more initialization parameters and the offset parameter, the entropy encoding unit 56 and/or the entropy decoding unit 80 may modify a quantization parameter value associated with the video data using the offset parameter. In other examples, each of the offset parameter and the initialization quantization parameter value may further vary based on one or more of a slice type, a frame resolution, and a reference frame list size associated with the video data. Additionally, in some examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 may further code one or more of the offset parameter and the initialization quantization parameter value to be included in at least one of a picture parameter set (PPS), a sequence parameter set (SPS), and an adaptation parameter set (APS) associated with the video data, for example, or another parameter set or high-level syntax location.
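A minimal sketch of this temporal-layer-dependent initialization follows, assuming a small per-layer QP offset table; the table values and function names are illustrative, and in practice the offset could be fixed or signaled in a PPS, SPS, or APS as just described.

    /* Sketch of paragraphs [0204]-[0205]: the quantization parameter used
     * for context initialization is modified by an offset that varies with
     * the temporal layer of the current frame or slice, before the usual
     * state derivation is applied. The offset table is hypothetical. */
    static int init_qp_for_layer(int slice_qp, int temporal_id)
    {
        static const int layer_qp_offset[4] = { 0, 1, 2, 3 }; /* hypothetical */
        return slice_qp + layer_qp_offset[temporal_id & 3];
    }

The result would then feed the same derivation as before, for example init_context(m, n, init_qp_for_layer(slice_qp, temporal_id)).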
[0206] Additionally, in some examples, the one or more contexts of the context adaptive entropy encoding process may include a subset of the contexts of the context adaptive entropy encoding process. As an example, the subset may correspond to a syntax type associated with the video data. As another example, the syntax type may include one or more of a component type, a block size, a transform size, a prediction mode, motion information, and transform coefficient information associated with the video data.
[0207] Thus, the method of Figure 6 represents an example of a context adaptive entropy encoding method, the method comprising determining one or more initial context states for initializing one or more contexts of a context adaptive entropy encoding process used to encode video data based on one or more initialization parameters and a temporal layer parameter associated with the video data, and initializing the one or more contexts of the context adaptive entropy encoding process based on the one or more initial context states.
[0208] Figure 7 is a flowchart illustrating an illustrative method of initializing one or more contexts of a context adaptive entropy encoding process based on one or more probability offsets, consistent with the techniques of this description. In other words, the techniques of the illustrative method of Figure 7 include determining an initial probability of each of one or more contexts of a context adaptive entropy encoding process used to encode data, so that the initial probability of each context is located within a subset of an overall range of probability values defined by a lower bound and an upper bound. In the illustrative method of Figure 7, the subset is defined by one or more offsets from one or more of the lower bound and the upper bound of the range of probability values.
[0209] As an example, to encode a block of video data, or other types of data, using a context adaptive entropy encoding process (e.g., a CABAC process) as described above, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine a first value (700). For example, the first value may correspond to an initial probability of a particular context of the context adaptive entropy encoding process that is derived using a given context probability initialization technique. In this example, the initial probability of the context can correspond to a "default" initial probability, for example, derived using the context probability initialization technique described above with reference to H.264/AVC and HEVC. Unlike other techniques, however, the techniques of the illustrative method of Figure 7 include determining whether a second value, different from the first value, can result in a relatively more accurate (e.g., less biased) initial probability of the context, and, based on the determination, selecting the first value or the second value as the initial probability of the context.
[0210] For example, in case the first value falls within a range of values defined by a lower limit, an upper limit, and one or more offsets from one or more of the lower and upper limits (702, "YES"), the entropy encoding unit 56 and/or the entropy decoding unit 80 can select the first value (704). In case the first value is outside the range of values (702, "NO"), however, the entropy encoding unit 56 and/or the entropy decoding unit 80 may instead select a second value, where the second value is different from the first value (706). As explained above, the second value can correspond to a different probability of the context that is more accurate relative to the probability corresponding to the first value, and vice versa.
[0211] Thus, the initial probability of the context can be relatively more accurate compared to an initial probability of a context determined using other techniques, for example, techniques that determine an initial probability of a context of a context adaptive entropy encoding process used to encode the data without taking into account a relative location of the initial probability within a range of probability values. For example, according to some techniques, the initial probability of the context can be located relatively close to a lower bound or an upper bound of the range of probability values, possibly resulting in an inaccurate initial probability.
[0212] Subsequently, the entropy encoding unit 56 and/or the entropy decoding unit 80 can initialize a probability of a context of a context adaptive entropy encoding process based on the selected first or second value (708). For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can initialize the probability of the context by setting the probability of the context to the selected first or second value.
[0213] In some examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can further entropy encode data (for example, the video data block, or other types of data) based on the initialized probability of the context of the context adaptive entropy encoding process (710). For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can encode the data by performing the context adaptive entropy encoding process based on the initialized probability described above, in addition to one or more other probabilities of contexts initialized in the same or a similar way as described above.
[0214] Additionally, in other examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can further update the initialized probability of the context of the context adaptive entropy encoding process based on the data and one or more offsets (712). For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can update the initialized probability of the context based on the data (e.g., based on one or more data values) in a manner similar to that described above with reference to Figures 5 and 6. In addition, the entropy encoding unit 56 and/or the entropy decoding unit 80 can also update the initialized probability of the context based on the one or more offsets, so that the updated probability is also located within the previously described subset of the overall range of probability values defined by the lower bound and the upper bound. In other words, in some examples, the techniques of the illustrative method of Figure 7 may be applicable to determining an initial probability of a particular context, in addition to continuously updating the probability of the context. In other examples, however, the entropy encoding unit 56 and/or the entropy decoding unit 80 can update the initialized probability of the context based on the data only, in a manner similar to that described above with reference to Figures 5 and 6.
[0215] In some examples, the one or more offsets may include a first offset and a second offset. In these examples, to select the first value in case the first value is within the range of values, the entropy encoding unit 56 and/or the entropy decoding unit 80 can select the first value if the first value is greater than a lower limit value plus the first offset and less than an upper limit value minus the second offset. In some examples, the first offset may be the same as the second offset. In other examples, the first offset may be different from the second offset.
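For illustration, the selection rule of paragraphs [0210] and [0215] can be sketched as follows. The choice of the nearest interior point as the second value is one plausible realization and is not mandated by this description; all names, bounds, and offsets are illustrative.

    /* Selection rule of paragraphs [0210] and [0215]: keep the derived
     * first value only if it lies strictly inside the restricted sub-range
     * (lower + off1, upper - off2); otherwise substitute a second, different
     * value, here chosen as the nearest interior point of the sub-range. */
    static int select_initial_probability(int first, int lower, int upper,
                                          int off1, int off2)
    {
        if (first > lower + off1 && first < upper - off2)
            return first;               /* first value is acceptable (704) */
        if (first <= lower + off1)
            return lower + off1 + 1;    /* second value near the low edge  */
        return upper - off2 - 1;        /* second value near the high edge */
    }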
[0216] In other additional examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can further update the probability of the context of the context adaptive entropy encoding process based on the one or more offsets, as already described above.
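The corresponding clamped update of paragraphs [0214] and [0216] can be sketched as a post-adaptation clamp; again, the names are illustrative and the clamp is only one plausible realization.

    /* Clamped update of paragraphs [0214] and [0216]: after the normal
     * adaptation step, the updated probability is forced back into the
     * restricted sub-range so that it never drifts closer to the bounds
     * than the configured offsets allow. */
    static int clamp_updated_probability(int p_updated, int lower, int upper,
                                         int off1, int off2)
    {
        if (p_updated < lower + off1) return lower + off1;
        if (p_updated > upper - off2) return upper - off2;
        return p_updated;
    }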
[0217] Thus, the method of Figure 7 represents an example of a context adaptive entropy encoding method, the method comprising determining a first value; in case the first value is within a range of values defined by a lower limit, an upper limit, and one or more offsets from one or more of the lower limit and the upper limit, selecting the first value; in case the first value is outside the range of values, selecting a second value, where the second value is different from the first value; and initializing a probability of a context of the context adaptive entropy encoding process based on the selected first or second value.
[0218] Figure 8 is a flowchart illustrating an illustrative method of initializing one or more contexts of a context adaptive entropy encoding process based on reference context state and quantization parameter values and one or more relationships, consistent with the techniques of this description. In other words, the techniques of the illustrative method of Figure 8 include determining an initial context state for each of one or more contexts of a context adaptive entropy encoding process used to encode video data using three or more sets of values, each including a reference context state value and a corresponding reference quantization parameter value, and one or more relationships.
[0219] As an example, to encode a block of video data, or other types of data, using a context adaptive entropy encoding process (e.g., a CABAC process) as described above, the entropy encoding unit 56 and/or the entropy decoding unit 80 can determine an initial context state for initializing a context of a context adaptive entropy encoding process used to encode the video data based on an initialization parameter defining three or more reference context states, each corresponding to a respective one of three or more reference quantization parameter values, and a quantization parameter value associated with the video data (800). For example, the three or more reference context states and the corresponding three or more reference quantization parameter values can be three or more predetermined sets, or "pairs", of values, each including a reference context state value and a corresponding reference quantization parameter value. As an example, the reference context state value of each pair can be derived prior to performing step (800) using the corresponding reference quantization parameter value of the respective pair and one or more relationships. In some examples, the one or more relationships may include, for example, the linear relationship described above with reference to H.264/AVC and certain draft versions of HEVC, the relationship used in the illustrative method of Figure 6 that takes into account a temporal layer associated with the video data, or any other relationship used to determine a context state for a context based on a quantization parameter value. Additionally, the quantization parameter value associated with the video data can be a quantization parameter value associated with one or more frames, slices, blocks, or other parts of the video data.
[0220] Thus, an initial probability of the context, as indicated by the initial context state, can be relatively more accurate compared to an initial probability of a context determined using other techniques, for example, techniques that determine the initial context states for the contexts of a context adaptive entropy encoding process used to encode the video data using a linear relationship defined by slope and intersection values and by a quantization parameter associated with the video data. An example of such a linear relationship was described above with reference to H.264/AVC and certain draft versions of HEVC. In accordance with the techniques of the illustrative method of Figure 8, the initial probability of the context can be derived using an initialization parameter and a quantization parameter value associated with the video data together with various (e.g., linear and non-linear) interpolation techniques, which can result in the initial probability of the context being relatively more accurate.
[0221] Subsequently, the entropy encoding unit 56 and/or the entropy decoding unit 80 can initialize the context of the context adaptive entropy encoding process based on the initial context state (802), in a manner similar to that described above with respect to Figures 5 and 6. Additionally, in some examples, the entropy encoding unit 56 and/or the entropy decoding unit 80 can further entropy encode the video data based on the initialized context of the context adaptive entropy encoding process (804) and, in some cases, update a context state of the initialized context of the context adaptive entropy encoding process based on the video data (806), also in a manner similar to that described above with reference to Figures 5 and 6.
[0222] In some examples, to determine the initial context state based on the initialization parameter and the quantization parameter value associated with the video data, the entropy encoding unit 56 and/or the entropy decoding unit 80 may linearly interpolate between the three or more reference context states and the corresponding three or more reference quantization parameter values using the quantization parameter value associated with the video data. For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can use linear or partially linear interpolation techniques to determine the initial context state based on the initialization parameter and the quantization parameter value associated with the video data.
[0223] In other examples, to determine the initial context state based on the initialization parameter and the quantization parameter value associated with the video data, the entropy encoding unit 56 and/or the entropy decoding unit 80 can fit a curve between the three or more reference context states and the corresponding three or more reference quantization parameter values, and interpolate between the three or more reference context states and the corresponding three or more reference quantization parameter values using the fitted curve and the quantization parameter value associated with the video data. For example, the entropy encoding unit 56 and/or the entropy decoding unit 80 can use spline-based, bilinear, or any other non-linear or partially non-linear interpolation technique to determine the initial context state based on the initialization parameter and the quantization parameter value associated with the video data.
[0224] Additionally, in other examples, each of the three or more reference quantization parameter values may be offset with respect to each other of the three or more reference quantization parameter values by a value that is an integer multiple of "two". For example, as described above, a first reference quantization parameter value, for example, "QP1", can be equal to "26", a second reference quantization parameter value, for example, "QP2", can be equal to "QP1-8", or "18", and a third reference quantization parameter value, "QP3", can be equal to "QP1+8", or "34". In this example, each of the reference quantization parameter values QP1, QP2 and QP3 is offset from each of the others by a value that is an integer multiple of "2", in this case a multiple of "8". In other examples, each of the three or more reference quantization parameter values may be offset relative to one another by any other value, including any other value that is an integer multiple of "2".
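One way to realize the linear interpolation of paragraph [0222] over the three reference pairs of paragraph [0224] is sketched below; because the reference quantization parameter values are 8 apart, the division by the spacing reduces to a shift. The state values s18, s26, and s34 would come from the initialization parameter; the rounding offset is illustrative.

    /* Piecewise-linear interpolation over the three reference (QP, state)
     * pairs of paragraph [0224]: (18, s18), (26, s26), (34, s34). The
     * spacing of 8 QP units turns the division into a right shift by 3;
     * the "+ 4" performs rounding. Clamping of out-of-range QP values is
     * omitted for brevity. */
    static int interpolate_state(int qp, int s18, int s26, int s34)
    {
        if (qp <= 26)
            return s18 + (((s26 - s18) * (qp - 18) + 4) >> 3);
        return s26 + (((s34 - s26) * (qp - 26) + 4) >> 3);
    }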
[0225] Thus, the method of Figure 8 represents an example of a context adaptive entropy encoding method, the method comprising determining an initial context state for initializing a context of a context adaptive entropy encoding process used to encode video data based on an initialization parameter that defines three or more reference context states, each corresponding to a respective one of three or more reference quantization parameter values, and a quantization parameter value associated with the video data, and initializing the context of the context adaptive entropy encoding process based on the initial context state.
[0226] In one or more examples, the functions described can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions can be stored on, or transmitted over, a computer-readable medium as one or more instructions or code, and executed by a hardware-based processing unit. The computer-readable medium may include computer-readable storage media, which correspond to a tangible medium such as a data storage medium, or communication media including any medium that facilitates the transfer of a computer program from one place to another, for example, according to a communication protocol. Thus, the computer-readable medium can generally correspond to (1) a tangible computer-readable storage medium that is non-transitory or (2) a communication medium such as a signal or carrier wave. The data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this description. A computer program product may include a computer-readable medium.
[0227] By way of example, and not limitation, such computer-readable storage media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a web site, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0228] Instructions can be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor", as used herein, may refer to any of the structures mentioned above or any other structure suitable for implementing the techniques described herein. Additionally, in some aspects, the functionality described here may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Furthermore, the techniques can be fully implemented in one or more circuits or logic elements.
[0229] The techniques of this description can be implemented in a wide variety of devices or apparatuses, including a wireless device, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this description to emphasize the functional aspects of devices configured to perform the described techniques, but they do not necessarily require realization by different hardware units. Instead, as described above, multiple units can be combined in a codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, together with appropriate software and/or firmware.
[0230] Several examples have been described. These and other examples are within the scope of the following claims.
Claims

1. A context initialization method used to encode video data in a context adaptive binary arithmetic coding (CABAC) process, the method characterized in that it comprises: determining a first 4-bit initialization parameter index value, m, as x>>4, where x is an 8-bit parameter, and where ">>" indicates a right-shift operation; determining a second 4-bit initialization parameter index value, n, as x&15, where x is the same 8-bit parameter, and "&" indicates a logical AND operation; determining a slope value and an intersection value from the corresponding first and second initialization parameter index values, wherein determining the slope value and the intersection value comprises calculating the slope value and the intersection value from the first and second initialization parameter index values using one or more formulas; and initializing the context state used to encode video data in the context adaptive binary arithmetic coding (CABAC) process using the determined slope value and intersection value, wherein a sufficiently large range of slope and intersection values to initialize the context state is covered by using m to represent a slope index value and n to represent an intersection index value.

2. Method according to claim 1, characterized in that it further comprises encoding the video data using the CABAC process.

3. Method according to claim 1, characterized in that it further comprises decoding the video data using the CABAC process.

4. Method according to claim 1, characterized in that each of the one or more formulas is implemented using only one or more operations selected from a group consisting of: a bit-shift operation; an addition operation; a subtraction operation; a multiplication operation; and a division operation.

5. Device for initializing a context used to encode video data in a context adaptive binary arithmetic coding (CABAC) process, the device characterized in that it comprises: means for determining a first 4-bit initialization parameter index value, m, as x>>4, where x is an 8-bit parameter, and where ">>" indicates a right-shift operation; means for determining a second 4-bit initialization parameter index value, n, as x&15, where x is the same 8-bit parameter, and "&" indicates a logical AND operation; means for determining a slope value and an intersection value from the corresponding first and second initialization parameter index values, wherein determining the slope value and the intersection value comprises calculating the slope value and the intersection value from the first and second initialization parameter index values using one or more formulas; and means for initializing the context state used to encode video data in the context adaptive binary arithmetic coding (CABAC) process using the determined slope value and intersection value, wherein a sufficiently large range of slope and intersection values to initialize the context state is covered by using m to represent a slope index value and n to represent an intersection index value.

6. Device according to claim 5, characterized in that it further comprises means for encoding the video data using the CABAC process.

7. Device according to claim 5, characterized in that it further comprises means for decoding the video data using the CABAC process.

8. Device according to claim 5, characterized in that each of the one or more formulas is implemented using only one or more operations selected from a group consisting of: a bit-shift operation; an addition operation; a subtraction operation; a multiplication operation; and a division operation.

9. Computer-readable memory characterized in that it comprises instructions stored therein, the instructions being computer-executable to carry out the method steps as defined in any one of claims 1 to 4.
类似技术:
公开号 | 公开日 | 专利标题
BR112014010052B1|2021-07-20|CONTEXT INITIALIZATION METHOD AND DEVICE USED TO ENCODE VIDEO DATA IN A BINARY ARITHMETIC ENCODING PROCESS ADAPTIVE TO THE CONTEXT, AND COMPUTER-READABLE MEMORY
US9503715B2|2016-11-22|Constrained intra prediction in video coding
JP6046164B2|2016-12-14|Context determination for coding transform coefficient data in video coding
KR102283307B1|2021-07-28|Rice parameter update for coefficient level coding in video coding process
KR102315238B1|2021-10-19|Rice parameter initialization for coefficient level coding in video coding process
JP2018533261A|2018-11-08|Image prediction method and apparatus
BR122020003135B1|2021-07-06|METHOD AND DEVICE FOR DECODING VIDEO DATA AND COMPUTER-READABLE NON- TRANSIENT STORAGE MEDIA
JP6231109B2|2017-11-15|Context derivation for context-adaptive, multilevel significance coding
JP2016511975A|2016-04-21|Simplified mode decision for intra prediction
BR112014011060B1|2021-07-13|CONTEXT REDUCTION NUMBER FOR ADAPTIVE BINARY ARITHMETIC CONTEXT CODING
JP6527877B2|2019-06-05|Coefficient level coding in video coding process
JP2017225132A|2017-12-21|Context adaptive entropy coding with reduced initialization value set
US20130114691A1|2013-05-09|Adaptive initialization for context adaptive entropy coding
BR112020006572A2|2020-10-06|binary arithmetic coding with progressive modification of adaptation parameters
BR112021003315A2|2021-05-11|reduction from regular coded compartment to coefficient coding using limit and rice parameter
BR112021011060A2|2021-08-31|ESCAPE CODING FOR COEFFICIENT LEVELS
BR112014011065B1|2021-10-05|CONTEXT REDUCTION NUMBER FOR ADAPTIVE BINARY ARITHMETIC CODING TO THE CONTEXT
Patent family:
Publication number | Publication date
EP2774272B1|2018-10-17|
BR112014010052A2|2017-06-13|
CN103975532B|2017-08-25|
US9484952B2|2016-11-01|
DK2774272T3|2019-01-28|
PH12014500984A1|2014-10-20|
ES2705746T3|2019-03-26|
BR112014010052A8|2017-06-20|
PT2774272T|2019-01-24|
HUE041771T2|2019-05-28|
IL232156D0|2014-05-28|
JP5882489B2|2016-03-09|
EP2774272A1|2014-09-10|
SG11201401486SA|2014-09-26|
JP2014535222A|2014-12-25|
KR101638712B1|2016-07-11|
RU2014122321A|2015-12-10|
PH12014500984B1|2014-10-20|
WO2013067186A1|2013-05-10|
SI2774272T1|2019-02-28|
TWI527464B|2016-03-21|
US20130114675A1|2013-05-09|
CA2853808A1|2013-05-10|
CN103975532A|2014-08-06|
PL2774272T3|2019-05-31|
AU2012332444B2|2015-08-13|
IL232156A|2017-09-28|
RU2576587C2|2016-03-10|
TW201338548A|2013-09-16|
CA2853808C|2017-06-13|
AU2012332444A1|2014-05-22|
KR20140093698A|2014-07-28|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

US6894628B2|2003-07-17|2005-05-17|Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V.|Apparatus and methods for entropy-encoding or entropy-decoding using an initialization of context variables|
RU2336661C2|2005-04-19|2008-10-20|Samsung Electronics Co., Ltd.|Method and device for adaptive choice of context model for entropy encoding|
US7221296B2|2005-08-22|2007-05-22|Streaming Networks Ltd.|Method and system for fast context based adaptive binary arithmetic coding|
KR100873636B1|2005-11-14|2008-12-12|Samsung Electronics Co., Ltd.|Method and apparatus for encoding/decoding image using single coding mode|
EP2171856A4|2007-07-19|2010-08-04|Research In Motion Ltd|Method and system for reducing contexts for context based compression systems|
US7535387B1|2007-09-10|2009-05-19|Xilinx, Inc.|Methods and systems for implementing context adaptive binary arithmetic coding|
US7557740B1|2008-04-18|2009-07-07|Realtek Semiconductor Corp.|Context-based adaptive binary arithmetic coding decoding apparatus and decoding method thereof|
WO2010021699A1|2008-08-19|2010-02-25|Thomson Licensing|Context-based adaptive binary arithmetic coding video stream compliance|
US7932843B2|2008-10-17|2011-04-26|Texas Instruments Incorporated|Parallel CABAC decoding for video decompression|
US7982641B1|2008-11-06|2011-07-19|Marvell International Ltd.|Context-based adaptive binary arithmetic coding engine|
CN103119849B|2010-04-13|2017-06-16|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Probability interval partition encoding device and decoder|
PT3471415T|2011-06-16|2021-11-04|GE Video Compression Llc|Entropy coding of motion vector differences|
KR20140057533A|2014-05-13|Telefonaktiebolaget LM Ericsson|An encoder and method thereof for assigning a lowest layer identity to clean random access pictures|
US10116951B2|2011-11-07|2018-10-30|Sharp Laboratories Of America, Inc.|Video decoder with constrained dynamic range|
US9167261B2|2011-11-07|2015-10-20|Sharp Laboratories Of America, Inc.|Video decoder with constrained dynamic range|
WO2014000160A1|2012-06-26|2014-01-03|Intel Corporation|Inter-layer coding unit quadtree pattern prediction|
US9992490B2|2012-09-26|2018-06-05|Sony Corporation|Video parameter setsyntax re-ordering for easy access of extension parameters|
KR101391693B1|2012-11-14|2014-05-07|LG Chem, Ltd.|Elastic terpolymer and preparation method thereof|
US20150016509A1|2013-07-09|2015-01-15|Magnum Semiconductor, Inc.|Apparatuses and methods for adjusting a quantization parameter to improve subjective quality|
WO2015054813A1|2013-10-14|2015-04-23|Microsoft Technology Licensing, Llc|Encoder-side options for intra block copy prediction mode for video and image coding|
RU2666635C2|2013-10-14|2018-09-11|Microsoft Technology Licensing, LLC|Features of base colour index map mode for video and image coding and decoding|
CN105765974B|2013-10-14|2019-07-02|Microsoft Technology Licensing, LLC|Features of intra block copy prediction mode for video and image coding and decoding|
US10390034B2|2014-01-03|2019-08-20|Microsoft Technology Licensing, Llc|Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area|
EP3090553A4|2014-01-03|2017-12-20|Microsoft Technology Licensing, LLC|Block vector prediction in video and image coding/decoding|
US10542274B2|2014-02-21|2020-01-21|Microsoft Technology Licensing, Llc|Dictionary encoding and decoding of screen content|
KR101983441B1|2014-05-28|2019-08-28|Arris Enterprises LLC|Acceleration of context adaptive binary arithmetic coding in video codecs|
WO2015192353A1|2014-06-19|2015-12-23|Microsoft Technology Licensing, Llc|Unified intra block copy and inter prediction modes|
CN105874795B|2014-09-30|2019-11-29|微软技术许可有限责任公司|When wavefront parallel processing is activated to the rule of intra-picture prediction mode|
US10841586B2|2014-11-20|2020-11-17|LogMeIn, Inc.|Processing partially masked video content|
JP2016218721A|2015-05-20|2016-12-22|Sony Corporation|Memory control circuit and memory control method|
US10148961B2|2015-05-29|2018-12-04|Qualcomm Incorporated|Arithmetic coder with multiple window sizes|
CN106664405B|2015-06-09|2020-06-09|微软技术许可有限责任公司|Robust encoding/decoding of escape-coded pixels with palette mode|
CN106303550B|2015-06-11|2019-06-21|华为技术有限公司|Block-eliminating effect filtering method and block elimination effect filter|
US10616582B2|2016-09-30|2020-04-07|Qualcomm Incorporated|Memory and bandwidth reduction of stored data in image/video coding|
US10757412B2|2017-01-03|2020-08-25|Avago Technologies International Sales Pte. Limited|Architecture flexible binary arithmetic coding system|
CN107580224B|2017-08-08|2019-11-22|西安理工大学|A kind of adaptive scanning method towards HEVC entropy coding|
US11039143B2|2017-11-20|2021-06-15|Qualcomm Incorporated|Memory reduction for context initialization with temporal prediction|
US10986349B2|2017-12-29|2021-04-20|Microsoft Technology Licensing, Llc|Constraints on locations of reference blocks for intra block copy prediction|
US10986354B2|2018-04-16|2021-04-20|Panasonic Intellectual Property Corporation Of America|Encoder, decoder, encoding method, and decoding method|
US11178399B2|2019-03-12|2021-11-16|Qualcomm Incorporated|Probability initialization for video coding|
Legal status:
2018-03-27| B15K| Others concerning applications: alteration of classification|Ipc: H03M 7/40 (2006.01), H04N 19/13 (2014.01) |
2018-12-04| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-11-05| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-05-25| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-06-01| B350| Update of information on the portal [chapter 15.35 patent gazette]|
2021-07-20| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 01/11/2012, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US201161555469P| true| 2011-11-03|2011-11-03|
US61/555,469|2011-11-03|
US201161556808P| true| 2011-11-07|2011-11-07|
US61/556,808|2011-11-07|
US201161557785P| true| 2011-11-09|2011-11-09|
US61/557,785|2011-11-09|
US201161560107P| true| 2011-11-15|2011-11-15|
US61/560,107|2011-11-15|
US13/665,467|US9484952B2|2011-11-03|2012-10-31|Context state and probability initialization for context adaptive entropy coding|
US13/665,467|2012-10-31|
PCT/US2012/063070|WO2013067186A1|2011-11-03|2012-11-01|Context state and probability initialization for context adaptive entropy coding|