Patent Abstract:
Video quality estimator, video quality estimation method, and program. The present invention provides a video quality estimator (1) which includes a packet analysis unit (10) that derives the bit rate of an encoded video packet contained in an input packet and the amount of bits of the encoded video for each video frame type, a frame characteristic estimation unit (11) that derives the frame characteristic of each video frame type from the bit rate derived by the packet analysis unit (10), and an encoding quality estimation unit (12) that derives, based on the bit rate and the bit amount of each video frame type, a video quality value that quantitatively represents the quality of encoded video data affected by encoding degradation. The video quality estimator thereby performs a more accurate video quality estimation that takes the bit amount of each video frame type into account.
Publication number: BR112012008605B1
Application number: R112012008605-7
Filing date: 2010-03-25
Publication date: 2021-09-08
Inventors: Kazuhisa Yamagishi; Takanori Hayashi; Jun Okamoto
Applicant: Nippon Telegraph And Telephone Corporation;
IPC main classification:
Patent Description:

Technical Field
[0001] The present invention relates to a video quality estimating apparatus, a video quality estimating method, and a program, and more particularly to a video quality estimating apparatus, video quality estimating method, and program for estimating the quality of encoded video in an IPTV service, video distribution service, videophone service, or the like provided over an IP network such as the Internet. Background of the Invention
[0002] As Internet access lines grow in speed and bandwidth, video communication services, which transfer video media including video and audio between terminals or between servers and terminals over the Internet, are expected to become increasingly popular.
[0003] The Internet is a network that does not necessarily guarantee communication quality. When communicating using audio and video media, the bit rate drops if the line bandwidth between user terminals is narrow, and packet loss or packet transfer delay occurs if the line becomes congested. This degrades the quality of the audio and video media perceived by users (QoE: Quality of Experience).
[0004] More specifically, when a video is encoded, blocks of the video signal within a frame may be degraded, or the high-frequency components of the video signal may be lost, lowering the resolution of the entire video. When the encoded video content is packetized and transmitted from a provider, packet loss or packet transfer delay within the network or terminal equipment causes unexpected degradation of the video.
[0005] As a result, the user perceives blurring, smearing, mosaic-like distortion, a freeze (a state in which a video frame stops), or a skip (a state in which several video frames are lost) in the video.
[0006] To confirm that video communication services as mentioned above are provided with high quality, it is important to measure the QoE of a video and manage the quality of the video to be provided to the user while providing services.
[0007] Therefore, a video quality evaluation technique capable of properly representing the QoE of a video is required.
[0008] As conventional methods to assess the qualities of videos and audios, there are a subjective quality assessment method (non-patent literature 1) and an objective quality assessment method (non-patent literature 2).
[0009] In the subjective quality assessment method, a plurality of users actually watch the videos or listen to the audios, and assess the QoE on a five-grade quality scale (excellent, good, fair, poor, and bad; 9- or 11-grade scales are also used) or an impairment scale (imperceptible; perceptible but not annoying; slightly annoying; annoying; very annoying). The video or audio quality assessment values obtained under each condition (e.g., 0% packet loss rate and 20 Mbps bit rate) are averaged over the total number of users. The average value is defined as an MOS (Mean Opinion Score) value or a DMOS (Degradation Mean Opinion Score) value.
[00010] However, the subjective quality assessment method requires special dedicated equipment (e.g., monitors) and an assessment facility in which the assessment environment (e.g., ambient illuminance and ambient noise) can be controlled. In addition, many users actually need to rate the videos or audios. Since it takes time until the users complete the actual assessment, the subjective quality assessment method is not suitable for assessing quality in real time.
[00011] This increases the demand for an objective quality assessment method that outputs a video quality assessment value using an amount of resource (e.g., the bit rate, the amount of bits per frame, or packet loss information) that affects the video quality.
[00012] A conventional objective quality assessment method detects the quality degradation caused by encoding a video, and estimates the individual video quality value or average video quality value of the video (Non-patent Literature 2).
[00013] The individual video quality value is the quality assessment value of each video content to be estimated, and is defined by a value from 1 to 5 (in some cases defined by another range, for example from 1 to 9 or 0 to 100). The average video quality value is a value obtained by dividing the sum of the individual video quality values of the respective video contents to be estimated by the total number of video contents to be estimated, and is defined by a value of 1 to 5 (in some cases defined by another range, for example 1 to 9 or 0 to 100).
[00014] For example, suppose that the number of videos (a plurality of streamed videos will be called a "video subset") streamed under the same condition (e.g., 0% packet loss rate and 20 Mbps bit rate) out of arbitrary video contents (a video set) is eight. Then the quality assessment values of the respective eight videos contained in the video subset are the individual video quality values, and the value obtained by dividing the sum of the individual video quality values of the video subset by eight, the number of videos contained in the video subset, is the average video quality value.
[00015] Figure 8 is a view that conceptually explains the relationship between the video set and the video subset. As shown in Figure 8, a video subset means a specific set of videos used for video quality assessment out of a video set, that is, a set containing an infinite number of arbitrary videos.
[00016] There is also a known conventional objective quality assessment method to detect quality degradation caused by video encoding degradation or packet loss, and estimate the video quality assessment value of the video (non-patent literature 3 and patent literature 1). The video quality assessment value indicates the quality assessment value of each video content to be estimated, and is defined by a value from 1 to 5 (as described in the description of the subjective quality assessment method, the assessment of grade 9 or 11 can be adopted, and the quality assessment value can be designated by another range, for example, from 1 to 9 or 0 to 100).
[00017] As described above, the conventional objective quality assessment methods estimate a video quality assessment value using video packets or video signals (pixel values). Non-patent Literature 2 and Patent Literature 1 describe techniques for estimating a video quality assessment value from packet header information only. Non-patent Literature 3 describes a technique for estimating a video quality assessment value from video signals.
[00018] The relationship between the video frame type and the GoP (Group of Pictures) structure of an encoded video when transmitting compressed video frames, and the relationship between the video frame type and the quality assessment value of an encoded video, will now be explained. <Video Frame Type>
[00019] Compressed video frames are classified into three types: I frame (Intraframe), P frame (Predictive frame), and B frame (Bidirectional frame).
[00020] The I frame is a frame that is encoded independently, without reference to the preceding and subsequent frames. The P frame is a frame that is predicted from a past frame within consecutive frames, i.e., encoded by forward prediction. The B frame is a frame that is encoded by bidirectional prediction from past and future frames. <Relationship between GoP Structure and Video Frame Type>
[00021] The GoP structure of an encoded video represents the interval in which the video frames of the respective video frame types are arranged.
[00022] For example, Figure 24 is a view that conceptually explains a GoP structure represented by M = 3 and N = 15 (M is an interval that corresponds to the number of frames in unidirectional prediction, and N is the interval between I frames ).
[00023] In an encoded video that has the GoP structure, as shown in Figure 24, two B frames are inserted between an I frame and a P frame and between P frames, and the interval between I frames is 15 frames. <Bit Quantities of Respective Video Frame Types>
[00024] Bit amounts of compressed video frames of the respective video frame types will be explained.
[00025] The video frame bit amounts of the respective video frame types are defined as the I frame bit amount (BitsI), the P frame bit amount (BitsP), and the B frame bit amount (BitsB). The bit amounts of the respective video frame types are indices indicating the amounts of bits used for the respective video frame types (I, P, and B frames) when, for example, encoding a 10-second video content to be evaluated.
[00026] More specifically, when a 10-second video content is encoded at 30 fps (frames/second), the total number of video frames of an encoded video is 300, and there are 20 I-frames in every 300 frames. Assuming that the amount of bits needed to encode the 20 I-frames is 10,000 bits, the amount of bits in the I-frame is 500 bits/I-frame from 10,000 bits/20 I-frames.
[00027] Similarly, 80 P frames exist in all 300 frames. Assuming the amount of bits needed to encode the 80 P-frames is 8000 bits, the amount of bits in the P frame will be 100 bits/P frame from 8000 bits/80 P-frames. Also, there are 200 B-frames in all the 300 frames. Assuming that the amount of bits needed to encode the 200 B-frames is 10,000 bits, the amount of bits in the B-frame will be 50 bits/B-frame (10,000 bits/200 B-frames).
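The per-frame arithmetic above (and the bit rate derived in the next paragraph) can be reproduced with a short sketch; the dictionaries below simply restate the numbers of the worked example (10-second content at 30 fps, 300 frames):

```python
# Worked example from the text: 10-second content at 30 fps, i.e., 300 frames in total.
frames_per_type = {"I": 20, "P": 80, "B": 200}          # number of frames of each type
bits_per_type = {"I": 10_000, "P": 8_000, "B": 10_000}  # bits used to encode each type

# Bit amount per frame of each type: BitsI, BitsP, BitsB.
bits_per_frame = {t: bits_per_type[t] / frames_per_type[t] for t in frames_per_type}
print(bits_per_frame)   # {'I': 500.0, 'P': 100.0, 'B': 50.0}

# Bit rate of the whole content: 28,000 bits over 10 seconds.
bit_rate = sum(bits_per_type.values()) / 10   # 2800.0 b/s (2.8 kbps)
```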
[00028] At this time, the amount of 28,000 bits is needed to encode the 10 second video content (300 frames in total) so that the bit rate is 2800 b/s (2.8 kbps) from 28,000 bits/10 seconds. <Bit Amount Characteristics for Respective Video Frame Type>
[00029] The maximum frame bit quantity, minimum frame bit quantity and average frame bit quantity which indicate the bit quantity characteristics for the respective video frame types will be defined and explained.
[00030] With respect to the bit rate (BR) or the number of lost video frames (DF) in a plurality of video contents (e.g., a video subset of eight video contents), the maximum value of the frame bit quantity is defined as the maximum frame bit quantity, the minimum value as the minimum frame bit quantity, and the average value as the average frame bit quantity. In correspondence with the respective video frame types, these values are represented by the maximum I frame bit quantity (BitsImax), minimum I frame bit quantity (BitsImin), average I frame bit quantity (BitsIave), maximum P frame bit quantity (BitsPmax), minimum P frame bit quantity (BitsPmin), average P frame bit quantity (BitsPave), maximum B frame bit quantity (BitsBmax), minimum B frame bit quantity (BitsBmin), and average B frame bit quantity (BitsBave).
[00031] For example, assume that the I frame bit amounts of eight video contents encoded at the same bit rate are "450 bits", "460 bits", "470 bits", "480 bits", "490 bits", "500 bits", "510 bits", and "520 bits", respectively. In this case, since the maximum value of the I frame bit amount is "520 bits", the maximum I frame bit quantity is "520". Since the minimum value of the I frame bit amount is "450 bits", the minimum I frame bit quantity is "450". Since the average value of the I frame bit amount is "485 bits", the average I frame bit quantity is "485".
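These three characteristic values are simple statistics over the videos in the subset. A minimal sketch of the computation for the I frame example above (the list literal is just the eight values quoted in the text):

```python
# I frame bit amounts (bits per I frame) of the eight video contents above.
bits_i = [450, 460, 470, 480, 490, 500, 510, 520]

bits_i_max = max(bits_i)                  # 520 -> maximum I frame bit quantity (BitsImax)
bits_i_min = min(bits_i)                  # 450 -> minimum I frame bit quantity (BitsImin)
bits_i_ave = sum(bits_i) / len(bits_i)    # 485.0 -> average I frame bit quantity (BitsIave)
```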
[00032] As for the maximum, minimum, and average frame bit quantities of the P and B frames, the maximum values, minimum values, and average values of the frame bit quantities of the respective video frame types with respect to the bit rate (BR) or the number of lost video frames (DF) in a plurality of video contents are likewise defined as the maximum frame bit quantity, minimum frame bit quantity, and average frame bit quantity. <Bit Quantities of Respective Video Frame Types and Influence on Video Quality>
[00033] The influence of the bit amounts assigned to the respective video frame types on the encoded video quality will be explained with reference to the accompanying drawings.
[00034] Figures 9A to 9C are graphs in which the bit amounts of the respective video frame types (I, P, and B frames) of a video subjected to video quality estimation are plotted along the abscissa, and the video quality values of the respective video contents are plotted along the ordinate, when video contents of a predetermined number of seconds are encoded at the same bit rate (in this example, 10-second video contents encoded at 10 Mbps with 300 video frames).
[00035] As shown in Figures 9A to 9C, the relationship between the bit amounts of the respective video frame types and the video quality assessment value shows that, when compared at the same bit rate, a video content that has a small I frame bit amount exhibits a low video quality assessment value, and a video content that has a large I frame bit amount exhibits a high video quality assessment value. Comparison at the same bit rate for the P and B frame bit amounts reveals that video contents that have small P and B frame bit amounts exhibit high video quality assessment values, and video contents that have large P and B frame bit amounts exhibit low video quality assessment values.
[00036] Even for videos that have the same bit rate, the bit amounts of the respective video frame types affect the video quality. <Relationship between Bit Quantity Characteristics of the Respective Video Frame Types and Video Quality>
[00037] Figures 10A and 10B are graphs that conceptually show the relationship between the bit rate of each video in a video subset and the frame bit amounts of the respective video frame types. The relationship between the bit rate and the P frame bit amount shown in Figure 10B is similar to that between the bit rate and the B frame bit amount, so the relationship between the bit rate and the B frame bit amount is not illustrated.
[00038] As shown in Figures 10A and 10B, depending on the video, the amount of frame bits has different characteristics even in videos that have the same bit rate. More specifically, even if the videos have the same bitrate, the relationship between the maximum frame bit quantity, the minimum frame bit quantity and the average frame bit quantity differs between the respective video frame types.
[00039] The relationship between the bit rate of a video and the frame bit amounts of the respective video frame types affects the video quality. Video quality differs even among videos that have the same bit rate.
[00040] Figure 11 is a graph that conceptually explains the aforementioned influence of the bit amounts of the respective video frame types on video quality.
[00041] Figure 11 shows the relationship between the bit rate and the video quality value. In Figure 11, circles, triangles, and squares respectively represent the maximum video quality value (Vqmax), which is the maximum among the video quality values of the videos that have the same bit rate in a video subset, the minimum video quality value (Vqmin), which is the minimum, and the average video quality value (Vqave), which is obtained by dividing the sum of the video quality values by the number of videos.
[00042] As shown in Figure 11, the video quality value has a difference between the maximum video quality value and the minimum video quality value even in videos that have the same bitrate. That is, the video quality value of a video to be estimated does not always match the average video quality value of a video that has the same bit rate as the video to be estimated. The difference between the video quality value and the average video quality value depends on the bit amounts assigned to the respective video frame types of the video to be estimated. This difference between the video quality value and the average video quality value is defined as a difference video quality value (dVq).
[00043] Therefore, the difference video quality value (dVq) arises even among videos that have the same bit rate, depending on the relationship between the bit rate of a target video and the characteristics of the bit amounts assigned to the respective video frame types. Prior Art Literatures Patent Literature
[00044] Patent Literature 1: Japanese Patent Laid-Open No. 2006-033722 Non-Patent Literatures
[00045] Non-patent Literature 1: ITU-T Recommendation P.910
[00046] Non-patent Literature 2: K. Yamagishi and T. Hayashi, "Parametric Packet-Layer Model for Monitoring Video Quality of IPTV Services", IEEE ICC 2008, CQ04-3, May 2008
[00047] Non-patent Literature 3: ITU-T Recommendation J.247
[00048] Non-patent Literature 4: DVB Document A001 Rev. 7 (Specification for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream)
[00049] Non-patent Literature 5: ANSI/SCTE 128 2008 (AVC Video System and Transport Constraints for Cable Television)
[00050] Non-patent Literature 6: Ushiki and Hayashi, "Computational Cost Reduction of Picture Type Estimation Method using TS Header Information", IEICE Technical Report, CQ2008-32, September 2008 Invention Summary Problem to be Solved by the Invention
[00051] However, the techniques described in patent literature 1 and non-patent literature 2 estimate video quality based on bit rate degradation or packet loss information of a video. These techniques do not estimate video quality in consideration of the bit amounts assigned to the respective video frame types.
[00052] The video quality value is affected by the bit amounts of the respective video frame types (I, P, and B frames). Therefore, as the first problem, the conventional objective quality estimation methods described in Patent Literature 1 and Non-patent Literature 2 can estimate the average video quality based on bit rate degradation and packet loss information, but cannot estimate the video quality value of each video content having different characteristics of the bit amounts assigned to the respective video frame types.
[00053] The technique described in the non-patent literature 3 estimates a video quality evaluation value from the video signals (pixel values) viewed by a user. This technique can derive a video quality evaluation value for each video content, however, it uses source video signals free from any encoding degradation or packet loss. Therefore, as the second problem, it is difficult to employ this technique in an environment where it is difficult to obtain source video signals, especially in a user's home with video communication services.
[00054] When estimating a video quality evaluation value that uses video signals, arithmetic processing needs to be performed for all pixels that make up a video frame. Performing arithmetic processing for many pixels greatly increases the cost of arithmetic processing.
[00055] The present invention has been made to solve the first and second problems, and has as its object to provide a video quality estimating apparatus and video quality estimation method capable of estimating the video quality values of respective video contents, even those having the same bit rate, in consideration of the bit amounts of the respective video frame types, while suppressing the arithmetic processing cost. Means to Solve the Problem
[00056] To achieve the above object, according to the present invention, there is provided a video quality estimating apparatus comprising a packet analysis unit that derives a bit rate of an input encoded video packet and derives an amount of bits of the encoded video for at least one video frame type out of a plurality of video frame types, a frame characteristic estimation unit that derives a frame characteristic representing a characteristic of the bit amount of each video frame type from the bit rate derived by the packet analysis unit, and an encoding quality estimation unit that derives a video quality value based on the bit rate of the encoded video packet and the bit amount of each video frame type derived by the packet analysis unit, and on the frame characteristic of each video frame type derived by the frame characteristic estimation unit. Effects of the Invention
[00057] According to the present invention, the video quality value of each video of a video content in video communication services can be estimated based on a bit rate extracted from a packet, and on the amounts of bits derived for the respective video frame types after specifying the video frame types from the packet header information.
[00058] Furthermore, according to the present invention, the video quality value of each video in video communication services can be estimated based on the header information of an input packet, in consideration of the bit rate extracted from the input packet, the number of lost video frames, and the bit amounts derived for the respective video frame types after specifying the video frame types.
[00059] According to the present invention, the video communication service provider can easily monitor the video quality value of each video in video communication services that is actually viewed by the user. Therefore, the video communication service provider can easily determine whether a service provided maintains a predetermined or higher quality for the user.
[00060] The video communication service provider can grasp and manage the actual quality of a provided service in more detail than with the conventional technique.
[00061] According to the present invention, when deriving the video quality value of each video, no arithmetic processing needs to be performed for all the pixels that make up the video frame of the video. In other words, the video quality value can be derived by performing arithmetic processing for packet header information which is a relatively small amount of information. This can suppress arithmetic processing cost. Brief Description of Drawings
[00062] Figure 1 is a block diagram showing the arrangement of a video quality estimating apparatus, according to the first embodiment of the present invention;
[00063] Figure 2 is a block diagram showing the arrangement of a packet analysis unit in the video quality estimating apparatus, according to the first embodiment of the present invention;
[00064] Figure 3 is a block diagram showing the arrangement of a frame feature estimating unit in the video quality estimating apparatus, according to the first embodiment of the present invention;
[00065] Figure 4 is a block diagram showing the arrangement of a coding quality estimating unit in the video quality estimating apparatus, according to the first embodiment of the present invention;
[00066] Figure 5 is a flowchart showing the operation of the video quality estimating apparatus, according to the first embodiment of the present invention;
[00067] Figure 6 is a table that exemplifies a quality characteristic coefficient database stored in the video quality estimating apparatus, according to the first embodiment of the present invention;
[00068] Figure 7 is a table that conceptually explains the extraction of a video frame start position;
[00069] Figure 8 is a view that conceptually explains the relationship between a video set and a video subset;
[00070] Figure 9A is a graph that conceptually explains the relationship between the amount of I-frame bits and the video quality value;
[00071] Figure 9B is a graph that conceptually explains the relationship between the amount of bits in the P frame and the video quality value;
[00072] Figure 9C is a graph that conceptually explains the relationship between the amount of bits of frame B and the value of video quality;
[00073] Figure 10A is a graph that conceptually explains the relationship between the bit rate and the average bit quantity of the I frame, the maximum bit quantity of the I frame and the minimum bit quantity of the I frame;
[00074] Figure 10B is a graph that conceptually explains the relationship between the bitrate and the average bit quantity of the P frame, the maximum bit quantity of the P frame and the minimum bit quantity of the P frame;
[00075] Figure 11 is a graph that conceptually explains the relationship between bit rate and average video quality value, maximum video quality value and minimum video quality value;
[00076] Figure 12 is a block diagram showing the arrangement of a video quality estimating apparatus, according to the second embodiment of the present invention;
[00077] Figure 13 is a block diagram showing the arrangement of a packet analysis unit in the video quality estimating apparatus, according to the second embodiment of the present invention;
[00078] Figure 14 is a block diagram showing the arrangement of a frame feature estimating unit in the video quality estimating apparatus, according to the second embodiment of the present invention;
[00079] Figure 15 is a block diagram showing the arrangement of a coding quality estimating unit in the video quality estimating apparatus, according to the second embodiment of the present invention;
[00080] Figure 16 is a block diagram showing the arrangement of a packet loss quality estimating unit in the video quality estimating apparatus, according to the second embodiment of the present invention;
[00081] Figure 17 is flowchart 1 showing the operation of the video quality estimating apparatus, according to the second embodiment of the present invention;
[00082] Figure 18 is flowchart 2 showing the operation of the video quality estimating apparatus, according to the second embodiment of the present invention;
[00083] Figure 19 is a table that exemplifies a quality characteristic coefficient database stored in the video quality estimating apparatus, according to the second embodiment of the present invention;
[00084] Figure 20 is a block diagram showing the arrangement of a video quality estimating apparatus, according to the third embodiment of the present invention;
[00085] Figure 21 is flowchart 1 showing the operation of the video quality estimating apparatus, according to the third embodiment of the present invention;
[00086] Figure 22 is flowchart 2 showing the operation of the video quality estimating apparatus, according to the third embodiment of the present invention;
[00087] Figure 23 is a table that exemplifies a quality characteristic coefficient database stored in the video quality estimating apparatus, according to the third embodiment of the present invention;
[00088] Figure 24 is a view that conceptually shows the GoP structure of an encoded video that explains, for the respective types of video frame, the propagation of degradation when a packet loss occurs in a video frame;
[00089] Figure 25A is a graph that conceptually explains the relationship between bit rate and I-frame bit quantities (average, maximum, and minimum);
[00090] Figure 25B is a graph that conceptually explains the relationship between the bit rate and the P-frame bit quantities (average, maximum and minimum);
[00091] Figure 25C is a graph that conceptually explains the relationship between the bit rate and the amounts of bits of frame B (average, maximum and minimum);
[00092] Figure 26 is a graph that conceptually explains the relationship between the bit rate and the average, maximum, and minimum encoded video quality assessment values;
[00093] Figure 27 is a graph that conceptually explains the relationship between the number of video frames lost and the average, maximum, and minimum video quality rating values;
[00094] Figure 28A is a graph that conceptually explains the relationship between the amount of I-frame bits and the video quality evaluation value when a packet loss occurs;
[00095] Figure 28B is a graph that conceptually explains the relationship between the amount of bits in the P frame and the video quality evaluation value when a packet loss occurs; and
[00096] Figure 28C is a graph that conceptually explains the relationship between the amount of bits in the B frame and the video quality evaluation value when a packet loss occurs. Best Way to Carry Out the Invention
[00097] Embodiments of the present invention will now be described with reference to the attached drawings. First Embodiment
[00098] A video quality estimating apparatus according to the first embodiment of the present invention implements objective video quality assessment by deriving a video quality value that quantitatively represents video quality, using the bit rate and the bit amounts of the respective video frame types that affect the video quality in video communication.
[00099] For example, in the embodiment, to implement objective video quality assessment in video communication such as an IPTV service, video distribution service, or videophone service provided over an IP network such as the Internet, the video quality estimating apparatus analyzes an encoded video packet contained in a packet, and derives a video quality value that quantitatively represents the amount of resource that affects the video quality in these video communication services.
[000100] As shown in Figure 1, a video quality estimating apparatus 1 according to the embodiment includes a packet analysis unit 10, a frame characteristic estimating unit 11, and an encoding quality estimating unit 12.
[000101] The packet analysis unit 10 includes a bit rate calculation unit 10-1 that derives the bit rate of an encoded video packet contained in an input packet, and a bit quantity calculation unit 10-2 that derives the bit amounts of the respective video frame types. The packet analysis unit 10 outputs the bit rate derived by the bit rate calculation unit 10-1 and the bit quantities of the respective video frame types derived by the bit quantity calculation unit 10-2.
[000102] The frame characteristic estimation unit 11 receives the bit rate output from the packet analysis unit 10, and derives and outputs frame characteristics representing the bit amount characteristics of the respective video frame types.
[000103] The encoding quality estimation unit 12 derives a video quality value based on the bit rate and the bit amounts of the respective video frame types output from the packet analysis unit 10, and on the frame characteristics of the respective video frame types output from the frame characteristic estimation unit 11.
[000104] The constituent components of the video quality estimating apparatus 1 according to the embodiment will be explained in detail with reference to Figures 2 to 4.
[000105] As shown in Figure 2, the packet analysis unit 10 includes a video packet specification unit 101, an encoding quantity calculation unit 102, a frame delimiting position extraction unit 103, a specific frame start position extraction unit 104, a video frame bit quantity calculation unit 105, and a video frame type bit quantity calculation unit 106. The bit rate calculation unit 10-1 is formed from the video packet specification unit 101 and the encoding quantity calculation unit 102. The bit quantity calculation unit 10-2 is formed from the video packet specification unit 101, the frame delimiting position extraction unit 103, the specific frame start position extraction unit 104, the video frame bit quantity calculation unit 105, and the video frame type bit quantity calculation unit 106.
[000106] The video packet specification unit 101 specifies an arbitrary encoded video packet contained in an input packet based on a unique packet ID (PID) for the encoded video packet.
[000107] An encoded video packet can be specified using, for example, the payload type in the RTP (Real-time Transport Protocol) header, the PID in a TS (Transport Stream) packet, or the stream ID in a PES (Packetized Elementary Stream) header. The video packet specification unit 101 can also have a function of extracting the RTP sequence number in the RTP packet and the CC (Continuity Counter: a 4-bit counter) in the TS packet.
[000108] The encoding quantity calculation unit 102 derives a bit rate, represented by the amount of bits per unit time of the encoded video packets specified by the video packet specification unit 101.
[000109] For example, the encoded data of a video or audio stream is identified by the PID described in the TS packet header. The encoding quantity calculation unit 102 counts the TS packets having the PID of the video data, and multiplies the count by the data length (generally 188 bytes) of a TS packet to calculate the amount of bits per unit time, thereby deriving the bit rate (BR).
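A minimal sketch of this bit rate derivation, assuming the captured TS stream of the analysis section is available as a byte string and the video PID and the section duration are already known (the function and argument names below are placeholders):

```python
def derive_bit_rate(ts_bytes: bytes, video_pid: int, duration_s: float) -> float:
    """Count the TS packets whose PID matches the video PID and convert the
    total length (188 bytes per TS packet) into bits per second."""
    TS_PACKET_LEN = 188
    count = 0
    for offset in range(0, len(ts_bytes) - TS_PACKET_LEN + 1, TS_PACKET_LEN):
        packet = ts_bytes[offset:offset + TS_PACKET_LEN]
        if packet[0] != 0x47:                        # TS sync byte check
            continue
        pid = ((packet[1] & 0x1F) << 8) | packet[2]  # 13-bit PID field
        if pid == video_pid:
            count += 1
    return count * TS_PACKET_LEN * 8 / duration_s    # bit rate BR in bits/s
```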
[000110] The frame delimiting position extracting unit 103 derives information indicating the delimiter of a video frame from an encoded video packet specified by the video packet specification unit 101.
[000111] For example, a packet contains information such as an IP header, UDP (User Datagram Protocol) header, RTP header, TS header, PES header, and ES (Elementary Stream). Of these types of information, Payload_Unit_Start_Indicator (referred to as "PUSI") in the TS header is an indicator that indicates the presence/absence of a PES header. When one PES contains one frame (a video frame is often stored in one PES in the video encoding used in TV broadcasting), the PUSI serves as information indicating the start of a video frame. By extracting such information contained in a packet, the frame delimiting position extraction unit 103 derives information indicating the delimiter of a video frame.
[000112] The video frame start position derivation operation will be explained in detail with reference to Figure 7, which is a table that conceptually explains the extraction of a video frame start position.
[000113] As shown in Figure 7 (the left column represents the RTP sequence number, and the second to eighth columns from the left represent the CC numbers of the TS packets), a TS packet containing the PUSI indicates the start position of a frame. As the information indicating the delimiting position of a frame, it suffices to store the RTP sequence number at the start position of the frame, the ordinal number of the packet counted from the beginning of an analysis section, or the ordinal number of the frame of a packet that contains the PUSI. As the frame counting method, the PUSIs in the analysis section are counted.
[000114] When the PES header is usable, the PTS (Presentation Time Stamp) or DTS (Decoding Time Stamp) serves as the information indicating the delimiting position of a frame, and the same processing as that for the PUSI is executed. Similarly, when the ES is usable, it stores the frame information, so the same processing as that for the PUSI is performed.
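As an illustration, the PUSI-based frame delimiting described above can be sketched as follows; it assumes the same byte-string input as the earlier bit rate sketch and stores, for each frame start, the ordinal number of the packet counted from the beginning of the analysis section:

```python
def extract_frame_start_positions(ts_bytes: bytes, video_pid: int) -> list[int]:
    """Scan the TS headers of the video PID and record the ordinal numbers of
    the packets whose Payload_Unit_Start_Indicator (PUSI) is set; with one
    video frame per PES, each such packet marks a frame delimiting position."""
    TS_PACKET_LEN = 188
    starts = []
    for index, offset in enumerate(range(0, len(ts_bytes) - TS_PACKET_LEN + 1, TS_PACKET_LEN)):
        packet = ts_bytes[offset:offset + TS_PACKET_LEN]
        if packet[0] != 0x47:                        # not a valid TS packet
            continue
        pid = ((packet[1] & 0x1F) << 8) | packet[2]
        pusi = (packet[1] & 0x40) != 0               # payload_unit_start_indicator bit
        if pid == video_pid and pusi:
            starts.append(index)                     # ordinal number in the analysis section
    return starts
```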
[000115] The specific frame start position extracting unit 104 derives information indicating the start position of a specific video frame from an encoded video packet specified by the video packet specification unit 101.
[000116] The specific frame start position extraction unit 104 in the video quality estimating apparatus according to the embodiment derives information indicating the start positions of the "I frame", "P frame", and "B frame" when ES information is usable, and information indicating the start position of the "I frame" when no ES information is usable because of encryption.
[000117] When ES information is usable, a bit indicating the frame type exists in the H.264 or MPEG2 bit string (for example, Primary_pic_type or Slice_type in H.264). The frame delimiters of the "I frame", "P frame", and "B frame" can be identified by the RTP sequence number of the packet containing this information, or by the ordinal number of the packet counted from the beginning of an analysis section.
[000118] When no ES information is usable, the information indicating the start position of an I frame is the RAI (Random_Access_Indicator) or ESPI (Elementary_Stream_Priority_Indicator) in the TS header, which serves as an indicator of the start position of an I frame or IDR (Instantaneous Decoding Refresh) frame (see Non-patent Literatures 4 and 5).
[000119] Since RAI or ESPI serves as information indicating the starting position of an I frame or IDR frame, the delimiter of an I frame can be discriminated from those of other frames.
[000120] Even if neither the RAI nor the ESPI indicates the start position of an I frame or IDR frame, the specific frame start position extraction unit 104 specifies the position of an I frame by calculating the amount of data of each frame using the PUSI, which represents a frame start position extracted by the frame delimiting position extraction unit 103.
[000121] More specifically, the amount of information of an I frame is larger than that of the other video frames. Based on this basic characteristic, video frames that have large amounts of data, out of the video frames in a packet, are specified as I frames, in consideration of the GoP length (the number of frames between I frames).
[000122] For example, when the number of video frames in a packet is 300 and the GoP length is 15, the number of I frames is 20. Thus, the 20 video frames each having a large amount of data, out of the video frames in a packet, can be specified as I frames.
[000123] To indicate the position of a specified I frame, an RTP sequence number at the start position of a frame or the ordinal number of a packet counted from the start of an analysis section is stored.
[000124] If no ES information is usable and the video frame type does not change dynamically, it is also possible to acquire the start position of an I frame based on the RAI or ESPI, and to determine the remaining frames as "P frames" and "B frames" by using the start position of the I frame as the origin in a standard GoP structure (e.g., M = 3 and N = 15).
[000125] The bit quantities of "I frame", "P frame" and "B frame" generally have a ratio of (BitsI) > (BitsP) > (BitsB). In this way, the type of video frame can be determined in order from a frame that has a large amount of bits.
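A minimal sketch of this frame type determination when no ES information is usable. It assumes the per-frame bit amounts and the GoP length are already known, takes the frames with the largest bit amounts as I frames (their number follows from the GoP length), and labels the remaining frames P or B from an assumed standard GoP pattern (M = 3) anchored at each I frame:

```python
def classify_frame_types(frame_bits: list[int], gop_length: int, m: int = 3) -> list[str]:
    """Assign 'I', 'P', or 'B' to each frame from its bit amount alone.
    Frames ranked highest by bit amount are taken as I frames; the rest are
    labelled from an assumed GoP pattern I B B P B B ... (M = 3)."""
    n_frames = len(frame_bits)
    n_i_frames = max(1, n_frames // gop_length)
    ranked = sorted(range(n_frames), key=lambda k: frame_bits[k], reverse=True)
    i_positions = set(ranked[:n_i_frames])       # largest frames -> I frames
    types, since_i = [], 0
    for k in range(n_frames):
        if k in i_positions:
            types.append("I")
            since_i = 0
        else:
            since_i += 1
            types.append("P" if since_i % m == 0 else "B")
    return types
```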
[000126] The video frame bit quantity calculation unit 105 counts the TS packets having the video data PID between the frame delimiting positions extracted by the frame delimiting position extraction unit 103. The video frame bit quantity calculation unit 105 multiplies the count by the data length (generally 188 bytes) of a TS packet, thereby deriving the bit amount of each video frame. Also, the video frame bit quantity calculation unit 105 stores, in correspondence with the bit amount of each video frame, the frame delimiting position (information such as the RTP sequence number at the start position of the frame, the ordinal number of the packet counted from the beginning of an analysis section, or the ordinal number of the frame of a packet containing the PUSI) extracted by the frame delimiting position extraction unit 103.
[000127] The video frame type bit quantity calculation unit 106 derives the I frame bit quantity (BitsI), P frame bit quantity (BitsP), and B frame bit quantity (BitsB) from the frame delimiting position information and the bit quantities of the respective video frames calculated by the video frame bit quantity calculation unit 105, and from the positions of the respective video frames specified by the specific frame start position extraction unit 104.
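Combining the outputs of the sketches above, the per-type bit quantities can be obtained by grouping the per-frame bit amounts by frame type; the sketch below reports the average bits per frame of each type that occurs in the stream (BitsI, BitsP, BitsB):

```python
from collections import defaultdict

def bits_per_frame_type(frame_bits: list[int], frame_types: list[str]) -> dict[str, float]:
    """Aggregate per-frame bit amounts by video frame type and return the
    average bit amount per frame of each type; types that do not occur in
    the encoded video (e.g., an I-frame-only stream) are simply absent."""
    totals, counts = defaultdict(int), defaultdict(int)
    for bits, frame_type in zip(frame_bits, frame_types):
        totals[frame_type] += bits
        counts[frame_type] += 1
    return {t: totals[t] / counts[t] for t in totals}
```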
[000128] The arrangement of video frame types of an encoded video in the GoP structure changes depending on the encoding situation. For example, an encoded video can be made up of only I frames, or of I and P frames. For this reason, the video frame type bit quantity calculation unit 106 derives the bit quantity of each video frame for at least one video frame type.
[000129] As shown in Figure 3, the frame characteristic estimating unit 11 of the video quality estimating apparatus 1 according to the embodiment includes an average bit quantity estimating unit 11-1 that receives the bit rate output from the packet analysis unit 10 and derives the average bit quantities (Bits(I,P,B)ave) serving as characteristics (frame characteristics) relating to the bit quantities of the respective video frame types, a maximum bit quantity estimating unit 11-2 that derives the maximum bit quantities (Bits(I,P,B)max) of the respective video frame types, and a minimum bit quantity estimating unit 11-3 that derives the minimum bit quantities (Bits(I,P,B)min) of the respective video frame types.
[000130] Note that the arrangement of video frame types contained in a video to be estimated depends on the encoding situation. A video can be made up of just I-frames, I- and P-frames, or all frame types of I, P, and B-frames. The layout changes depending on the video encoding situation. In this way, the frame characteristic estimating unit 11 derives the frame characteristics of at least one type of video frame, i.e. the frame characteristics of some or all types of video frame.
[000131] For example, to derive the frame characteristics of I frames, that is, the average I frame bit quantity (BitsIave), the maximum I frame bit quantity (BitsImax), and the minimum I frame bit quantity (BitsImin), the average bit quantity estimating unit 11-1, maximum bit quantity estimating unit 11-2, and minimum bit quantity estimating unit 11-3 receive the bit rate derived by the encoding quantity calculation unit 102, and derive the frame characteristics using characteristics based on the relationship between the bit rate and the I frame bit quantity, as shown in Figure 10A.
[000132] Similarly, to derive the frame characteristics of P and B frames, the average bit quantity estimating unit 11-1, maximum bit quantity estimating unit 11-2, and minimum bit quantity estimating unit 11-3 receive the bit rate derived by the encoding quantity calculation unit 102, and derive the frame characteristics using characteristics based on the relationship between the bit rate and the P frame bit quantity, as shown in Figure 10B (the relationship between the bit rate and the B frame bit quantity is similar).
[000133] As shown in Figure 4, the encoding quality estimating unit 12 of the video quality estimating apparatus 1 according to the embodiment includes an average video quality estimating unit 121, a maximum video quality estimating unit 122, a minimum video quality estimating unit 123, a difference video quality estimating unit 124, and a video quality estimating unit 125.
[000134] The average video quality estimating unit 121, maximum video quality estimating unit 122, and minimum video quality estimating unit 123 receive the bit rate output from the packet analysis unit 10, and derive an average video quality value (Vqave), a maximum video quality value (Vqmax), and a minimum video quality value (Vqmin) using characteristics based on the relationship between the bit rate and the video quality value (Vq), as shown in Figure 11.
[000135] The difference video quality estimating unit 124 receives the average video quality value (Vqave), maximum video quality value (Vqmax), and minimum video quality value (Vqmin) derived by the average video quality estimating unit 121, maximum video quality estimating unit 122, and minimum video quality estimating unit 123, the bit amounts (BitsI, BitsP, and BitsB) of the respective video frame types derived by the bit quantity calculation unit 10-2, and the frame characteristics (BitsIave, BitsImax, BitsImin, BitsPave, BitsPmax, BitsPmin, BitsBave, BitsBmax, and BitsBmin) derived by the frame characteristic estimation unit 11. The difference video quality estimating unit 124 then derives a difference video quality value (dVq), which is the difference between the video quality value of a video subjected to video quality estimation and the average video quality value.
[000136] A method to derive the difference video quality value (dVq) will be explained.
[000137] For example, when the value indicated by a black star in Figure 11 is the video quality value (Vq) of a video to be estimated, the frame bit amounts of the I, P, and B frames to be estimated can be derived from characteristics based on the relationships between the frame bit amount and the video quality value at the same bit rate, as shown in Figures 9A to 9C. The differences from the average bit quantities of the respective frames can be derived from characteristics based on the relationships between the bit rate and the frame bit quantity, as shown in Figures 10A and 10B (black stars in Figures 10A and 10B). The difference video quality value is calculated using these characteristics.
[000138] More specifically, if the I frame bit amount (BitsI) equals the average I frame bit amount (BitsIave), the P frame bit amount (BitsP) equals the average P frame bit amount (BitsPave), and the B frame bit amount (BitsB) equals the average B frame bit amount (BitsBave), the video quality value (Vq) of the video to be estimated equals the average video quality value (Vqave), and no difference video quality value is generated.
[000139] If the I frame bit amount (BitsI) is larger than the average I frame bit amount (BitsIave), the video quality value (Vq) to be estimated becomes higher than the average video quality value (Vqave), according to the characteristic shown in Figure 9A. Conversely, if the I frame bit amount (BitsI) is smaller than the average I frame bit amount (BitsIave), the video quality value (Vq) to be estimated becomes lower than the average video quality value (Vqave).
[000140] Therefore, when the I frame bit amount (BitsI) is larger than the average I frame bit amount (BitsIave), the difference video quality value (dVq) becomes proportional to (Vqmax - Vqave) × (BitsI - BitsIave)/(BitsImax - BitsIave). When the I frame bit amount (BitsI) is smaller than the average I frame bit amount (BitsIave), the difference video quality value (dVq) becomes proportional to (Vqmin - Vqave) × (BitsI - BitsIave)/(BitsImin - BitsIave).
[000141] If the amount of frame bits (BitsP or BitsB) of a P or B frame is greater than the amount of bits of the average P or B frame (BitsPave or BitsBave), the video quality value (Vq) to be estimated becomes less than the average video quality value (Vqave) according to the characteristic shown in Figure 9B or 9C. If the amount of bits of the P or B frame (BitsP or BitsB) is less than the amount of bits of the average P or B frame (BitsPave or BitsBave), the video quality value (Vq) to be estimated becomes greater than the average video quality value (Vqave).
[000142] Taking a P frame as an example, if the P frame bit amount (BitsP) is larger than the average P frame bit amount (BitsPave), the difference video quality value (dVq) becomes proportional to (Vqmin - Vqave) × (BitsP - BitsPave)/(BitsPmin - BitsPave). If the P frame bit amount (BitsP) is smaller than the average P frame bit amount (BitsPave), the difference video quality value (dVq) becomes proportional to (Vqmax - Vqave) × (BitsP - BitsPave)/(BitsPmax - BitsPave).
[000143] Note that the difference video quality value (dVq) characteristic of a B frame is the same as the above described characteristic of a P frame, and a description of this will not be repeated.
[000144] Using these characteristics of the respective video frame types relating to the difference video quality value (dVq), the difference video quality estimating unit 124 receives the bit rate, the bit amounts of the respective video frame types, and the frame characteristics of the respective video frame types, and derives the difference video quality value (dVq).
[000145] The video quality estimating unit 125 derives the video quality value (Vq) of a video to be estimated by adding the average video quality value (Vqave) derived by the average video quality estimating unit 121 and the difference video quality value (dVq) derived by the difference video quality estimating unit 124.
[000146] Note that the video quality estimating apparatus 1 according to the embodiment is implemented by installing computer programs on a computer that includes a CPU (Central Processing Unit), a memory, and interfaces. The various functions of the video quality estimating apparatus 1 are implemented through cooperation between the hardware resources of the computer and the computer programs (software).
[000147] The operation of the video quality estimating apparatus 1 according to the embodiment will be explained with reference to Figure 5.
[000148] As shown in Figure 5, the packet analysis unit 10 of the video quality estimating apparatus 1 captures an input packet (S101).
[000149] The packet analysis unit 10 derives the bit rate (BR) of the encoded video packets and the bit amounts (BitsI, BitsP, and BitsB) of the respective video frame types from the captured packet (S102).
[000150] The average bit quantity estimating unit 11-1 of the frame characteristic estimating unit 11 receives the bit rate derived by the packet analysis unit 10, derives an average I frame bit quantity (S103), and outputs it to the difference video quality estimating unit 124.
[000151] The average bit quantity estimating unit 11-1 can derive the average I frame bit quantity using equation (1), which represents a characteristic in which the average I frame bit quantity increases as the bit rate increases: (BitsIave) = v1 + v2 × exp(-BR/v3) ...(1) where (BitsIave) is the average I frame bit quantity, BR is the bit rate, and v1, v2, and v3 are characteristic coefficients.
[000152] After the average bit quantity estimation unit 11-1 derives the average bit quantity of the I frame, the maximum bit quantity estimation unit 11-2 receives the bit rate derived by the packet analysis unit 10, derives a maximum bit amount of the I frame (S104), and outputs the same to difference video quality estimating unit 124.
[000153] The maximum bit quantity estimating unit 11-2 can derive the maximum I frame bit quantity using equation (2), which represents a characteristic in which the maximum I frame bit quantity increases as the bit rate increases: (BitsImax) = v4 + v5 × exp(-BR/v6) ...(2) where (BitsImax) is the maximum I frame bit quantity, BR is the bit rate, and v4, v5, and v6 are characteristic coefficients.
[000154] After the maximum bit quantity estimation unit 11-2 derives the maximum bit quantity of the I frame, the minimum bit quantity estimation unit 11-3 receives the bit rate derived by the packet analysis unit 10, derives a minimum amount of bits from the I frame (S105), and outputs the same to difference video quality estimating unit 124.
[000155] The minimum bit quantity estimating unit 11-3 can derive the minimum I frame bit quantity using equation (3), which represents a characteristic in which the minimum I frame bit quantity increases as the bit rate increases: (BitsImin) = v7 + v8 × exp(-BR/v9) ...(3) where (BitsImin) is the minimum I frame bit quantity, BR is the bit rate, and v7, v8, and v9 are characteristic coefficients.
[000156] After deriving the frame characteristics of the I frames, the average bit quantity estimating unit 11-1, maximum bit quantity estimating unit 11-2, and minimum bit quantity estimating unit 11-3 of the frame characteristic estimating unit 11 derive the frame characteristics of the P and B frames (S106 to S111), and output them to the difference video quality estimating unit 124.
[000157] The average bit quantity estimating unit 11-1, maximum bit quantity estimating unit 11-2, and minimum bit quantity estimating unit 11-3 can derive the frame characteristics of the P and B frames using equations (4) to (9), each representing a relationship in which the bit quantity of each frame characteristic increases as the bit rate increases:
(BitsPave) = v10 + v11 × (BR) ...(4)
(BitsPmax) = v12 + v13 × (BR) ...(5)
(BitsPmin) = v14 + v15 × (BR) ...(6)
(BitsBave) = v16 + v17 × (BR) ...(7)
(BitsBmax) = v18 + v19 × (BR) ...(8)
(BitsBmin) = v20 + v21 × (BR) ...(9)
where (BitsPave) is the average P frame bit quantity, (BitsPmax) is the maximum P frame bit quantity, (BitsPmin) is the minimum P frame bit quantity, (BitsBave) is the average B frame bit quantity, (BitsBmax) is the maximum B frame bit quantity, (BitsBmin) is the minimum B frame bit quantity, (BR) is the bit rate, and v10 to v21 are characteristic coefficients.
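A minimal sketch of steps S103 to S111, evaluating equations (1) to (9) for a given bit rate. The coefficient dictionary v (keyed "v1" to "v21") is a placeholder; in practice the coefficients are selected from the quality characteristic coefficient database described later:

```python
import math

def derive_frame_characteristics(br: float, v: dict[str, float]) -> dict[str, float]:
    """Equations (1)-(9): I frame characteristics follow an exponential in the
    bit rate BR, while P and B frame characteristics follow a linear relation."""
    return {
        "BitsIave": v["v1"] + v["v2"] * math.exp(-br / v["v3"]),     # equation (1)
        "BitsImax": v["v4"] + v["v5"] * math.exp(-br / v["v6"]),     # equation (2)
        "BitsImin": v["v7"] + v["v8"] * math.exp(-br / v["v9"]),     # equation (3)
        "BitsPave": v["v10"] + v["v11"] * br,                        # equation (4)
        "BitsPmax": v["v12"] + v["v13"] * br,                        # equation (5)
        "BitsPmin": v["v14"] + v["v15"] * br,                        # equation (6)
        "BitsBave": v["v16"] + v["v17"] * br,                        # equation (7)
        "BitsBmax": v["v18"] + v["v19"] * br,                        # equation (8)
        "BitsBmin": v["v20"] + v["v21"] * br,                        # equation (9)
    }
```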
[000158] After the frame characteristic estimating unit 11 derives the frame characteristics of the respective video frame types, the average video quality estimating unit 121 of the encoding quality estimating unit 12 receives the bit rate derived by the packet analysis unit 10, derives an average video quality value (S112), and outputs the same for both the difference video quality estimating unit 124 and the video quality estimating unit 125.
[000159] The average video quality estimating unit 121 can derive the average video quality value using equation (10), which represents a characteristic in which the average video quality value increases as the bit rate increases: (Vqave) = v22 + v23 × exp(-BR/v24) ...(10) where (Vqave) is the average video quality value, BR is the bit rate, and v22, v23, and v24 are characteristic coefficients.
[000160] After the average video quality estimating unit 121 derives the average video quality value, the maximum video quality estimating unit 122 receives the bit rate derived by the packet analysis unit 10, derives a value of maximum video quality (S113), and outputs the same for difference video quality estimator unit 124.
[000161] The maximum video quality estimating unit 122 can derive the maximum video quality value using equation (11), which represents a characteristic in which the maximum video quality value increases as the bit rate increases: (Vqmax) = v25 + v26 × exp(-BR/v27) ...(11) where (Vqmax) is the maximum video quality value, BR is the bit rate, and v25, v26, and v27 are characteristic coefficients.
[000162] After the maximum video quality estimator unit 122 derives the maximum video quality value, the minimum video quality estimator unit 123 receives the bit rate derived by the packet analysis unit 10, derives a value of minimum video quality (S114), and outputs the same for difference video quality estimator 124.
[000163] The minimum video quality estimating unit 123 can derive the minimum video quality value using equation (12), which represents a characteristic in which the minimum video quality value increases as the bit rate increases: (Vqmin) = v28 + v29 × exp(-BR/v30) ...(12) where (Vqmin) is the minimum video quality value, BR is the bit rate, and v28, v29, and v30 are characteristic coefficients.
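A minimal sketch of steps S112 to S114, evaluating equations (10) to (12) for the same bit rate; as above, the coefficient dictionary is a placeholder for values selected from the coefficient database:

```python
import math

def derive_quality_values(br: float, v: dict[str, float]) -> tuple[float, float, float]:
    """Equations (10)-(12): average, maximum, and minimum video quality values
    as exponential functions of the bit rate BR."""
    vq_ave = v["v22"] + v["v23"] * math.exp(-br / v["v24"])   # equation (10)
    vq_max = v["v25"] + v["v26"] * math.exp(-br / v["v27"])   # equation (11)
    vq_min = v["v28"] + v["v29"] * math.exp(-br / v["v30"])   # equation (12)
    return vq_ave, vq_max, vq_min
```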
[000164] After the minimum video quality estimating unit 123 derives the minimum video quality value, the difference video quality estimating unit 124 receives the average video quality value (Vqave) derived by the average video quality estimating unit 121, the maximum video quality value (Vqmax) derived by the maximum video quality estimating unit 122, the minimum video quality value (Vqmin) derived by the minimum video quality estimating unit 123, the frame characteristics (BitsIave, BitsImax, BitsImin, BitsPave, BitsPmax, BitsPmin, BitsBave, BitsBmax, and BitsBmin) of the respective video frame types derived by the frame characteristic estimating unit 11, and the bit quantities (BitsI, BitsP, and BitsB) of the respective video frame types derived by the bit quantity calculation unit of the packet analysis unit 10. The difference video quality estimating unit 124 then derives a difference video quality value (S115), and outputs it to the video quality estimating unit 125.
[000165] Difference video quality estimator unit 124 can derive a difference video quality value (dVq) using equation (13) representing the characteristic of difference video quality value (dVq) based on relationship between frame bit amounts and average bit amounts of the respective video frame types.
[000166] Note that the dVq difference video quality value is not generated when the frame bit quantities and average bit quantities of the respective video frame types are equal to each other. (dVq) = v31 + v32 □ X + v33 □ Y + v34 □ Z...(13) where (dVq) is the difference video quality value, X is the degree of influence of the amount of bits of the I frame on the difference video quality value, Y is the degree of influence of the P frame bit amount on the difference video quality value, Z is the degree of influence of the B frame bit amount on the video quality value of difference, and v31, v32, v33, and v34 are the characteristic coefficients.
[000167] X, Y, and Z in equation (13) can be derived using equations (14) to (19), each representing the relationship between the frame bit quantity and the average bit quantity of each video frame type. [For BitsI > BitsIave] X = (Vqmax - Vqave)(BitsI - BitsIave)/(BitsImax - BitsIave) ...(14) [For BitsI < BitsIave] X = (Vqmin - Vqave)(BitsI - BitsIave)/(BitsImin - BitsIave) ...(15) [For BitsP < BitsPave] Y = (Vqmax - Vqave)(BitsP - BitsPave)/(BitsPmax - BitsPave) ...(16) [For BitsP > BitsPave] Y = (Vqmin - Vqave)(BitsP - BitsPave)/(BitsPmin - BitsPave) ...(17) [For BitsB < BitsBave] Z = (Vqmax - Vqave)(BitsB - BitsBave)/(BitsBmax - BitsBave) ...(18) [For BitsB > BitsBave] Z = (Vqmin - Vqave)(BitsB - BitsBave)/(BitsBmin - BitsBave) ...(19)
[000168] As the characteristic coefficients v1 to v34 used in equations (1) to (13), the relevant characteristic coefficients are selected from a quality characteristic coefficient database held in a storage unit (not shown) provided in the video quality estimating apparatus 1. Figure 6 exemplifies the quality characteristic coefficient database. The quality characteristic coefficient database describes characteristic coefficients in association with prerequisites.
[000169] The video quality depends on the implementation of a video CODEC. For example, the quality differs between an H.264 encoded video and an MPEG2 encoded video even at the same bit rate. Also, video quality depends on prerequisites including video format and frame rate. In an example of the quality characteristic coefficient database shown in Figure 6, the characteristic coefficient is described for each prerequisite.
[000170] After the difference video quality estimation unit 124 derives the difference video quality value (dVq), the video quality estimation unit 125 receives the average video quality value (Vqave) derived by the average video quality estimation unit 121 and the difference video quality value (dVq) derived by the difference video quality estimation unit 124, and adds them together (equation (20)), thereby deriving the video quality value (Vq) of the video content subjected to video quality estimation (S116): (Vq) = (Vqave) + (dVq) ...(20)
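To show how equations (10) to (20) fit together, the following minimal Python sketch reproduces the encoding quality estimation of the first embodiment. The characteristic coefficients (v22 to v34), the frame characteristics from the frame characteristic estimation unit 11, and the measured frame bit quantities are passed in as placeholder inputs; the function name and dictionary keys are illustrative, not part of the described apparatus.

```python
import math

def estimate_vq(br, bits, coeff, char):
    """Sketch of the first-embodiment encoding quality estimation (equations (10) to (20)).

    br    : bit rate (BR) of the encoded video packets
    bits  : measured frame bit quantities, e.g. {'I': ..., 'P': ..., 'B': ...}
    coeff : characteristic coefficients v22..v34 (placeholder values)
    char  : frame characteristics, e.g. char['I'] = (BitsIave, BitsImax, BitsImin)
    """
    # Equations (10) to (12): average, maximum, and minimum video quality values.
    vq_ave = coeff['v22'] + coeff['v23'] * math.exp(-br / coeff['v24'])
    vq_max = coeff['v25'] + coeff['v26'] * math.exp(-br / coeff['v27'])
    vq_min = coeff['v28'] + coeff['v29'] * math.exp(-br / coeff['v30'])

    # Equations (14) to (19): degree of influence of each frame type. The branch that
    # uses (Vqmax - Vqave) or (Vqmin - Vqave) follows the conditions of those equations.
    def influence(frame_type, max_branch_when_above_average):
        ave, mx, mn = char[frame_type]
        b = bits[frame_type]
        if b == ave:
            return 0.0
        use_max_branch = (b > ave) == max_branch_when_above_average
        if use_max_branch:
            return (vq_max - vq_ave) * (b - ave) / (mx - ave)
        return (vq_min - vq_ave) * (b - ave) / (mn - ave)

    x = influence('I', max_branch_when_above_average=True)   # equations (14) and (15)
    y = influence('P', max_branch_when_above_average=False)  # equations (16) and (17)
    z = influence('B', max_branch_when_above_average=False)  # equations (18) and (19)

    # Equation (13): difference video quality value.
    d_vq = coeff['v31'] + coeff['v32'] * x + coeff['v33'] * y + coeff['v34'] * z
    # Equation (20): final video quality value.
    return vq_ave + d_vq
```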
[000171] In this way, according to the first embodiment, a video quality value that takes the encoding degradation into account can be calculated using the bit rate of the encoded video packets contained in a packet and the bit quantities of the respective video frame types. Video quality can thus be estimated by an objective quality estimation method that is more accurate than the conventional one.
[000172] The video communication service provider can easily determine whether a service being provided maintains a predetermined or higher quality for the user, and can capture and manage in real time the actual quality of the service being provided. Second Embodiment
[000173] A video quality estimating apparatus according to the second embodiment of the present invention implements objective video quality evaluation by deriving a video quality evaluation value that quantitatively represents video quality using the bit rate, the number of lost video frames, and the bit quantities of the respective video frame types, which affect the video quality evaluation value of video communication.
[000174] For example, in this embodiment, the video quality estimating apparatus analyzes an encoded video packet contained in a packet of a video communication service, such as an IPTV service, video distribution service, or videophone service provided over an IP network such as the Internet, and derives a video quality evaluation value that quantitatively represents the video quality of these video communication services.
[000175] The video quality estimating apparatus according to this embodiment estimates the video quality of an encoded video by deriving a video quality evaluation value in consideration of the bit rate of the encoded video obtained from an input packet, the bit quantities of the respective video frame types of the encoded video, and the number of lost video frames.
[000176] The relationship between the bit quantities assigned to the respective video frame types of an encoded video and the video quality evaluation value, and the influence of a packet loss generated in a network on the video quality evaluation value of a video, will be explained. <Relation between Bit Amounts of Respective Video Frame Types and Video Quality Evaluation Value>
[000177] As described above, the relationship between the bit amounts of the respective video frame types and the video quality evaluation value shown in Figures 9A to 9C represents that the bit amounts of the respective video frame types affect video quality even on videos that have the same bit rate. <Relation between Bit Quantity Characteristics of Respective Video Frame Types and Video Quality Rating Value>
[000178] Figures 25A to 25C are graphs showing the relationship between the bit rate and the amount of frame bits with respect to a set of encoded video contents (referred to as a "video set") for the respective types of video frame. Figure 26 shows the relationship between bit rate and video quality evaluation value.
[000179] As shown in Figures 25A to 25C, the I frame bit quantity, P frame bit quantity, and B frame bit quantity have different bit quantity characteristics depending on the video content, even at the same bit rate.
[000180] This means that different video contents encoded at the same bit rate have different bit quantities for the respective video frame types according to their video contents.
[000181] As shown in Figure 26, the encoded video quality rating value has maximum and minimum encoded video rating values even on videos that have the same bit rate. That is, the encoded video quality assessment value has a difference between the maximum encoded video quality assessment value and the minimum encoded video quality assessment value even in video contents encoded at the same bit rate.
[000182] For example, when a value indicated by a black star in Figure 26 is the encoded video quality assessment value of a video content to be estimated, the encoded video quality assessment value of the video content to be estimated does not always match the average encoded video quality assessment value of a video that has the same bit rate as the video content to be estimated. The difference between the encoded video quality assessment value and the average encoded video quality assessment value depends on the bit quantities assigned to the respective video frame types of the video content to be estimated.
[000183] That is, the encoded video quality evaluation value of a video content depends on the frame bit quantity characteristics of the respective video frame types of that content. This appears as a difference in the encoded video quality evaluation value even though the videos are encoded at the same bit rate, as shown in Figure 26.
[000184] The quality assessment values (encoded video quality assessment values: Vqc) of videos encoded at the same bit rate among the video contents in a video set will be explained.
[000185] When arbitrary video contents in a video set are encoded at the same bit rate, a maximum value among the encoded video quality assessment values (Vqc) of the encoded videos is defined as the maximum encoded video quality assessment value (Vcmax), a minimum value is defined as the minimum encoded video quality assessment value (Vcmin), and an average value is defined as the average encoded video quality assessment value (Vcave).
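A minimal sketch of these definitions, using the eight example values discussed in the next paragraph:

```python
vqc_values = [3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2]  # Vqc of eight video contents at the same bit rate

vc_max = max(vqc_values)                    # maximum encoded video quality assessment value: 4.2
vc_min = min(vqc_values)                    # minimum encoded video quality assessment value: 3.5
vc_ave = sum(vqc_values) / len(vqc_values)  # average encoded video quality assessment value: 3.85
```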
[000186] For example, when the encoded video quality assessment values Vqc of eight video contents encoded at a bit rate (BR) of 10 Mbps are "3.5", "3.6", "3.7", "3.8", "3.9", "4.0", "4.1", and "4.2", the maximum of the encoded video quality assessment values is "4.2", so that the maximum encoded video quality assessment value Vcmax becomes "4.2". The minimum value is "3.5", so the minimum encoded video quality assessment value Vcmin becomes "3.5". The average value is "3.85", so the average encoded video quality assessment value Vcave becomes "3.85". <Relationship between Packet Loss and Video Quality Rating Value>
[000187] The influence of a network-generated packet loss on the video quality rating value when transmitting a compressed video frame will be explained.
[000188] Figure 27 is a graph showing the number of lost video frames (DF) caused by a packet loss, represented along the abscissa, and the video quality evaluation value (Vq), represented along the ordinate, when video contents are encoded at a bit rate of 10 Mbps.
[000189] Figures 28A to 28C are graphs showing the relationship between the video quality evaluation value and the frame bit quantities (BitsI, BitsP, and BitsB) of the respective video frame types (I, P, and B frames) when the bit rate (BR) is 10 Mbps and the number of lost video frames (DF) is 1.
[000190] As shown in Figure 27, when the video quality evaluation values are compared at the same number of lost video frames (DF), they include maximum and minimum video quality evaluation values. This means that the video quality evaluation value changes depending on the video content even for the same number of lost video frames.
[000191] As shown in Figures 28A to 28C, for the same number of lost video frames (DF), the relationship between the bit quantities of the respective video frame types and the video quality evaluation value shows that a video content that has a small I frame bit quantity exhibits a low video quality evaluation value, and a video content that has a large I frame bit quantity exhibits a high video quality evaluation value. The result of the comparison at the same number of lost video frames (DF) for the P and B frame bit quantities reveals that video contents that have small P and B frame bit quantities exhibit high video quality evaluation values, and video contents that have large P and B frame bit quantities exhibit low video quality evaluation values.
[000192] Even for videos that have the same number of video frames lost (DF), the bit amounts of the respective video frame types affect the video quality.
[000193] For example, when a value indicated by a black star in Figure 27 is the video quality assessment value of a video content to be estimated, the video quality assessment value of the video content to be estimated does not always match the average video quality assessment value of a video that has the same number of lost video frames as the video content to be estimated. The difference between the video quality assessment value and the average video quality assessment value depends on the bit quantities assigned to the respective video frame types of the video content to be estimated.
[000194] That is, the video quality evaluation value of a video content depends on the frame bit quantity characteristics of the respective video frame types of that content. This appears as a difference in the video quality evaluation value even though the videos have the same number of lost video frames, as shown in Figure 27.
[000195] The video quality evaluation value (Vq) of a compressed video will be explained.
[000196] Among arbitrary videos that are encoded at the same bit rate and have the same number of lost video frames, a maximum value among the video quality assessment values Vq is defined as the maximum video quality assessment value (Vqmax), a minimum value is defined as the minimum video quality assessment value (Vqmin), and an average value is defined as the average video quality assessment value (Vqave).
[000197] For example, when the video quality assessment values Vq of eight video contents that have a bit rate of 10 Mbps and one lost video frame are "3.5", "3.6", "3.7", "3.8", "3.9", "4.0", "4.1", and "4.2", the maximum of the video quality assessment values Vq is "4.2", so the maximum video quality assessment value (Vqmax) becomes "4.2". The minimum value is "3.5", so the minimum video quality assessment value (Vqmin) becomes "3.5". The average value of the video quality assessment values Vq of the eight video contents is "3.85", so the average video quality assessment value (Vqave) becomes "3.85".
[000198] The arrangement and functions of a video quality estimating apparatus 2 according to the second embodiment will be described. As shown in Figure 12, the video quality estimating apparatus 2 according to this embodiment includes a packet analysis unit 20, frame characteristic estimation unit 21, encoding quality estimation unit 22, and packet loss quality estimation unit 23.
[000199] As shown in Figure 13, the packet analysis unit 20 includes a video packet specification unit 201, bit rate calculation unit 202, frame delimiting position extraction unit 203, specific frame start position extraction unit 204, video frame bit quantity calculation unit 205, video frame type bit quantity calculation unit 206, packet loss frame specification unit 207, and lost video frame number calculation unit 208.
[000200] The video packet specification unit 201 specifies an arbitrary encoded video packet contained in an input packet based on a packet ID (PID) unique to the encoded video packet (specified using, for example, a payload type in an RTP (Real-Time Transport Protocol) header, PID in a TS (Transport Stream) header, or Stream ID in a PES (Packetized Elementary Stream) header). The video packet specification unit 201 may also have a function of extracting the RTP sequence number in the RTP packet and CC (Continuity Counter: 4-bit counter) in the TS packet.
[000201] The bit rate calculation unit 202 calculates the bit rate, represented by the amount of bits per unit time of the encoded video packets specified by the video packet specification unit 201.
[000202] For example, the encoded data of a video or audio is identified by the PID described in a TS packet header. The bit rate calculation unit 202 counts the TS packets that have the PID of the video data, and multiplies the count by the data length (188 bytes, in general) of the TS packet to calculate the amount of bits per unit time, thereby deriving the bit rate (BR).
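A minimal sketch of this calculation, assuming the PIDs of the TS packets received in one analysis window are available as a list (the function and parameter names are illustrative, not those of any particular library):

```python
TS_PACKET_BYTES = 188  # typical TS packet length

def bitrate_bps(pids, video_pid, window_seconds):
    """Derive the bit rate (BR) of the encoded video from the TS packets in one analysis window.

    pids           : PIDs of the TS packets received in the window
    video_pid      : PID identifying the encoded video data
    window_seconds : length of the analysis window in seconds
    """
    video_packet_count = sum(1 for pid in pids if pid == video_pid)
    total_bits = video_packet_count * TS_PACKET_BYTES * 8
    return total_bits / window_seconds
```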
[000203] The frame delimiting position extracting unit 203 extracts information indicating the delimiter of a video frame from an encoded video packet specified by the video packet specification unit 201.
[000204] For example, the packet contains information such as an IP header, UDP (User Datagram Protocol) header, RTP header, TS header, PES header, and ES (Elementary Stream). Of these types of information, Payload_Unit_Start_Indicator (referred to as "PUSI") in the TS header is an indicator that indicates the presence/absence of the PES header. When a PES contains one frame (a video frame is often stored in one PES in the video encoding used in TV broadcasting), PUSI serves as information that indicates the start position of a video frame.
[000205] Similarly, the packet contains information such as an IP header, UDP (User Datagram Protocol) header, RTP header, and ES (Elementary Stream). Of these types of information, Marker_bit (referred to as "MB") in the RTP header sometimes serves as information indicating the final position of a video frame. In this case, the final position of a frame is extracted according to the presence/absence of MB.
[000206] Information indicating the bounding position of a frame will be explained in detail with reference to Figure 7 which is a table that conceptually explains the extraction of a video frame start position.
[000207] As shown in Figure 7 (the left column represents the RTP sequence number and the second to eighth columns from the left represent the CC numbers of the TS packets), a TS packet containing PUSI indicates the start position of a frame. As the information indicating the delimiting position of a frame, it is sufficient to store the RTP sequence number at the start position of a frame, the ordinal number of a packet counted from the start of an analysis section, or the ordinal number of the frame of a packet containing PUSI. As for the frame counting method, PUSIs in the analysis section are counted.
[000208] When the PES header is usable, for example when the PES header is not encrypted, the PTS (Presentation Time Stamp) or DTS (Decoding Time Stamp) serves as information indicating the delimiting position of a frame, and the same processing as that for PUSI is performed. Similarly, when the ES is usable, it stores frame information, and the same processing as that for PUSI is performed. Also, when frames are delimited in RTP, the same processing as that for PUSI is performed by referring to the MB.
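To make the TS-level extraction concrete, the following sketch parses 188-byte TS packets and reads the PID and the payload_unit_start_indicator (PUSI) bit from the fixed 4-byte TS header; it covers only these standard header fields and ignores adaptation fields and PES parsing. The function name is illustrative.

```python
TS_PACKET_BYTES = 188
SYNC_BYTE = 0x47

def iter_ts_headers(stream_bytes):
    """Yield (packet_index, pid, pusi) for each 188-byte TS packet in a byte string."""
    for i in range(0, len(stream_bytes) - TS_PACKET_BYTES + 1, TS_PACKET_BYTES):
        pkt = stream_bytes[i:i + TS_PACKET_BYTES]
        if pkt[0] != SYNC_BYTE:
            continue  # not aligned on a TS packet boundary
        # Byte 1: transport_error_indicator(1) | payload_unit_start_indicator(1)
        #         | transport_priority(1) | PID high 5 bits; byte 2: PID low 8 bits.
        pusi = bool(pkt[1] & 0x40)
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        yield i // TS_PACKET_BYTES, pid, pusi

# Packets whose PUSI bit is set mark the start of a PES (typically one video frame),
# so counting them over an analysis section gives the frame delimiters described above.
```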
[000209] The specific frame start position extracting unit 204 extracts information indicating the start position of a specific video frame from an encoded video packet specified by the video packet specification unit 201.
[000210] The specific frame start position extraction unit 204 in the video quality estimating apparatus according to this embodiment extracts information indicating the start positions of the "I frame", "P frame", and "B frame" when ES information is usable, and information indicating the start position of the "I frame" when no ES information is usable due to encryption.
[000211] When ES information is usable, a bit indicating frame information exists in an H.264 or MPEG2 bit string (for example, this bit is Primary_pic_type or Slice_type for H.264). The frame delimiters of the "I frame", "P frame", and "B frame" can be identified by this information. Furthermore, the number of this video frame can be determined by storing the RTP sequence number of a packet containing the video frame type identification information, or the ordinal number of a video frame counted from the beginning of an analysis section.
[000212] When no ES information is usable, the information indicating the start position of an I frame is RAI (Random_Access_Indicator) or ESPI (Elementary_Stream_Priority_Indicator) in the TS header, which serves as an indicator of the start position of an I frame or IDR (Instantaneous Decoder Refresh) frame (see non-patent literatures 4 and 5). Since RAI or ESPI serves as information indicating the start position of an I frame or IDR frame, the delimiter of an I frame can be discriminated from those of other frames. Also, the number of this video frame can be determined by storing the RTP sequence number of a packet containing RAI or ESPI, or the ordinal number of a video frame counted from the beginning of an analysis section.
[000213] If neither RAI nor ESPI indicates the start position of an I frame or IDR frame, the specific frame start position extraction unit 204 is incorporated in the video frame type bit quantity calculation unit 206 (to be described later), and specifies the position of an I frame by calculating the amount of data in each frame using the PUSI representing the frame start position extracted by the frame delimiting position extraction unit 203 and the bit quantities of the respective video frames calculated by the video frame bit quantity calculation unit 205.
[000214] The amount of information of an I frame is greater than that of other video frames. Based on this basic feature of the compressed video frame types, a video frame that has a large amount of data among the video frames in a packet can be specified as an I frame in consideration of the GoP length of the encoded video.
[000215] For example, when the number of video frames is 300 and the GoP length is 15, the number of I frames will be 20. So the 20 video frames each having a large amount of data among the video frames in a packet can be specified as I frames.
[000216] Another method is the nearest neighbor method, which groups frames based on the minimum distance between their bit quantities.
[000217] For example, a case where 12 video frames exist in one second and are aligned in the order of I, B, B, P, B, B, I, B, B, P, B, and B (which represents video frame types I, B and P) will be described. Assuming the bit amounts of the respective video frames in the 1-second video content are 100, 50, 51, 70, 48, 45, 95, 49, 52, 71, 47, and 46 bits, the video frames of 100 and 95 bits are identified as a maximum bit quantity group, that is, I-frames using the nearest neighbor method. The 50, 51, 48, 45, 49, 52, 47, and 46 bit video frames are identified as a minimum bit quantity group, that is, B frames. The remaining 70 and 71 bit video frames are identified as an intermediate group, ie P frames.
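A minimal sketch of this grouping on the 12-frame example above. In one dimension, nearest-neighbor grouping into three clusters amounts to cutting the sorted bit quantities at the two largest gaps, so the sketch uses that formulation and labels the groups I, P, and B in decreasing order of bit quantity; the function name is illustrative.

```python
def group_frames_by_bits(frame_bits):
    """Group per-frame bit quantities into I, P, and B clusters (nearest neighbor in 1-D).

    Returns a list of 'I'/'P'/'B' labels, one per frame, in the original order.
    """
    order = sorted(range(len(frame_bits)), key=lambda i: frame_bits[i])
    # The two largest gaps between consecutive sorted values separate the minimum (B),
    # intermediate (P), and maximum (I) bit quantity groups.
    gaps = [(frame_bits[order[k + 1]] - frame_bits[order[k]], k) for k in range(len(order) - 1)]
    cut_points = sorted(k for _, k in sorted(gaps, reverse=True)[:2])

    labels = [None] * len(frame_bits)
    groups = ['B', 'P', 'I']  # from smallest to largest bit quantity
    group_index = 0
    for pos, frame_index in enumerate(order):
        labels[frame_index] = groups[group_index]
        if group_index < 2 and pos in cut_points:
            group_index += 1
    return labels

bits = [100, 50, 51, 70, 48, 45, 95, 49, 52, 71, 47, 46]
print(group_frames_by_bits(bits))
# ['I', 'B', 'B', 'P', 'B', 'B', 'I', 'B', 'B', 'P', 'B', 'B']
```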
[000218] To indicate the position of a specified I frame, an RTP sequence number at the start position of a frame or the ordinal number of a packet counted from the start of an analysis section is stored.
[000219] If no ES information is usable and the video frame type is not dynamically changed, it is also possible to acquire the start position of an I frame by the method described above, and determine the frames as "P frames" and "B frames" in order, using the start position of an I frame as an origin in a standard GoP pattern (for example, M = 3 and N = 15).
[000220] The bit amounts of I, P, and B frames generally have a ratio of BitsI > BitsP > BitsB. In this way, the type of video frame can be determined in order from a frame that has a large amount of bits.
[000221] The video frame bit quantity calculating unit 205 counts the TS packets which have the PID of video data between frame bounding positions extracted by the frame bounding position extracting unit 203. The quantity calculating unit 205 video frame bits multiplies the count by the data length (188 bytes, in general) of the TS packet, which derives the amount of bits in each video frame. Also, the video frame bit quantity calculating unit 205 stores, in correspondence with the bit quantity of each video frame, a frame delimiting position (information such as an RTP sequence number at the starting position of a frame, the ordinal number of a packet counted from the beginning of an analysis section, or the ordinal number of the frame of a packet containing PUSI) extracted by the frame-bounding position extracting unit 203.
[000222] Video frame type bit quantity calculation unit 206 calculates I frame bit quantity (BitsI), P frame bit quantity (BitsP) and B frame bit quantity (BitsB ) from the bit quantities of the respective video frames that have been calculated by the video frame bit quantity calculation unit 205, the frame delimiting position information, and the positions of the I, P, and B frames that have been specified by the frame-specific home position extraction unit 204.
[000223] Note that the arrangement of video frame types contained in a video to be estimated changes depending on the encoding situation. For example, a video can be made up of only I frames, of I and P frames, or of all video frame types of I, P, and B frames. The video frame type bit quantity calculation unit 206, therefore, derives the frame bit quantities of at least one type of video frame, that is, the frame bit quantities of some or all video frame types.
[000224] A method for specifying each video frame type by video frame type bit quantity calculation unit 206 is determined as follows.
[000225] When ES information is usable, a bit that indicates that the frame information exists in an H.264 or MPEG2 bit string (for example, this bit is Primary_pic_type or Slice_type for H.264). The frame delimiters of "I frame", "P frame" and "B frame" can be identified by this information. The frame delimiting position is stored in correspondence with the RTP sequence number of a packet or the ordinal number of a packet counted from the beginning of an analysis section.
[000226] When no ES information is usable, frames can be determined as "P frames" and "B frames" in order, using a specified I frame start position as an origin in a standard GoP structure (e.g., M = 3 and N = 15).
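A minimal sketch of this ordering rule, assuming the frames are handled in display order and the GoP pattern is fixed; the function and parameter names are illustrative.

```python
def label_frames_from_gop(num_frames, i_frame_positions, m=3, n=15):
    """Assign frame types assuming a fixed GoP pattern anchored at each detected I frame.

    m : distance between reference frames, so m - 1 consecutive B frames follow each I or P frame
    n : GoP length (number of frames from one I frame to the next)
    """
    labels = []
    for idx in range(num_frames):
        if idx in i_frame_positions:
            labels.append('I')
        else:
            # Offset within the GoP, measured from the most recent I frame.
            last_i = max((p for p in i_frame_positions if p <= idx), default=0)
            offset = (idx - last_i) % n
            labels.append('P' if offset % m == 0 else 'B')
    return labels

print(label_frames_from_gop(15, {0}))
# ['I', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'B', 'P', 'B', 'B']
```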
[000227] The bit amounts of I, P, and B frames generally have a ratio of BitsI > BitsP > BitsB. Therefore, the video frame type can be determined in order from a frame that has a large amount of bits.
[000228] Alternatively, the video frame type can be specified using the magnitude relationship between the bit quantities of the I, P, and B frames, according to a method described in non-patent literature 6.
[000229] The packet loss frame specification unit 207 specifies a frame in which a packet loss occurred, using information such as the RTP sequence number or CC in an IP packet, together with the PUSI or MB indicating a frame delimiting position extracted by the frame delimiting position extraction unit 203.
[000230] For example, when the first I frame is formed from packets that have RTP sequence numbers 10,000 to 10,002, as shown in Figure 7, and a packet that has an RTP sequence number of 10,001 is lost, it can be specified that the first frame suffered a loss. At this time, the ordinal number of the frame that has the packet loss, counted from the beginning of an analysis section, can be stored.
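A minimal sketch of this step, assuming each received packet is reported as an (RTP sequence number, PUSI) pair in arrival order. The frame index is incremented at each PUSI, and a gap in the sequence numbers is attributed to the frame being assembled at that point; sequence-number wraparound is ignored for brevity, and the function name is illustrative.

```python
def frames_with_packet_loss(received):
    """Return the set of frame ordinal numbers (0-based) in which a packet loss occurred.

    received : list of (rtp_sequence_number, pusi) tuples in arrival order
    """
    lost_frames = set()
    frame_index = -1
    prev_seq = None
    for seq, pusi in received:
        if pusi:
            frame_index += 1  # PUSI marks the start of a new video frame
        if prev_seq is not None and seq != prev_seq + 1:
            # One or more packets between prev_seq and seq were lost; attribute the
            # loss to the frame currently being assembled.
            lost_frames.add(max(frame_index, 0))
        prev_seq = seq
    return lost_frames

# Figure 7 example: the first I frame spans sequence numbers 10,000 to 10,002 and 10,001 is lost.
packets = [(10000, True), (10002, False), (10003, True)]
print(frames_with_packet_loss(packets))  # {0} -> the first frame has a packet loss
```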
[000231] The lost video frame number calculation unit 208 calculates the number of lost video frames from the video frame number and video frame type in the analysis section output from the video frame type bit quantity calculation unit 206, and the number of the lost frame in the analysis section output from the packet loss frame specification unit 207.
[000232] For example, when the first frame is formed from packets that have RTP sequence numbers from 10,000 to 10002, as shown in Figure 7, and a packet that has an RTP sequence number of 10,001 is lost, the packet loss frame specification unit 207 outputs information representing that the video frame position where packet loss occurred is in the first frame, and video frame type bit quantity calculation unit 206 outputs information representing that the video frame type of the first frame is an I frame. From this information, the lost video frame number calculation unit 208 can specify the ordinal number and video frame type of a video frame. lost video.
[000233] The relationship between the video frame type of an encoded video and the propagation of degradation of an encoded video caused by a packet loss will be explained with reference to Figure 24.
[000234] As shown in Figure 24, the degradation propagation changes depending on the video frame type of a video frame in which a packet loss has occurred, due to the characteristics of the respective video frame types.
[000235] More specifically, when a packet loss occurs in an I frame, the B and P frames that follow the I frame having the packet loss refer to that I frame, and subsequent B and P frames additionally refer to the B and P frames that refer to the I frame. In this way, the degradation propagates until the chain of video frame references is interrupted. In the example shown in Figure 24, the number of degraded frames is 17, and the number of lost video frames derived by the lost video frame number calculation unit 208 is 17.
[000236] When a packet loss occurs in a P frame, the degradation propagates until the chain of video frame references is interrupted, as shown in Figure 24, similarly to the case where a packet loss occurs in an I frame. In the example shown in Figure 24, the number of degraded frames is 11, and the number of lost video frames derived by the lost video frame number calculation unit 208 is 11.
[000237] When a packet loss occurs in a B frame, the degradation does not propagate, and only the B frame in which the packet loss occurred is degraded, because no video frame refers to the B frame, unlike the cases described above where packet losses occur in I and P frames. Therefore, the number of lost video frames derived by the lost video frame number calculation unit 208 is 1.
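A simplified sketch of this counting, assuming a fixed display-order GoP pattern and that the reference chain is interrupted at the next I frame. The actual propagation depends on the reference structure of the encoder (the Figure 24 example yields 17 and 11 degraded frames for its particular structure), so the counts printed here are illustrative only and the function name is hypothetical.

```python
def lost_video_frames(gop_pattern, loss_index):
    """Count the number of video frames degraded by a packet loss (DF).

    gop_pattern : display-order frame types, e.g. list('IBBPBB...'), possibly spanning several GoPs
    loss_index  : index (into gop_pattern) of the frame in which the packet loss occurred
    """
    if gop_pattern[loss_index] == 'B':
        return 1  # no frame references a B frame, so the degradation does not propagate
    # For an I or P frame, the degradation propagates to every following frame
    # until the reference chain is interrupted at the next I frame.
    df = 1
    for frame_type in gop_pattern[loss_index + 1:]:
        if frame_type == 'I':
            break
        df += 1
    return df

gop = list('IBBPBBPBBPBBPBB') * 2   # two GoPs of length 15 (M = 3, N = 15)
print(lost_video_frames(gop, 0))    # loss in the I frame: 15 frames degraded in this pattern
print(lost_video_frames(gop, 6))    # loss in a P frame: 9 frames degraded in this pattern
print(lost_video_frames(gop, 1))    # loss in a B frame: 1
```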
[000238] As shown in Figure 14, the frame characteristic estimation unit 21 includes an I frame average bit quantity estimation unit 211 that estimates an average I frame bit quantity, an I frame maximum bit quantity estimation unit 212 that estimates a maximum I frame bit quantity, an I frame minimum bit quantity estimation unit 213 that estimates a minimum I frame bit quantity, a P frame average bit quantity estimation unit 214 that estimates an average P frame bit quantity, a P frame maximum bit quantity estimation unit 215 that estimates a maximum P frame bit quantity, a P frame minimum bit quantity estimation unit 216 that estimates a minimum P frame bit quantity, a B frame average bit quantity estimation unit 217 that estimates an average B frame bit quantity, a B frame maximum bit quantity estimation unit 218 that estimates a maximum B frame bit quantity, and a B frame minimum bit quantity estimation unit 219 that estimates a minimum B frame bit quantity.
[000239] Note that the arrangement of video frame types contained in a video to be estimated changes depending on the encoding situation. A video can be made up of just I-frames, I- and P-frames, or all I, P, and B-frame video frame types. The layout changes depending on the video encoding situation. The frame characteristic estimation unit 21, therefore, derives the frame characteristics of at least one type of video frame, i.e. the frame characteristics of some or all types of video frame.
[000240] The I-frame average bit amount estimation unit 211 derives an I-frame average bit amount (BitsIave) based on a bit rate calculated by the bit rate calculation unit 202.
[000241] Note that the I-frame average bit quantity estimation unit 211 estimates an I-frame average bit quantity using a characteristic in which it increases as the bit rate increases, as shown in Figure 25A .
[000242] The maximum I-frame bit amount estimation unit 212 derives a maximum I-frame bit amount (BitsImax) based on a bit rate calculated by the bit rate calculation unit 202.
[000243] Note that the I-frame maximum bit amount estimation unit 212 estimates a maximum I-frame bit amount using a characteristic in which it increases as the bit rate increases, as shown in Figure 25A .
[000244] The minimum I frame bit quantity estimation unit 213 derives a minimum I frame bit quantity (BitsImin) based on a bit rate calculated by the bit rate calculation unit 202.
[000245] Note that the I-frame minimum bit amount estimation unit 213 estimates a minimum I-frame bit amount that uses a characteristic in which it increases as the bit rate increases, as shown in Figure 25A .
[000246] The P frame average bit quantity estimation unit 214 derives a P frame average bit quantity (BitsPave) based on a bit rate calculated by the bit rate calculation unit 202.
[000247] Note that the P frame average bit quantity estimation unit 214 estimates a P frame average bit quantity using a characteristic in which it increases as the bit rate increases, as shown in Figure 25B .
[000248] The P frame maximum bit quantity estimation unit 215 derives a P frame maximum bit quantity (BitsPmax) based on a bit rate calculated by the bit rate calculation unit 202.
[000249] Note that the P frame maximum bit quantity estimation unit 215 estimates a P frame maximum bit quantity which uses a characteristic in which it increases as the bit rate increases, as shown in Figure 25B .
[000250] The P frame minimum bit quantity estimation unit 216 derives a P frame minimum bit quantity (BitsPmin) based on a bit rate calculated by the bit rate calculation unit 202.
[000251] Note that the P frame minimum bit quantity estimation unit 216 estimates a P frame minimum bit quantity that uses a characteristic in which it increases as the bit rate increases, as shown in Figure 25B .
[000252] B-frame average bit quantity estimation unit 217 derives a B-frame average bit quantity (BitsBave) based on a bit rate calculated by bit rate calculation unit 202.
[000253] Note that the B-frame average bit quantity estimation unit 217 estimates a B-frame average bit quantity using a characteristic in which it increases as the bit rate increases, as shown in Figure 25C .
[000254] B-frame maximum bit amount estimation unit 218 derives a maximum B-frame bit amount (BitsBmax) based on a bit rate calculated by bit rate calculation unit 202.
[000255] Note that the B-frame maximum bit amount estimation unit 218 estimates a maximum B-frame bit amount that uses a characteristic in which it increases as the bit rate increases, as shown in Figure 25C .
[000256] B-frame minimum bit amount estimation unit 219 derives a minimum B-frame bit amount (BitsBmin) based on a bit rate calculated by bit rate calculation unit 202.
[000257] Note that the B-frame minimum bit amount estimation unit 219 estimates a minimum B-frame bit amount that uses a characteristic in which it increases as the bit rate increases, as shown in Figure 25C .
[000258] As shown in Figure 15, the encoding quality estimation unit 22 includes an average encoded video quality estimation unit 221 that derives an average encoded video quality evaluation value, a maximum encoded video quality estimation unit 222 that derives a maximum encoded video quality evaluation value, a minimum encoded video quality estimation unit 223 that derives a minimum encoded video quality evaluation value, a difference encoded video quality estimation unit 224 that derives a difference encoded video quality evaluation value representing the dependence of video quality on the content, and an encoded video quality estimation unit 225 that derives an encoded video quality evaluation value that reflects the encoding degradation.
[000259] Average encoded video quality estimation unit 221 derives an average encoded video quality evaluation value (Vqcave) based on a bit rate calculated by bit rate calculation unit 202.
[000260] Note that the average encoded video quality estimating unit 221 estimates an average encoded video quality evaluation value using a characteristic in which it increases as the bit rate increases, as shown in Figure 26 .
[000261] The maximum coded video quality estimation unit 222 derives a maximum coded video quality evaluation value (Vqcmax) based on a bit rate calculated by the bit rate calculation unit 202.
[000262] Note that the maximum encoded video quality estimating unit 222 estimates a maximum encoded video quality evaluation value using a characteristic in which it increases as the bit rate increases, as shown in Figure 26 .
[000263] Minimum coded video quality estimation unit 223 derives a minimum coded video quality evaluation value (Vqcmin) based on a bit rate calculated by bit rate calculation unit 202.
[000264] Note that the minimum encoded video quality estimation unit 223 estimates a minimum encoded video quality evaluation value that uses a characteristic in which it increases as the bit rate increases, as shown in Figure 26 .
[000265] The difference encoded video quality estimation unit 224 derives a difference encoded video quality evaluation value (dVqc) from the average encoded video quality evaluation value (Vqcave) calculated by the average encoded video quality estimation unit 221, the maximum encoded video quality evaluation value (Vqcmax) calculated by the maximum encoded video quality estimation unit 222, the minimum encoded video quality evaluation value (Vqcmin) calculated by the minimum encoded video quality estimation unit 223, the I frame bit quantity (BitsI), P frame bit quantity (BitsP), and B frame bit quantity (BitsB) calculated by the video frame type bit quantity calculation unit 206, and the average bit quantities (Bits(I,P,B)ave), maximum bit quantities (Bits(I,P,B)max), and minimum bit quantities (Bits(I,P,B)min) of the respective video frame types derived by the frame characteristic estimation unit 21.
[000266] The derivation of the difference coded video quality evaluation value (dVqc) by the difference coded video quality estimation unit 224 will be described in detail.
[000267] As shown in Figures 9A to 9C, the result of the comparison at the same bit rate (10 Mbps in the example of Figures 9A to 9C) reveals that a video content that has a large I frame bit quantity exhibits a high video quality evaluation value, and a video content that has a small I frame bit quantity exhibits a low video quality evaluation value. The result of the comparison at the same bit rate also shows that video contents that have large P and B frame bit quantities exhibit low video quality evaluation values, and video contents that have small P and B frame bit quantities exhibit high video quality evaluation values.
[000268] When a value indicated by a black star in Figure 26 is the encoded video quality evaluation value (Vqc) of a video content to be estimated, the bit quantity of the I, P and B frames of the content of video to be estimated are the values indicated by the black stars in Figures 25A to 25C. To estimate a video quality evaluation value, it is sufficient to calculate, from the amount of bits of the I, P and B frames (BitsI, BitsP, and BitsB), the difference encoded video quality evaluation value (dVqc ) which represents a deviation from the average encoded video quality assessment value.
[000269] If the I frame bit quantity (BitsI) equals the average I frame bit quantity (BitsIave), the P frame bit quantity (BitsP) equals the average P frame bit quantity (BitsPave), and the B frame bit quantity (BitsB) equals the average B frame bit quantity (BitsBave), the encoded video quality evaluation value (Vqc) of a video content to be estimated equals the average encoded video quality evaluation value (Vqcave).
[000270] If the I frame bit quantity (BitsI) is greater than the average I frame bit quantity (BitsIave), the encoded video quality evaluation value (Vqc) of a video content to be estimated becomes higher than the average encoded video quality evaluation value (Vqcave). Conversely, if the I frame bit quantity (BitsI) is less than the average I frame bit quantity (BitsIave), the encoded video quality evaluation value (Vqc) of a video content to be estimated becomes lower than the average encoded video quality evaluation value (Vqcave).
[000271] That is, when the I frame bit quantity (BitsI) is greater than the average I frame bit quantity (BitsIave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsI - BitsIave)/(BitsImax - BitsIave). When the I frame bit quantity (BitsI) is less than the average I frame bit quantity (BitsIave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsI - BitsIave)/(BitsImin - BitsIave).
[000272] If the P frame bit quantity (BitsP) is greater than the average P frame bit quantity (BitsPave), the encoded video quality evaluation value (Vqc) of a video content to be estimated becomes lower than the average encoded video quality evaluation value (Vqcave). If the P frame bit quantity (BitsP) is less than the average P frame bit quantity (BitsPave), the encoded video quality evaluation value (Vqc) of a video content to be estimated becomes higher than the average encoded video quality evaluation value (Vqcave).
[000273] That is, when the P frame bit quantity (BitsP) is greater than the average P frame bit quantity (BitsPave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsP - BitsPave)/(BitsPmin - BitsPave). When the P frame bit quantity (BitsP) is less than the average P frame bit quantity (BitsPave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsP - BitsPave)/(BitsPmax - BitsPave).
[000274] If the B frame bit quantity (BitsB) is greater than the average B frame bit quantity (BitsBave), the encoded video quality evaluation value (Vqc) of a video content to be estimated becomes lower than the average encoded video quality evaluation value (Vqcave). If the B frame bit quantity (BitsB) is less than the average B frame bit quantity (BitsBave), the encoded video quality evaluation value (Vqc) of a video content to be estimated becomes higher than the average encoded video quality evaluation value (Vqcave).
[000275] That is, when the B frame bit quantity (BitsB) is greater than the average B frame bit quantity (BitsBave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsB - BitsBave)/(BitsBmin - BitsBave). When the B frame bit quantity (BitsB) is less than the average B frame bit quantity (BitsBave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsB - BitsBave)/(BitsBmax - BitsBave).
[000276] Based on these characteristics between the bit quantities of the respective video frame types and the video quality evaluation value, the difference encoded video quality estimation unit 224 estimates the difference encoded video quality evaluation value (dVqc).
[000277] The encoded video quality estimation unit 225 estimates the encoded video quality assessment value (Vqc) of a video content to be estimated by adding the average encoded video quality assessment value (Vqcave) calculated by the average encoded video quality estimating unit 221 and the difference encoded video quality estimation value (dVqc) calculated by the difference encoded video quality estimating unit 224.
[000278] As shown in Figure 16, the packet loss quality estimation unit 23 includes an average packet loss video quality estimation unit 231 that derives an average packet loss video quality evaluation value, a maximum packet loss video quality estimation unit 232 that derives a maximum packet loss video quality evaluation value, a minimum packet loss video quality estimation unit 233 that derives a minimum packet loss video quality evaluation value, a difference packet loss video quality estimation unit 234 that derives a difference packet loss video quality evaluation value representing the dependence of video quality on the content, and a video quality estimation unit 235 that derives a video quality evaluation value that reflects both the encoding degradation and the packet loss degradation.
[000279] Average packet loss video quality estimation unit 231 derives an average packet loss video quality judgment value (Vqave) from the encoded video quality judgment value (Vqc) calculated by encoding quality estimation unit 22 and number of lost video frames (DF) calculated by packet analysis unit 20.
[000280] Note that the average packet loss video quality estimation unit 231 estimates an average packet loss video quality assessment value using a characteristic in which it decreases as the number of video frames lost increases, as shown in Figure 27.
[000281] The maximum packet loss video quality estimation unit 232 derives a maximum packet loss video quality judgment value (Vqmax) from the encoded video quality judgment value (Vqc) calculated by encoding quality estimation unit 22 and number of lost video frames (DF) calculated by packet analysis unit 20.
[000282] Note that the maximum packet loss video quality estimation unit 232 estimates a maximum packet loss video quality assessment value using a characteristic in which it decreases as the number of video frames lost increases, as shown in Figure 27.
[000283] Minimum packet loss video quality estimation unit 233 derives a minimum packet loss video quality judgment value (Vqmin) from the encoded video quality judgment value (Vqc) calculated by encoding quality estimation unit 22 and number of lost video frames (DF) calculated by packet analysis unit 20.
[000284] Note that the minimum packet loss video quality estimation unit 233 estimates a minimum packet loss video quality assessment value using a characteristic in which it decreases as the number of video frames lost increases, as shown in Figure 27.
[000285] Difference packet loss video quality estimation unit 234 calculates a difference packet loss video quality assessment value (dVq) from the packet loss video quality assessment value average (Vqave) calculated by the average packet loss video quality estimation unit 231, the maximum packet loss video quality estimation value (Vqmax) calculated by the maximum packet loss video quality estimation unit 232, the minimum packet loss video quality evaluation value (Vqmin) calculated by the minimum packet loss video quality estimation unit 233, the frame bit quantities (Bits(I,P,B)) of the respective video frame types which were calculated by the video frame type bit quantity calculation unit 206, and the average bit quantities (Bits(I,P,B)ave), maximum bit quantities (Bits (I,P,B)max), and minimum bit quantities ( Bits(I,P,B)min) of the respective video frame types which were derived by the frame characteristic estimation unit 21.
[000286] The derivation of the difference packet loss (dVq) video quality evaluation value by the difference packet loss video quality estimation unit 234 will be explained in detail.
[000287] As shown in Figures 28A to 28C, the result of the comparison at the same number of lost frames (the number of lost frames is 1 in the example of Figures 28A to 28C) reveals that a video content having a large I frame bit quantity exhibits a high video quality evaluation value, and a video content that has a small I frame bit quantity exhibits a low video quality evaluation value. In contrast, video contents that have large B and P frame bit quantities exhibit low video quality evaluation values, and video contents that have small B and P frame bit quantities exhibit high video quality evaluation values.
[000288] When a value indicated by a black star in Figure 27 is the video quality evaluation value (Vq) of a video content to be estimated, the bit quantities of the I, P, and B frames of the video content to be estimated are the values indicated by the black stars in Figures 25A to 25C. To estimate the video quality evaluation value of a video content to be estimated, it is sufficient to calculate, from the bit quantities of the I, P, and B frames (BitsI, BitsP, and BitsB), the difference packet loss video quality evaluation value (dVq), which represents a deviation from the average packet loss video quality evaluation value (Vqave).
[000289] If the I frame bit quantity (BitsI) equals the average I frame bit quantity (BitsIave), the P frame bit quantity (BitsP) equals the average P frame bit quantity (BitsPave), and the B frame bit quantity (BitsB) equals the average B frame bit quantity (BitsBave), the video quality evaluation value (Vq) of a video content to be estimated equals the average packet loss video quality evaluation value (Vqave).
[000290] If the I frame bit quantity (BitsI) is greater than the average I frame bit quantity (BitsIave), the video quality evaluation value (Vq) of a video content to be estimated becomes higher than the average packet loss video quality evaluation value (Vqave). Conversely, if the I frame bit quantity (BitsI) is less than the average I frame bit quantity (BitsIave), the video quality evaluation value (Vq) of a video content to be estimated becomes lower than the average packet loss video quality evaluation value (Vqave).
[000291] That is, when the I frame bit quantity (BitsI) is greater than the average I frame bit quantity (BitsIave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsI - BitsIave)/(BitsImax - BitsIave). When the I frame bit quantity (BitsI) is less than the average I frame bit quantity (BitsIave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsI - BitsIave)/(BitsImin - BitsIave).
[000292] If the bit quantity of the P frame (BitsP) is greater than the average bit quantity of the P frame (BitsPave), the video quality evaluation value (Vq) of a video content to be estimated becomes lower than the average packet loss video quality rating value (Vqave). On the other hand, if the bit quantity of the P frame (BitsP) is less than the average bit quantity of the P frame (BitsPave), the video quality evaluation value (Vq) of a video content to be estimated becomes higher than the average packet loss video quality rating value (Vqave).
[000293] That is, when the P frame bit quantity (BitsP) is greater than the average P frame bit quantity (BitsPave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsP - BitsPave)/(BitsPmin - BitsPave). When the P frame bit quantity (BitsP) is less than the average P frame bit quantity (BitsPave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsP - BitsPave)/(BitsPmax - BitsPave).
[000294] If the B frame bit quantity (BitsB) is greater than the average B frame bit quantity (BitsBave), the video quality evaluation value (Vq) of a video content to be estimated becomes lower than the average packet loss video quality evaluation value (Vqave). Conversely, if the B frame bit quantity (BitsB) is less than the average B frame bit quantity (BitsBave), the video quality evaluation value (Vq) of a video content to be estimated becomes higher than the average packet loss video quality evaluation value (Vqave).
[000295] That is, when the B frame bit quantity (BitsB) is greater than the average B frame bit quantity (BitsBave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsB - BitsBave)/(BitsBmin - BitsBave). When the B frame bit quantity (BitsB) is less than the average B frame bit quantity (BitsBave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsB - BitsBave)/(BitsBmax - BitsBave).
[000296] Based on these characteristics between the bit quantities of the respective video frame types and the video quality evaluation value, the difference packet loss video quality estimation unit 234 estimates the difference packet loss video quality evaluation value (dVq).
[000297] The video quality estimation unit 235 estimates the video quality evaluation value (Vq) of a video content to be estimated by adding the average packet loss video quality evaluation value (Vqave) calculated by the average packet loss video quality estimation unit 231 and the difference packet loss video quality evaluation value (dVq) calculated by the difference packet loss video quality estimation unit 234.
[000298] Note that the video quality estimating apparatus 2 according to this embodiment is implemented by installing computer programs on a computer that includes a CPU (Central Processing Unit), memory, and interfaces. The various functions of the video quality estimating apparatus 2 are implemented through cooperation between the hardware resources of the computer and the computer programs (software).
[000299] The operation of the video quality estimating apparatus according to this embodiment will be explained with reference to Figures 17 and 18.
[000300] As shown in Figure 17, the packet analysis unit 20 of the video quality estimating apparatus 2 captures an input packet (S201). The packet analysis unit 20 derives the bit rate (BR) of an encoded video packet, the bit quantities (BitsI, BitsP, and BitsB) of the respective video frame types, and the number of lost video frames (DF) from the captured packet (S202).
[000301] The bit rate (BR) derived by the packet analysis unit 20 is input to the I frame average bit quantity estimation unit 211, I frame maximum bit quantity estimation unit 212, I frame minimum bit quantity estimation unit 213, P frame average bit quantity estimation unit 214, P frame maximum bit quantity estimation unit 215, P frame minimum bit quantity estimation unit 216, B frame average bit quantity estimation unit 217, B frame maximum bit quantity estimation unit 218, B frame minimum bit quantity estimation unit 219, average encoded video quality estimation unit 221, maximum encoded video quality estimation unit 222, and minimum encoded video quality estimation unit 223. The bit quantities (BitsI, BitsP, and BitsB) of the respective video frame types are input to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234. The number of lost video frames (DF) is input to the average packet loss video quality estimation unit 231, maximum packet loss video quality estimation unit 232, and minimum packet loss video quality estimation unit 233.
[000302] The I-frame average bit quantity estimation unit 211 derives an I-frame average bit quantity (BitsIave) based on the bit rate (BR) derived by the packet analysis unit 20 (S203).
[000303] The average I frame bit quantity (BitsIave) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (21), which represents this characteristic: BitsIave = u1 + u2·exp(-BR/u3) ...(21) where BitsIave is the average I frame bit quantity, BR is the bit rate, and u1,..., u3 are the characteristic coefficients.
[000304] The I frame average bit quantity estimation unit 211 outputs the derived average I frame bit quantity (BitsIave) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000305] The maximum I-frame bit amount estimating unit 212 derives a maximum I-frame bit amount (BitsImax) based on the bit rate (BR) derived by the packet analysis unit 20 (S204).
[000306] The maximum I frame bit quantity (BitsImax) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (22), which represents this characteristic: BitsImax = u4 + u5·exp(-BR/u6) ...(22) where BitsImax is the maximum I frame bit quantity, BR is the bit rate, and u4,..., u6 are the characteristic coefficients.
[000307] The I frame maximum bit quantity estimation unit 212 outputs the derived maximum I frame bit quantity (BitsImax) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000308] The minimum I frame bit quantity estimation unit 213 derives a minimum I frame bit quantity (BitsImin) based on the bit rate (BR) derived by the packet analysis unit 20 (S205).
[000309] The minimum I frame bit quantity (BitsImin) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (23), which represents this characteristic: BitsImin = u7 + u8·exp(-BR/u9) ...(23) where BitsImin is the minimum I frame bit quantity, BR is the bit rate, and u7,..., u9 are the characteristic coefficients.
[000310] The I frame minimum bit quantity estimation unit 213 outputs the derived minimum I frame bit quantity (BitsImin) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000311] The P frame average bit quantity estimation unit 214 derives a P frame average bit quantity (BitsPave) based on the bit rate (BR) derived by the packet analysis unit 20 (S206).
[000312] The average P frame bit quantity (BitsPave) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (24), which represents this characteristic: BitsPave = u10 + u11·BR ...(24) where BitsPave is the average P frame bit quantity, BR is the bit rate, and u10 and u11 are the characteristic coefficients.
[000313] The P frame average bit quantity estimation unit 214 outputs the derived average P frame bit quantity (BitsPave) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000314] The P frame maximum bit quantity estimation unit 215 derives a P frame maximum bit quantity (BitsPmax) based on the bit rate (BR) derived by the packet analysis unit 20 (S207).
[000315] The maximum P frame bit quantity (BitsPmax) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (25), which represents this characteristic: BitsPmax = u12 + u13·BR ...(25) where BitsPmax is the maximum P frame bit quantity, BR is the bit rate, and u12 and u13 are the characteristic coefficients.
[000316] The P frame maximum bit quantity estimation unit 215 outputs the derived maximum P frame bit quantity (BitsPmax) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000317] The P frame minimum bit quantity estimation unit 216 derives a P frame minimum bit quantity (BitsPmin) based on the bit rate (BR) derived by the packet analysis unit 20 (S208).
[000318] The minimum bit amount of the P frame (BitsPmin) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (26) representing this characteristic: BitsPmin = u14 + u15-BR ...(26) where BitsPmin is the minimum bit quantity of the P frame, BR is the bit rate, and u14 and u15 are the characteristic coefficients.
[000319] The P frame minimum bit amount estimation unit 216 outputs the derived P frame minimum bit amount (BitsPmin) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000320] B-frame average bit quantity estimation unit 217 derives a B-frame average bit quantity (BitsBave) based on the bit rate (BR) derived by packet analysis unit 20 (S209).
[000321] The average B frame bit quantity (BitsBave) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (27) representing this characteristic: BitsBave = u16 + u17·BR ...(27) where BitsBave is the average B-frame bit quantity, BR is the bit rate, and u16 and u17 are the characteristic coefficients.
[000322] The B-frame average bit quantity estimation unit 217 outputs the derived average B frame bit quantity (BitsBave) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000323] B-frame maximum bit amount estimation unit 218 derives a maximum B-frame bit amount (BitsBmax) based on the bit rate (BR) derived by packet analysis unit 20 (S210).
[000324] The maximum B frame bit amount (BitsBmax) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (28) representing this characteristic: BitsBmax = u18 + u19·BR ...(28) where BitsBmax is the maximum B frame bit amount, BR is the bit rate, and u18 and u19 are the characteristic coefficients.
[000325] The B-frame maximum bit amount estimation unit 218 outputs the derived maximum B-frame bit amount (BitsBmax) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
[000326] B-frame minimum bit quantity estimation unit 219 derives a minimum B frame bit quantity (BitsBmin) based on the bit rate (BR) derived by packet analysis unit 20 (S211).
[000327] The minimum B frame bit quantity (BitsBmin) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (29) representing this characteristic: BitsBmin = u20 + u21·BR ...(29) where BitsBmin is the minimum B-frame bit quantity, BR is the bit rate, and u20 and u21 are the characteristic coefficients.
[000328] The B-frame minimum bit quantity estimation unit 219 outputs the derived minimum B-frame bit quantity (BitsBmin) to the difference encoded video quality estimation unit 224 and the difference packet loss video quality estimation unit 234.
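The frame characteristic estimations in equations (22) to (29) are simple closed-form functions of the bit rate. The following sketch, written in Python purely for illustration, shows one way these relations could be evaluated; the coefficient values and the function name frame_bit_characteristics are placeholders and not part of the apparatus, since the real characteristic coefficients u4,..., u21 are taken from the quality characteristic coefficient database.

import math

# Illustrative placeholder coefficients; real values come from the
# quality characteristic coefficient database for the given prerequisite.
U = {4: 900.0, 5: -850.0, 6: 4.0,    # BitsImax, equation (22)
     7: 300.0, 8: -280.0, 9: 6.0,    # BitsImin, equation (23)
     10: 20.0, 11: 15.0,             # BitsPave, equation (24)
     12: 40.0, 13: 25.0,             # BitsPmax, equation (25)
     14: 5.0, 15: 8.0,               # BitsPmin, equation (26)
     16: 10.0, 17: 9.0,              # BitsBave, equation (27)
     18: 25.0, 19: 14.0,             # BitsBmax, equation (28)
     20: 2.0, 21: 5.0}               # BitsBmin, equation (29)

def frame_bit_characteristics(br, u=U):
    # br: bit rate BR of the encoded video packet (e.g. in Mbps).
    return {
        "BitsImax": u[4] + u[5] * math.exp(-br / u[6]),    # equation (22)
        "BitsImin": u[7] + u[8] * math.exp(-br / u[9]),    # equation (23)
        "BitsPave": u[10] + u[11] * br,                    # equation (24)
        "BitsPmax": u[12] + u[13] * br,                    # equation (25)
        "BitsPmin": u[14] + u[15] * br,                    # equation (26)
        "BitsBave": u[16] + u[17] * br,                    # equation (27)
        "BitsBmax": u[18] + u[19] * br,                    # equation (28)
        "BitsBmin": u[20] + u[21] * br,                    # equation (29)
    }

print(frame_bit_characteristics(10.0))  # example call for BR = 10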
[000329] The average encoded video quality estimation unit 221 derives an average encoded video quality evaluation value (Vqcave) based on the bit rate (BR) derived by the packet analysis unit 20 (S212).
[000330] The average encoded video quality evaluation value (Vqcave) has a characteristic that it increases as the bit rate (BR) increases, and can be derived using equation (30) or (31) that represents this characteristic: Vqcave = u22 + u23exp(-BR/u24) ...(30) or Vqcave = 1 + u22 - u22/(1 + (BR/u23)^u24) ...(31) where Vqcave is the average encoded video quality evaluation value, BR is the bit rate, and u22,..., u24 are the characteristic coefficients.
[000331] The average encoded video quality estimation unit 221 outputs the derived average encoded video quality evaluation value (Vqcave) to the encoded video quality estimation unit 225.
[000332] The maximum coded video quality estimation unit 222 derives a maximum coded video quality evaluation value (Vqcmax) based on the bit rate (BR) derived by the packet analysis unit 20 (S213).
[000333] The maximum encoded video quality evaluation value (Vqcmax) has a characteristic that it increases as the bit rate (BR) increases, and can be derived using equation (32) or (33) which represents this characteristic: Vqcmax = u25 + u26exp(-BR/u27) ...(32) or Vqcmax = 1 + u25 - u25/(1 + (BR/u26)^u27) ...(33) where Vqcmax is the maximum encoded video quality evaluation value, BR is the bit rate, and u25,..., u27 are the characteristic coefficients.
[000334] The maximum encoded video quality estimation unit 222 outputs the derived maximum encoded video quality evaluation value (Vqcmax) to the encoded video quality estimation unit 225.
[000335] Minimum coded video quality estimation unit 223 derives a minimum coded video quality evaluation value (Vqcmin) based on the bit rate (BR) derived by packet analysis unit 20 (S214).
[000336] The minimum encoded video quality evaluation value (Vqcmin) has a characteristic that it increases as the bit rate (BR) increases, and can be derived using equation (34) or (35) that represents this characteristic: Vqcmin = u28 + u29exp(-BR/u30) ...(34) or Vqcmin = 1 + u28 - u28/(1 + (BR/u29)^u30) ...(35) where Vqcmin is the minimum encoded video quality evaluation value, BR is the bit rate, and u28,..., u30 are the characteristic coefficients.
[000337] Minimum coded video quality estimation unit 223 outputs the derived minimum coded video quality assessment value (Vqcmin) to coded video quality estimation unit 225.
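As a minimal numeric sketch of equations (30), (32), and (34) (the exponential forms; the alternative forms (31), (33), and (35) could be substituted in the same way), the Python fragment below derives the average, maximum, and minimum encoded video quality evaluation values from the bit rate. The coefficient values are placeholders chosen only so the example runs; real values are selected from the quality characteristic coefficient database.

import math

# Placeholder coefficients u22,..., u30 (illustrative only).
U22, U23, U24 = 4.2, -3.5, 5.0   # Vqcave, equation (30)
U25, U26, U27 = 4.6, -3.6, 5.5   # Vqcmax, equation (32)
U28, U29, U30 = 3.6, -2.8, 4.5   # Vqcmin, equation (34)

def coded_quality_characteristics(br):
    # br: bit rate BR of the encoded video packet.
    vqcave = U22 + U23 * math.exp(-br / U24)   # equation (30)
    vqcmax = U25 + U26 * math.exp(-br / U27)   # equation (32)
    vqcmin = U28 + U29 * math.exp(-br / U30)   # equation (34)
    return vqcave, vqcmax, vqcmin

print(coded_quality_characteristics(10.0))    # example call for BR = 10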
[000338] The difference encoded video quality estimation unit 224 derives a difference encoded video quality evaluation value (dVqc) based on the average encoded video quality evaluation value (Vqcave) calculated by the average encoded video quality estimation unit 221, the maximum encoded video quality evaluation value (Vqcmax) calculated by the maximum encoded video quality estimation unit 222, the minimum encoded video quality evaluation value (Vqcmin) calculated by the minimum encoded video quality estimation unit 223, the I frame bit quantity (BitsI), P frame bit quantity (BitsP), and B frame bit quantity (BitsB) calculated by the video frame type bit quantity calculation unit 206, the average I frame bit quantity (BitsIave) calculated by the I frame average bit quantity estimation unit 211, the maximum I frame bit quantity (BitsImax) calculated by the I frame maximum bit quantity estimation unit 212, the minimum I frame bit quantity (BitsImin) calculated by the I frame minimum bit quantity estimation unit 213, the average P frame bit quantity (BitsPave) calculated by the P frame average bit quantity estimation unit 214, the maximum P frame bit quantity (BitsPmax) calculated by the P frame maximum bit quantity estimation unit 215, the minimum P frame bit quantity (BitsPmin) calculated by the P frame minimum bit quantity estimation unit 216, the average B frame bit quantity (BitsBave) calculated by the B frame average bit quantity estimation unit 217, the maximum B frame bit quantity (BitsBmax) calculated by the B frame maximum bit quantity estimation unit 218, and the minimum B frame bit quantity (BitsBmin) calculated by the B frame minimum bit quantity estimation unit 219 (S215).
[000339] When the bit amount of the I frame (BitsI) is greater than the average bit amount of the I frame (BitsIave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsI - BitsIave)/(BitsImax - BitsIave). When the bit amount of the I frame (BitsI) is less than the average bit amount of the I frame (BitsIave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsI - BitsIave)/(BitsImin - BitsIave).
[000340] When the bit quantity of the P frame (BitsP) is greater than the average bit quantity of the P frame (BitsPave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsP - BitsPave)/(BitsPmin - BitsPave). When the bit quantity of the P frame (BitsP) is less than the average bit quantity of the P frame (BitsPave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsP - BitsPave)/(BitsPmax - BitsPave).
[000341] When the bit quantity of the B frame (BitsB) is greater than the average bit quantity of the B frame (BitsBave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsB - BitsBave)/(BitsBmin - BitsBave). When the bit quantity of the B frame (BitsB) is less than the average bit quantity of the B frame (BitsBave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsB - BitsBave)/(BitsBmax - BitsBave).
[000342] The difference encoded video quality estimation unit 224 can derive the difference encoded video quality evaluation value (dVqc) using equation (36), which represents these characteristics of the difference encoded video quality evaluation value: dVqc = u31 + u32·X + u33·Y + u34·Z ...(36) where dVqc is the difference encoded video quality evaluation value, X is the degree of influence of the I frame bit quantity on the difference encoded video quality evaluation value, Y is the degree of influence of the P frame bit quantity on the difference encoded video quality evaluation value, Z is the degree of influence of the B frame bit quantity on the difference encoded video quality evaluation value, and u31,..., u34 are the characteristic coefficients.
[000343] The difference encoded video quality estimation unit 224 outputs the derived difference encoded video quality evaluation value (dVqc) to the encoded video quality estimation unit 225.
[000344] X, Y, and Z in equation (36) can be derived using equations (37):
[For BitsI > BitsIave] X = (Vqcmax - Vqcave)·(BitsI - BitsIave)/(BitsImax - BitsIave)
[For BitsI < BitsIave] X = (Vqcmin - Vqcave)·(BitsI - BitsIave)/(BitsImin - BitsIave)
[For BitsP < BitsPave] Y = (Vqcmax - Vqcave)·(BitsP - BitsPave)/(BitsPmax - BitsPave)
[For BitsP > BitsPave] Y = (Vqcmin - Vqcave)·(BitsP - BitsPave)/(BitsPmin - BitsPave)
[For BitsB < BitsBave] Z = (Vqcmax - Vqcave)·(BitsB - BitsBave)/(BitsBmax - BitsBave)
[For BitsB > BitsBave] Z = (Vqcmin - Vqcave)·(BitsB - BitsBave)/(BitsBmin - BitsBave)
...(37)
[000345] The encoded video quality estimation unit 225 derives an encoded video quality evaluation value (Vqc) using equation (38), based on the average encoded video quality evaluation value (Vqcave) calculated by the average encoded video quality estimation unit 221 and the difference encoded video quality evaluation value (dVqc) calculated by the difference encoded video quality estimation unit 224 (S216): Vqc = Vqcave + dVqc ...(38)
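The sketch below illustrates how the degrees of influence X, Y, and Z of equations (37), the difference encoded video quality evaluation value dVqc of equation (36), and the combination of equation (38) could be computed. Note the asymmetry described above: an I frame bit quantity above its average is mapped onto the maximum quality range, while P and B frame bit quantities above their averages are mapped onto the minimum quality range. All numeric inputs and the coefficients u31,..., u34 here are placeholders, and treating the equal-to-average case as zero influence is an assumption.

def influence(bits, ave, bmax, bmin, qave, qmax, qmin, larger_maps_to_max):
    # Degree of influence of one frame type's bit quantity, equations (37).
    if bits == ave:
        return 0.0  # assumption for the boundary case
    if (bits > ave) == larger_maps_to_max:
        return (qmax - qave) * (bits - ave) / (bmax - ave)
    return (qmin - qave) * (bits - ave) / (bmin - ave)

def difference_coded_quality(bits, char, vqcave, vqcmax, vqcmin,
                             u31=0.0, u32=1.0, u33=1.0, u34=1.0):
    # bits: {"I": BitsI, "P": BitsP, "B": BitsB} from the packet analysis unit.
    # char: frame characteristics (BitsIave, BitsImax, ...) as in equations (22)-(29).
    x = influence(bits["I"], char["BitsIave"], char["BitsImax"], char["BitsImin"],
                  vqcave, vqcmax, vqcmin, larger_maps_to_max=True)
    y = influence(bits["P"], char["BitsPave"], char["BitsPmax"], char["BitsPmin"],
                  vqcave, vqcmax, vqcmin, larger_maps_to_max=False)
    z = influence(bits["B"], char["BitsBave"], char["BitsBmax"], char["BitsBmin"],
                  vqcave, vqcmax, vqcmin, larger_maps_to_max=False)
    dvqc = u31 + u32 * x + u33 * y + u34 * z         # equation (36)
    return vqcave + dvqc                             # equation (38): Vqc

# Example with placeholder values.
char = {"BitsIave": 500.0, "BitsImax": 800.0, "BitsImin": 200.0,
        "BitsPave": 170.0, "BitsPmax": 290.0, "BitsPmin": 85.0,
        "BitsBave": 100.0, "BitsBmax": 165.0, "BitsBmin": 52.0}
print(difference_coded_quality({"I": 620.0, "P": 150.0, "B": 110.0},
                               char, vqcave=4.0, vqcmax=4.5, vqcmin=3.4))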
[000346] The encoded video quality estimation unit 225 outputs the derived encoded video quality evaluation value (Vqc) to the average packet loss video quality estimation unit 231, the maximum packet loss video quality estimation unit 232, and the minimum packet loss video quality estimation unit 233.
[000347] As shown in Figure 18, the average packet loss video quality estimation unit 231 derives an average packet loss video quality evaluation value (Vqave) based on the number of lost video frames (DF) derived by the packet analysis unit 20 and the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 (S217).
[000348] The average packet loss video quality evaluation value (Vqave) has a characteristic in which it decreases as the number of lost video frames (DF) increases, and can be derived using equation (39) which represents this characteristic: Vqave = 1 + (Vqc - 1)·((1 - u35)exp(-DF/u36) + u35exp(-DF/u37)) ...(39) where Vqc is the encoded video quality evaluation value, Vqave is the average packet loss video quality evaluation value, DF is the number of lost video frames, and u35,..., u37 are the characteristic coefficients.
[000349] Average packet loss video quality estimation unit 231 outputs the derived average packet loss video quality assessment value (Vqave) to video quality estimation unit 235.
[000350] The maximum packet loss video quality estimation unit 232 derives a maximum packet loss video quality evaluation value (Vqmax) based on the number of lost video frames (DF) derived by the packet analysis unit 20 and the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 (S218).
[000351] The maximum packet loss video quality evaluation value (Vqmax) has a characteristic in which it decreases as the number of lost video frames (DF) increases, and can be derived using equation (40) which represents this characteristic: Vqmax = 1 + (Vqc - 1)·((1 - u38)exp(-DF/u39) + u38exp(-DF/u40)) ...(40) where Vqc is the encoded video quality evaluation value, Vqmax is the maximum packet loss video quality evaluation value, DF is the number of lost video frames, and u38,..., u40 are the characteristic coefficients.
[000352] The maximum packet loss video quality estimation unit 232 outputs the derived maximum packet loss video quality evaluation value (Vqmax) to the video quality estimation unit 235.
[000353] Minimum packet loss video quality estimation unit 233 derives a minimum packet loss video quality assessment value (Vqmin) based on the number of lost video frames (DF) derived by the unit of packet analysis 20 and the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 (S219).
[000354] The minimum packet loss video quality evaluation value (Vqmin) has a characteristic in which it decreases as the number of lost video frames (DF) increases, and can be derived using equation (41) which represents this characteristic: Vqmin = 1 + (Vqc - 1)·((1 - u41)exp(-DF/u42) + u41exp(-DF/u43)) ...(41) where Vqc is the encoded video quality evaluation value, Vqmin is the minimum packet loss video quality evaluation value, DF is the number of lost video frames, and u41,..., u43 are the characteristic coefficients.
[000355] Minimum packet loss video quality estimation unit 233 outputs the derived minimum packet loss video quality assessment value (Vqmin) to video quality estimation unit 235.
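As a sketch of equations (39) to (41), the fragment below derives the average, maximum, and minimum packet loss video quality evaluation values from the encoded video quality evaluation value Vqc and the number of lost video frames DF. The coefficients u35,..., u43 are placeholders, as is the function name; real values come from the quality characteristic coefficient database.

import math

def packet_loss_quality_characteristics(vqc, df,
                                        u35=0.3, u36=2.0, u37=20.0,
                                        u38=0.4, u39=3.0, u40=30.0,
                                        u41=0.2, u42=1.5, u43=15.0):
    # vqc: encoded video quality evaluation value; df: number of lost video frames.
    vqave = 1 + (vqc - 1) * ((1 - u35) * math.exp(-df / u36) + u35 * math.exp(-df / u37))  # equation (39)
    vqmax = 1 + (vqc - 1) * ((1 - u38) * math.exp(-df / u39) + u38 * math.exp(-df / u40))  # equation (40)
    vqmin = 1 + (vqc - 1) * ((1 - u41) * math.exp(-df / u42) + u41 * math.exp(-df / u43))  # equation (41)
    return vqave, vqmax, vqmin

print(packet_loss_quality_characteristics(vqc=4.1, df=3))  # example call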
[000356] The difference packet loss video quality estimation unit 234 derives a difference packet loss video quality evaluation value (dVq) based on the average packet loss video quality evaluation value (Vqave) calculated by the average packet loss video quality estimation unit 231, the maximum packet loss video quality evaluation value (Vqmax) calculated by the maximum packet loss video quality estimation unit 232, the minimum packet loss video quality evaluation value (Vqmin) calculated by the minimum packet loss video quality estimation unit 233, the I frame bit quantity (BitsI), P frame bit quantity (BitsP), and B frame bit quantity (BitsB) calculated by the video frame type bit quantity calculation unit 206, the average I frame bit quantity (BitsIave) calculated by the I frame average bit quantity estimation unit 211, the maximum I frame bit quantity (BitsImax) calculated by the I frame maximum bit quantity estimation unit 212, the minimum I frame bit quantity (BitsImin) calculated by the I frame minimum bit quantity estimation unit 213, the average P frame bit quantity (BitsPave) calculated by the P frame average bit quantity estimation unit 214, the maximum P frame bit quantity (BitsPmax) calculated by the P frame maximum bit quantity estimation unit 215, the minimum P frame bit quantity (BitsPmin) calculated by the P frame minimum bit quantity estimation unit 216, the average B frame bit quantity (BitsBave) calculated by the B frame average bit quantity estimation unit 217, the maximum B frame bit quantity (BitsBmax) calculated by the B frame maximum bit quantity estimation unit 218, and the minimum B frame bit quantity (BitsBmin) calculated by the B frame minimum bit quantity estimation unit 219 (S220).
[000357] When the bit quantity of the I frame (BitsI) is greater than the average bit quantity of the I frame (BitsIave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsI - BitsIave)/(BitsImax - BitsIave). When the bit quantity of the I frame (BitsI) is less than the average bit quantity of the I frame (BitsIave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsI - BitsIave)/(BitsImin - BitsIave).
[000358] When the bit quantity of the P frame (BitsP) is greater than the average bit quantity of the P frame (BitsPave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsP - BitsPave)/(BitsPmin - BitsPave). When the bit quantity of the P frame (BitsP) is less than the average bit quantity of the P frame (BitsPave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsP - BitsPave)/(BitsPmax - BitsPave).
[000359] When the bit quantity of the B frame (BitsB) is greater than the average bit quantity of the B frame (BitsBave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsB - BitsBave)/(BitsBmin - BitsBave). When the bit quantity of the B frame (BitsB) is less than the average bit quantity of the B frame (BitsBave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsB - BitsBave)/(BitsBmax - BitsBave).
[000360] The difference packet loss video quality estimation unit 234 can derive the difference packet loss video quality evaluation value (dVq) using equation (42), which represents these characteristics of the difference packet loss video quality evaluation value: dVq = u44 + u45·S + u46·T + u47·U ...(42) where dVq is the difference packet loss video quality evaluation value, S is the degree of influence of the I frame bit quantity on the difference packet loss video quality evaluation value, T is the degree of influence of the P frame bit quantity on the difference packet loss video quality evaluation value, U is the degree of influence of the B frame bit quantity on the difference packet loss video quality evaluation value, and u44,..., u47 are the characteristic coefficients.
[000361] The difference packet loss video quality estimation unit 234 outputs the derived difference packet loss video quality evaluation value (dVq) to the video quality estimation unit 235.
[000362] S, T, and U in equation (42) can be derived using equations (43):
[For BitsI > BitsIave] S = (Vqmax - Vqave)·(BitsI - BitsIave)/(BitsImax - BitsIave)
[For BitsI < BitsIave] S = (Vqmin - Vqave)·(BitsI - BitsIave)/(BitsImin - BitsIave)
[For BitsP < BitsPave] T = (Vqmax - Vqave)·(BitsP - BitsPave)/(BitsPmax - BitsPave)
[For BitsP > BitsPave] T = (Vqmin - Vqave)·(BitsP - BitsPave)/(BitsPmin - BitsPave)
[For BitsB < BitsBave] U = (Vqmax - Vqave)·(BitsB - BitsBave)/(BitsBmax - BitsBave)
[For BitsB > BitsBave] U = (Vqmin - Vqave)·(BitsB - BitsBave)/(BitsBmin - BitsBave)
...(43)
[000363] After the difference packet loss video quality estimation unit 234 derives the difference packet loss video quality evaluation value (dVq), the video quality estimation unit 235 derives the video quality evaluation value (Vq) of the video content using equation (44), based on the average packet loss video quality evaluation value (Vqave) calculated by the average packet loss video quality estimation unit 231 and the difference packet loss video quality evaluation value (dVq) calculated by the difference packet loss video quality estimation unit 234 (S221): Vq = Vqave + dVq ...(44)
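The same influence structure reappears in equations (42) to (44): S, T, and U are computed from the frame bit quantities in the same way as X, Y, and Z, but against the packet loss quality values Vqave, Vqmax, and Vqmin rather than the encoded quality values. A compact sketch follows; the coefficients u44,..., u47, the helper name, and the zero value at the exact average are assumptions made only for illustration.

def influence(bits, ave, bmax, bmin, qave, qmax, qmin, larger_maps_to_max):
    # Degree of influence of one frame type's bit quantity, equations (43).
    if bits == ave:
        return 0.0  # assumption for the boundary case
    if (bits > ave) == larger_maps_to_max:
        return (qmax - qave) * (bits - ave) / (bmax - ave)
    return (qmin - qave) * (bits - ave) / (bmin - ave)

def packet_loss_video_quality(bits, char, vqave, vqmax, vqmin,
                              u44=0.0, u45=1.0, u46=1.0, u47=1.0):
    s = influence(bits["I"], char["BitsIave"], char["BitsImax"], char["BitsImin"],
                  vqave, vqmax, vqmin, larger_maps_to_max=True)
    t = influence(bits["P"], char["BitsPave"], char["BitsPmax"], char["BitsPmin"],
                  vqave, vqmax, vqmin, larger_maps_to_max=False)
    u = influence(bits["B"], char["BitsBave"], char["BitsBmax"], char["BitsBmin"],
                  vqave, vqmax, vqmin, larger_maps_to_max=False)
    dvq = u44 + u45 * s + u46 * t + u47 * u          # equation (42)
    return vqave + dvq                               # equation (44): Vq

# Example with placeholder values.
char = {"BitsIave": 500.0, "BitsImax": 800.0, "BitsImin": 200.0,
        "BitsPave": 170.0, "BitsPmax": 290.0, "BitsPmin": 85.0,
        "BitsBave": 100.0, "BitsBmax": 165.0, "BitsBmin": 52.0}
print(packet_loss_video_quality({"I": 620.0, "P": 150.0, "B": 110.0},
                                char, vqave=3.6, vqmax=4.0, vqmin=3.0))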
[000364] As the characteristic coefficients (u1,..., u47) used to derive the average bit quantities, maximum bit quantities, and minimum bit quantities of the respective video frame types, the average encoded video quality evaluation value, the maximum encoded video quality evaluation value, the minimum encoded video quality evaluation value, the difference encoded video quality evaluation value, the average packet loss video quality evaluation value, the maximum packet loss video quality evaluation value, the minimum packet loss video quality evaluation value, and the difference packet loss video quality evaluation value, relevant characteristic coefficients are selected from a quality characteristic coefficient database in a storage unit (not shown) arranged in the video quality estimating apparatus 2.
[000365] In an example of the quality characteristic coefficient database shown in Figure 19, the characteristic coefficient is described in correspondence with a prerequisite such as the video CODEC method.
[000366] The video quality evaluation value depends on the implementation of a video CODEC. For example, the video quality evaluation value differs between an H.264 encoded video content and an MPEG2 encoded video content even at the same bit rate. Similarly, the video quality evaluation value depends on prerequisites including video format and frame rate. In the quality characteristic coefficient database shown in Figure 19, the characteristic coefficient is described for each prerequisite.
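A hypothetical sketch of such a quality characteristic coefficient database is given below; the keys (CODEC method, video format, frame rate) mirror the prerequisites of Figure 19, while the table contents, the key values, and the function name are invented placeholders.

# Hypothetical coefficient database keyed by prerequisite; the numeric
# values below are placeholders, not values from this specification.
QUALITY_COEFFICIENT_DB = {
    ("H.264", "1920x1080", 30): {"u22": 4.2, "u23": -3.5, "u24": 5.0},
    ("MPEG2", "1920x1080", 30): {"u22": 3.8, "u23": -3.1, "u24": 3.0},
}

def select_coefficients(codec, video_format, frame_rate,
                        db=QUALITY_COEFFICIENT_DB):
    # Returns the characteristic coefficients registered for the prerequisite.
    key = (codec, video_format, frame_rate)
    if key not in db:
        raise KeyError("no characteristic coefficients for prerequisite %r" % (key,))
    return db[key]

print(select_coefficients("H.264", "1920x1080", 30))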
[000367] In this way, the video quality evaluation value of each video in video communication services can be estimated based on the header information of an input packet, using a bit rate extracted from the input packet, the bit quantities derived for the respective video frame types after specifying the video frame types, and the number of lost video frames. When deriving a video quality value, arithmetic processing does not need to be performed for all the pixels that make up a video frame. In other words, the video quality value can be derived by performing arithmetic processing on packet header information, which is a relatively small amount of information. This can suppress the arithmetic processing cost.
[000368] The video communication service provider can easily determine whether a service being provided maintains a predetermined or higher quality for the user, and can capture and manage in real time the real quality of the service being provided. Third Embodiment
[000369] A video quality estimating apparatus according to the third embodiment of the present invention has the same arrangement as the video quality estimating apparatus 2 described in the second embodiment. In addition, the video quality estimating apparatus according to the third embodiment implements an objective video quality assessment by deriving a video quality evaluation value that quantitatively represents the video quality using the frame characteristic of a specific video frame type.
[000370] In the following description, the video quality estimating apparatus derives a video quality evaluation value that uses the characteristics of I frames out of I, P, and B frames that serve as video frame types.
[000371] As shown in Figure 20, a video quality estimating apparatus 3 according to the embodiment includes a packet analysis unit 20 that derives a bit rate, the number of lost video frames, and the I frame bit quantity from an input packet, a frame characteristic estimation unit 21 which derives the frame characteristics of the I frames, an encoding quality estimation unit 22 which derives an encoded video quality evaluation value from the bit rate, the I frame bit quantity, and the frame characteristics of the I frames, and a packet loss quality estimation unit 23 which derives a video quality evaluation value from the number of lost video frames, the encoded video quality evaluation value, and the frame characteristics of the I frames.
[000372] Note that the constituent components of the video quality estimating apparatus 3 according to the embodiment have the same arrangement and functions as those of the video quality estimating apparatus 2 described in the second embodiment. Thus, the same numerical references denote the same parts, and a detailed description of these will not be repeated.
[000373] A video quality evaluation value derivation operation by the video quality estimating apparatus 3 according to the embodiment will be described with reference to Figures 21 and 22.
[000374] As shown in Figure 21, the packet analysis unit 20 of the video quality estimating apparatus 3 captures an input packet (S301). The packet analysis unit 20 derives the bit rate (BR) of an encoded video packet, the amount of bits of the I frame (BitsI), and the number of lost video frames (DF) from the captured packet (S302).
[000375] Bit rate (BR) derived by packet analysis unit 20 is input into frame characteristic estimation unit 21 and encoding quality estimation unit 22. I frame bit quantity (BitsI) is entered into encoding quality estimating unit 22 and packet loss quality estimating unit 23. The number of lost video frames (DF) is entered into packet loss quality estimating unit 23.
[000376] After the bit rate (BR) derived by the packet analysis unit 20 is entered into the frame characteristic estimation unit 21, an I-frame average bit quantity estimation unit 211 of the frame characteristic estimation unit 21 derives an I-frame average bit quantity (BitsIave) based on the entered bit rate (BR) (S303).
[000377] The average bit amount of the I frame (BitsIave) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (50) representing this characteristic: BitsIave = w1 + w2exp(-BR/w3) ...(50) where BitsIave is the average amount of bits in the I frame, BR is the bit rate, and w1,..., w3 are the characteristic coefficients.
[000378] The I-frame average bit quantity estimating unit 211 outputs the derived I frame average bit quantity (BitsIave) to the encoding quality estimating unit 22 and packet loss quality estimating unit 23 .
[000379] After the bit rate (BR) derived by the packet analysis unit 20 is entered into the frame characteristic estimation unit 21, an I-frame maximum bit quantity estimation unit 212 of the frame characteristic estimation unit 21 derives a maximum bit amount of the I frame (BitsImax) based on the entered bit rate (BR) (S304).
[000380] The maximum bit amount of the I frame (BitsImax) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (51) representing this characteristic: BitsImax = w4 + w5exp(-BR/w6) ...(51) where BitsImax is the maximum amount of bits in the I frame, BR is the bitrate, and w4,..., w6 are the characteristic coefficients.
[000381] I-frame maximum bit quantity estimation unit 212 outputs the derived I frame maximum bit quantity (BitsImax) to encoding quality estimating unit 22 and packet loss quality estimating unit 23 .
[000382] After the bit rate (BR) derived by the packet analysis unit 20 is entered into the frame characteristic estimation unit 21, an I-frame minimum bit quantity estimation unit 213 of the frame characteristic estimation unit 21 derives a minimum amount of bits of the I frame (BitsImin) based on the entered bit rate (BR) (S305).
[000383] The minimum bit amount of the I frame (BitsImin) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (52) that represents this characteristic: BitsImin = w7 + w8exp(-BR/w9) ...(52) where BitsImin is the minimum amount of bits in the I frame, BR is the bitrate, and w7,..., w9 are the characteristic coefficients.
[000384] Minimum I-frame bit quantity estimation unit 213 outputs the derived I-frame minimum bit quantity (BitsImin) to encoding quality estimating unit 22 and packet loss quality estimating unit 23 .
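For this embodiment only the I-frame characteristics are needed. A minimal sketch of equations (50) to (52), with placeholder values for the characteristic coefficients w1,..., w9 and an illustrative function name, is given below.

import math

W = {1: 450.0, 2: -400.0, 3: 5.0,   # BitsIave, equation (50)
     4: 900.0, 5: -850.0, 6: 4.0,   # BitsImax, equation (51)
     7: 300.0, 8: -280.0, 9: 6.0}   # BitsImin, equation (52)

def i_frame_characteristics(br, w=W):
    # br: bit rate BR of the encoded video packet.
    bits_iave = w[1] + w[2] * math.exp(-br / w[3])   # equation (50)
    bits_imax = w[4] + w[5] * math.exp(-br / w[6])   # equation (51)
    bits_imin = w[7] + w[8] * math.exp(-br / w[9])   # equation (52)
    return bits_iave, bits_imax, bits_imin

print(i_frame_characteristics(10.0))  # example call for BR = 10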
[000385] After the bit rate (BR) derived by the packet analysis unit 20 is entered into the encoding quality estimating unit 22, an average encoded video quality estimating unit 221 of the encoding quality estimating unit 22 derives an average encoded video quality evaluation value (Vqcave) based on the entered bit rate (BR) (S306).
[000386] The average encoded video quality evaluation value (Vqcave) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (53) or (54) which represents this characteristic: Vqcave = w10 + w11exp(-BR/w12) ...(53) or Vqcave = 1 + w10 - w10/(1 + (BR/w11)^w12) ...(54) where Vqcave is the average encoded video quality evaluation value, BR is the bit rate, and w10,..., w12 are the characteristic coefficients.
[000387] The average encoded video quality estimating unit 221 outputs the derived average encoded video quality assessment value (Vqcave) to an encoded video quality estimating unit 225.
[000388] After the bit rate (BR) derived by the packet analysis unit 20 is entered into the encoding quality estimating unit 22, a maximum encoded video quality estimating unit 222 of the encoding quality estimating unit 22 derives a maximum encoded video quality evaluation value (Vqcmax) based on the entered bit rate (BR) (S307).
[000389] The maximum encoded video quality evaluation value (Vqcmax) has a characteristic that it increases as the bit rate (BR) increases, and can be derived using equation (55) or (56) that represents this characteristic: Vqcmax = w13 + w14exp(-BR/w15) ...(55) or Vqcmax = 1 + w13 - w13/(1 + (BR/w14)^w15) ...(56) where Vqcmax is the maximum encoded video quality evaluation value, BR is the bit rate, and w13,..., w15 are the characteristic coefficients.
[000390] The maximum coded video quality estimator unit 222 outputs the derived maximum coded video quality estimation value (Vqcmax) to the coded video quality estimator unit 225.
[000391] After the bit rate (BR) derived by the packet analysis unit 20 is entered into the encoding quality estimating unit 22, the minimum encoded video quality estimating unit 223 of the encoding quality estimating unit 22 derives a minimum encoded video quality evaluation value (Vqcmin) based on the entered bit rate (BR) (S308).
[000392] The minimum encoded video quality evaluation value (Vqcmin) has a characteristic in which it increases as the bit rate (BR) increases, and can be derived using equation (57) or (58) that represents this characteristic: Vqcmin = w16 + w17exp(-BR/w18) ...(57) or Vqcmin = 1 + w16 - w16/(1 + (BR/w17)^w18) ...(58) where Vqcmin is the minimum encoded video quality evaluation value, BR is the bit rate, and w16,..., w18 are the characteristic coefficients.
[000393] Minimum coded video quality estimation unit 223 outputs the derived minimum coded video quality assessment value (Vqcmin) to coded video quality estimation unit 225.
[000394] A difference encoded video quality estimation unit 224 of the encoding quality estimation unit 22 derives a difference encoded video quality evaluation value (dVqc) based on the average encoded video quality evaluation value (Vqcave) calculated by the average encoded video quality estimation unit 221, the maximum encoded video quality evaluation value (Vqcmax) calculated by the maximum encoded video quality estimation unit 222, the minimum encoded video quality evaluation value (Vqcmin) calculated by the minimum encoded video quality estimation unit 223, the I frame bit quantity (BitsI) calculated by the video frame type bit quantity calculation unit 206, the average I-frame bit quantity (BitsIave) calculated by the I-frame average bit quantity estimation unit 211, the maximum I-frame bit quantity (BitsImax) calculated by the I-frame maximum bit quantity estimation unit 212, and the minimum I-frame bit quantity (BitsImin) calculated by the I-frame minimum bit quantity estimation unit 213 (S309).
[000395] When the bit amount of the I frame (BitsI) is greater than the average bit amount of the I frame (BitsIave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmax - Vqcave)·(BitsI - BitsIave)/(BitsImax - BitsIave).
[000396] When the bit amount of the I frame (BitsI) is less than the average bit amount of the I frame (BitsIave), the difference encoded video quality evaluation value (dVqc) becomes proportional to (Vqcmin - Vqcave)·(BitsI - BitsIave)/(BitsImin - BitsIave).
[000397] Equation (57) represents the characteristic of the difference encoded video quality evaluation value (dVqc), and the difference encoded video quality estimation unit 224 can derive the difference encoded video quality evaluation value (dVqc) using equation (57): dVqc = w19 + w20·x ...(57) where dVqc is the difference encoded video quality evaluation value, x is the degree of influence of the I frame bit quantity on the difference encoded video quality evaluation value, and w19 and w20 are the characteristic coefficients.
[000398] The difference encoded video quality estimation unit 224 outputs the derived difference encoded video quality evaluation value (dVqc) to the encoded video quality estimation unit 225. x in equation (57) can be derived using equations (58):
(For BitsI > BitsIave) x = (Vqcmax - Vqcave)·(BitsI - BitsIave)/(BitsImax - BitsIave)
(For BitsI < BitsIave) x = (Vqcmin - Vqcave)·(BitsI - BitsIave)/(BitsImin - BitsIave)
...(58)
[000399] The encoded video quality estimation unit 225 derives an encoded video quality evaluation value (Vqc) using equation (59), based on the average encoded video quality evaluation value (Vqcave) calculated by the average encoded video quality estimation unit 221 and the difference encoded video quality evaluation value (dVqc) calculated by the difference encoded video quality estimation unit 224 (S310): Vqc = Vqcave + dVqc ...(59)
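A sketch of this I-frame-only encoding quality path (the x relations above, dVqc, and the combination of equation (59)) is given below; w19, w20, every numeric input, and the treatment of the equal-to-average case as zero influence are placeholders or assumptions made only for illustration.

def coded_video_quality_i_only(bits_i, bits_iave, bits_imax, bits_imin,
                               vqcave, vqcmax, vqcmin, w19=0.0, w20=1.0):
    # Degree of influence x of the I-frame bit quantity.
    if bits_i == bits_iave:
        x = 0.0  # assumption for the boundary case
    elif bits_i > bits_iave:
        x = (vqcmax - vqcave) * (bits_i - bits_iave) / (bits_imax - bits_iave)
    else:
        x = (vqcmin - vqcave) * (bits_i - bits_iave) / (bits_imin - bits_iave)
    dvqc = w19 + w20 * x        # difference encoded video quality evaluation value
    return vqcave + dvqc        # equation (59): Vqc = Vqcave + dVqc

print(coded_video_quality_i_only(bits_i=620.0, bits_iave=500.0,
                                 bits_imax=800.0, bits_imin=200.0,
                                 vqcave=4.0, vqcmax=4.5, vqcmin=3.4))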
[000403] The encoded video quality estimating unit 225 outputs the derived encoded video quality assessment value (Vqc) to the packet loss quality estimating unit 23.
[000404] As shown in Figure 22, after the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 is output to the packet loss quality estimation unit 23, an average packet loss video quality estimation unit 231 of the packet loss quality estimation unit 23 derives an average packet loss video quality evaluation value (Vqave) based on the number of lost video frames (DF) derived by the packet analysis unit 20 and the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 (S311).
[000405] The average packet loss video quality evaluation value (Vqave) has a characteristic that it decreases as the number of lost video frames (DF) increases, and can be derived using equation (60) which represents this characteristic: Vqave = 1 + (Vqc - 1)·((1 - w21)exp(-DF/w22) + w21exp(-DF/w23)) ...(60) where Vqc is the encoded video quality evaluation value, Vqave is the average packet loss video quality evaluation value, DF is the number of lost video frames, and w21,..., w23 are the characteristic coefficients.
[000406] Average packet loss video quality estimation unit 231 outputs the derived average packet loss video quality assessment value (Vqave) to a video quality estimation unit 235.
[000407] A maximum packet loss video quality estimation unit 232 derives a maximum packet loss video quality evaluation value (Vqmax) based on the number of lost video frames (DF) derived by the packet analysis unit 20 and the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 (S312).
[000408] The maximum packet loss video quality evaluation value (Vqmax) has a characteristic that it decreases as the number of lost video frames (DF) increases, and can be derived using equation (61) which represents this characteristic: Vqmax = 1 + (Vqc - 1)·((1 - w24)exp(-DF/w25) + w24exp(-DF/w26)) ...(61) where Vqc is the encoded video quality evaluation value, Vqmax is the maximum packet loss video quality evaluation value, DF is the number of lost video frames, and w24,..., w26 are the characteristic coefficients.
[000409] The maximum packet loss video quality estimator unit 232 outputs the derived maximum packet loss video quality assessment value (Vqmax) to the video quality estimator unit 235.
[000410] A minimum packet loss video quality estimation unit 233 derives a minimum packet loss video quality assessment value (Vqmin) based on the number of lost video frames (DF) derived by the unit of packet analysis 20 and the encoded video quality evaluation value (Vqc) derived by the encoded video quality estimation unit 225 (S313).
[000411] The minimum packet loss (Vqmin) video quality rating value has a characteristic that it decreases as the number of lost video frames (DF) increases, and can be derived using equation (62 ) which represents this characteristic: Vqmin = 1 + (Vqc - 1)((1 - w27)exp(-DF/w28) + w27exp(-DF/w29)) ...(62) where Vqc is the evaluation value of encoded video quality, Vqmin is the minimum packet loss video quality evaluation value, DF is the number of video frames lost, and w27,..., w29 are the characteristic coefficients.
[000412] Minimum packet loss video quality estimation unit 233 outputs the derived minimum packet loss video quality assessment value (Vqmin) to video quality estimation unit 235.
[000413] A difference packet loss video quality estimation unit 234 derives a difference packet loss video quality evaluation value (dVq) based on the average packet loss video quality evaluation value (Vqave) calculated by the average packet loss video quality estimation unit 231, the maximum packet loss video quality evaluation value (Vqmax) calculated by the maximum packet loss video quality estimation unit 232, the minimum packet loss video quality evaluation value (Vqmin) calculated by the minimum packet loss video quality estimation unit 233, the I-frame bit quantity (BitsI) calculated by the video frame type bit quantity calculation unit 206, the average I-frame bit quantity (BitsIave) calculated by the I-frame average bit quantity estimation unit 211, the maximum I-frame bit quantity (BitsImax) calculated by the I-frame maximum bit quantity estimation unit 212, and the minimum I-frame bit quantity (BitsImin) calculated by the I-frame minimum bit quantity estimation unit 213 (S314).
[000414] When the amount of bits of the I frame (BitsI) is greater than the average amount of bits of the I frame (BitsIave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmax - Vqave)·(BitsI - BitsIave)/(BitsImax - BitsIave).
[000415] When the amount of bits of the I frame (BitsI) is less than the average amount of bits of the I frame (BitsIave), the difference packet loss video quality evaluation value (dVq) becomes proportional to (Vqmin - Vqave)·(BitsI - BitsIave)/(BitsImin - BitsIave).
[000416] Equation (63) represents the characteristic of the difference packet loss video quality evaluation value (dVq), and the difference packet loss video quality estimation unit 234 can derive the difference packet loss video quality evaluation value (dVq) using equation (63): dVq = w30 + w31·s ...(63) where dVq is the difference packet loss video quality evaluation value, s is the degree of influence of the I-frame bit quantity on the difference packet loss video quality evaluation value, and w30 and w31 are the characteristic coefficients.
[000417] The difference packet loss video quality estimation unit 234 outputs the derived difference packet loss video quality evaluation value (dVq) to the video quality estimation unit 235.
[000418] s in equation (63) can be derived using equations (64):
(For BitsI > BitsIave) s = (Vqmax - Vqave)·(BitsI - BitsIave)/(BitsImax - BitsIave)
(For BitsI < BitsIave) s = (Vqmin - Vqave)·(BitsI - BitsIave)/(BitsImin - BitsIave)
...(64)
[000419] The video quality estimation unit 235 derives the video quality evaluation value (Vq) of the video content using equation (65) based on the average packet loss video quality evaluation value (Vqave) calculated by the average packet loss video quality estimation unit 231 and the difference packet loss video quality evaluation value (dVq) calculated by the difference packet loss video quality estimation unit 234 (S315): Vq = Vqave + dVq ...(65)
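The packet loss path of this embodiment can be sketched end to end in the same way: equations (60) to (62) give Vqave, Vqmax, and Vqmin from Vqc and DF, the s relation of equations (64) and equation (63) give dVq, and equation (65) combines them. The coefficients w21,..., w31 below are placeholders, as are the function name and the zero influence at the exact average.

import math

def video_quality_i_only(vqc, df, bits_i, bits_iave, bits_imax, bits_imin,
                         w21=0.3, w22=2.0, w23=20.0, w24=0.4, w25=3.0, w26=30.0,
                         w27=0.2, w28=1.5, w29=15.0, w30=0.0, w31=1.0):
    vqave = 1 + (vqc - 1) * ((1 - w21) * math.exp(-df / w22) + w21 * math.exp(-df / w23))  # equation (60)
    vqmax = 1 + (vqc - 1) * ((1 - w24) * math.exp(-df / w25) + w24 * math.exp(-df / w26))  # equation (61)
    vqmin = 1 + (vqc - 1) * ((1 - w27) * math.exp(-df / w28) + w27 * math.exp(-df / w29))  # equation (62)
    if bits_i == bits_iave:
        s = 0.0  # assumption for the boundary case
    elif bits_i > bits_iave:
        s = (vqmax - vqave) * (bits_i - bits_iave) / (bits_imax - bits_iave)   # equations (64)
    else:
        s = (vqmin - vqave) * (bits_i - bits_iave) / (bits_imin - bits_iave)   # equations (64)
    dvq = w30 + w31 * s          # equation (63)
    return vqave + dvq           # equation (65): Vq = Vqave + dVq

print(video_quality_i_only(vqc=4.1, df=3, bits_i=620.0,
                           bits_iave=500.0, bits_imax=800.0, bits_imin=200.0))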
[000420] As the characteristic coefficients (w1,..., w31) used to derive the average bit quantity, maximum bit quantity, and minimum bit quantity of I frames, the average encoded video quality evaluation value, the maximum encoded video quality evaluation value, the minimum encoded video quality evaluation value, the difference encoded video quality evaluation value, the average packet loss video quality evaluation value, the maximum packet loss video quality evaluation value, the minimum packet loss video quality evaluation value, and the difference packet loss video quality evaluation value, relevant characteristic coefficients are selected from a quality characteristic coefficient database in a storage unit (not shown) arranged in the video quality estimating apparatus 3.
[000421] In an example of the quality characteristic coefficient database shown in Figure 23, the characteristic coefficient is described in correspondence with a prerequisite such as the video CODEC method.
[000422] The video quality evaluation value depends on the implementation of a video CODEC. For example, the video quality evaluation value differs between an H.264 encoded video content and an MPEG2 encoded video content even at the same bit rate. Similarly, the video quality evaluation value depends on prerequisites including video format and frame rate. In the quality characteristic coefficient database shown in Figure 23, the characteristic coefficient is described for each prerequisite. Industrial Applicability
[000423] The present invention can be applied in a video quality estimating apparatus that estimates a video quality value in video communication, such as an IPTV service, video distribution service or videophone service provided through an IP network. Explanation of Numerical References 1, 2...video quality estimating apparatus, 10, 20...packet analysis unit, 11, 21...frame characteristic estimation unit, 12, 22...encoding quality estimation unit, 23...packet loss quality estimation unit.
Claims (10)
[0001]
1. Video quality estimating apparatus (1;2;3) characterized in that it comprises: a packet analysis unit (10;20) that analyzes an encoded video packet and calculates a bit rate (BR) of the input encoded video packet and calculates an amount of bits (BitsI,BitsP,BitsB) for at least one type of video frame out of a plurality of video frame types of the input encoded video packet, the plurality of video frame types including I-frame, P-frame, and B-frame types; a frame characteristic estimation unit (11;21) which receives the bit rate (BR) calculated in the packet analysis unit (10;20) and estimates, based on the bit rate (BR), a frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) representing a bit quantity characteristic including an average bit quantity (BitsIave,BitsPave,BitsBave), a maximum bit quantity (BitsImax,BitsPmax,BitsBmax), and a minimum bit quantity (BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types, the frame characteristic estimation unit (11;21) estimating the frame characteristic using an equation representing a characteristic that the average bit quantity (BitsIave,BitsPave,BitsBave), the maximum bit quantity (BitsImax,BitsPmax,BitsBmax), and the minimum bit quantity (BitsImin,BitsPmin,BitsBmin) increase as the bit rate (BR) increases; and an encoding quality estimation unit (12;22) that estimates a video quality value (Vq;Vqc) of a video to be estimated, based on the bit rate (BR) of the encoded video packet and an amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types that was calculated by the packet analysis unit (10;20), and the frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types that was estimated by the frame characteristic estimation unit (11;21), using a ratio of a difference between the amount of bits (BitsI,BitsP,BitsB) and the average bit quantity (BitsIave,BitsPave,BitsBave) to a difference between the maximum bit quantity (BitsImax,BitsPmax,BitsBmax) or the minimum bit quantity (BitsImin,BitsPmin,BitsBmin), and the average bit quantity (BitsIave,BitsPave,BitsBave).
[0002]
2. Video quality estimating apparatus according to claim 1, characterized in that the packet analysis unit (10;20) includes a video packet specification unit (101;201) that specifies an arbitrary encoded video packet contained in an input packet as the input encoded video packet based on a unique packet ID (PID) for the encoded video packet, an encoding quantity calculation unit (102;202) that calculates the bit rate (BR) of the encoded video packet specified by the video packet specification unit (101;201), a frame delimiting position extraction unit (103;203) which extracts information indicating a delimiter of a video frame from information contained in the encoded video packet specified by the video packet specification unit (101;201), a specific frame start position extracting unit (104;204) extracting information indicating a start position of a specific video frame, which serves as information to identify the video frame type, from the video frame specified by the video packet specification unit (101;201), a video frame bit quantity calculation unit (105;205) which calculates a quantity of bits of a video frame from a quantity of bits between pieces of information indicating delimiters of the video frame which were extracted by the frame delimiting position extraction unit (103;203), and a video frame type bit quantity calculation unit (106;206) which calculates a bit quantity (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types from the information indicating the start position of the specific video frame that was extracted by the specific frame start position extracting unit (104;204), and the bit quantity of the video frame that was calculated by the video frame bit quantity calculation unit (105;205).
[0003]
3. Video quality estimating apparatus according to claim 1, characterized in that the encoding quality estimating unit (12;22) includes a video quality characteristic estimation unit (120;220) which estimates a video quality characteristic (Vqave,Vqmax,Vqmin,Vqcave,Vqcmax,Vqcmin) representing a maximum value (Vqmax,Vqcmax), a minimum value (Vqmin,Vqcmin), and an average value (Vqave,Vqcave) of a video quality value from the bit rate (BR) calculated by the packet analysis unit (10;20), a difference video quality estimation unit (124;224) that estimates a difference video quality value (dVq;dVqc) from the amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types that was calculated by the packet analysis unit (10;20), the frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) that was estimated by the frame characteristic estimation unit (11;21), and the video quality characteristic (Vqave,Vqmax,Vqmin,Vqcave,Vqcmax,Vqcmin) that was estimated by the video quality characteristic estimation unit (120;220), and a video quality estimation unit (125;225) that estimates a video quality value (Vq) of the video to be estimated by adding the difference video quality value (dVq) estimated by the difference video quality estimation unit (124;224) and the average value (Vqave) of the video quality value that was estimated by the video quality characteristic estimation unit (120;220).
[0004]
4. Video quality estimating apparatus according to claim 2, characterized in that the packet analysis unit (20) further includes a packet loss frame specification unit (207) that specifies lost packets from the encoded video packet specified by the video packet specification unit (201) and the information indicating the delimiter of the video frame that was extracted by the frame delimiting position extraction unit (203), a lost video frame number calculation unit (208) which calculates the number of video frames (DF) lost by packet loss based on a video frame type determined by the amount of bits of the at least one of the plurality of frame types that was calculated by the video frame type bit quantity calculation unit (206), information indicating a video frame position, and the lost packets that were specified by the packet loss frame specification unit (207), and a packet loss quality estimation unit (23) that estimates a video quality value (Vq) quantitatively representing the quality of an encoded video that has been affected by packet loss degradation, based on the video quality value (Vqc) estimated by the encoding quality estimation unit (22), the amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types and the number of video frames (DF) lost, and the frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types that was estimated by the frame characteristic estimation unit (21).
[0005]
5. Video quality estimating apparatus according to claim 4, characterized in that the packet loss quality estimating unit (23) includes an average packet loss video quality estimation unit (231) which estimates an average packet loss video quality assessment value (Vqave) based on the number of video frames (DF) lost calculated by the packet analysis unit (20) and the video quality value (Vqc) estimated by the encoding quality estimation unit (22), a maximum packet loss video quality estimation unit (232) that estimates a maximum packet loss video quality assessment value (Vqmax) based on the number of video frames (DF) lost calculated by the packet analysis unit (20) and the video quality value (Vqc) estimated by the encoding quality estimation unit (22), a minimum packet loss video quality estimation unit (233) which estimates a minimum packet loss video quality assessment value (Vqmin) representing a minimum value of the video quality assessment value based on the number of video frames (DF) lost calculated by the packet analysis unit (20) and the video quality value (Vqc) estimated by the encoding quality estimation unit (22), a difference packet loss video quality estimation unit (234) that estimates a difference packet loss video quality assessment value (dVq) from the amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types that was calculated by the packet analysis unit (20), the average bit quantity (BitsIave,BitsPave,BitsBave), the maximum bit quantity (BitsImax,BitsPmax,BitsBmax) and the minimum bit quantity (BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types that were estimated by the frame characteristic estimation unit (21), the average packet loss video quality assessment value (Vqave) estimated by the average packet loss video quality estimation unit (231), the maximum packet loss video quality assessment value (Vqmax) estimated by the maximum packet loss video quality estimation unit (232), and the minimum packet loss video quality assessment value (Vqmin) estimated by the minimum packet loss video quality estimation unit (233), and a packet loss video quality estimation unit (235) that estimates a packet loss video quality assessment value (Vq) by adding the average packet loss video quality assessment value (Vqave) estimated by the average packet loss video quality estimation unit (231) and the difference packet loss video quality assessment value (dVq) estimated by the difference packet loss video quality estimation unit (234).
[0006]
6. Video quality estimation method characterized by the fact that it comprises the steps of: a packet analysis step of analyzing an encoded video packet and calculating the bit rate (BR) of the input encoded video packet, and calculating an amount of bits (BitsI,BitsP,BitsB) for at least one type of video frame out of a plurality of video frame types of the input encoded video packet, the plurality of video frame types including I-frame, P-frame, and B-frame types; a frame characteristic estimation step of receiving the bit rate (BR) calculated in the packet analysis step and estimating, based on the bit rate (BR), a frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) representing a bit quantity characteristic including an average bit quantity (BitsIave,BitsPave,BitsBave), a maximum bit quantity (BitsImax,BitsPmax,BitsBmax), and a minimum bit quantity (BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types, the frame characteristic estimation step estimating the frame characteristic using an equation representing a characteristic that the average bit quantity (BitsIave,BitsPave,BitsBave), the maximum bit quantity (BitsImax,BitsPmax,BitsBmax), and the minimum bit quantity (BitsImin,BitsPmin,BitsBmin) increase as the bit rate (BR) increases; and an encoding quality estimation step of estimating a video quality value (Vq;Vqc) of a video to be estimated, based on the bit rate (BR) of the encoded video packet and an amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types that was calculated in the packet analysis step, and the frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types that was estimated in the frame characteristic estimation step, using a ratio of a difference between the bit quantity (BitsI,BitsP,BitsB) and the average bit quantity (BitsIave,BitsPave,BitsBave) to a difference between the maximum bit quantity (BitsImax,BitsPmax,BitsBmax) or the minimum bit quantity (BitsImin,BitsPmin,BitsBmin), and the average bit quantity (BitsIave,BitsPave,BitsBave).
[0007]
7. Video quality estimation method according to claim 6, characterized in that the packet analysis step includes a video packet specification step of specifying an arbitrary encoded video packet contained in an input packet as the input encoded video packet based on a unique packet ID (PID) for the encoded video packet, an encoding amount calculation step of calculating the bit rate (BR) of the encoded video packet specified in the video packet specification step, a frame delimiting position extraction step of extracting information indicating a delimiter of a video frame from information contained in the encoded video packet specified in the video packet specification step, a specific frame start position extraction step of extracting information indicating a start position of a specific video frame, which serves as information to identify the video frame type, from the video frame specified in the video packet specification step, a video frame bit quantity calculation step of calculating a video frame bit quantity from a quantity of bits between pieces of information indicating video frame delimiters that were extracted in the frame delimiting position extraction step, and a video frame type bit quantity calculation step of calculating an amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types from the information indicating the start position of the specific video frame that was extracted in the specific frame start position extraction step, and the video frame bit quantity that was calculated in the video frame bit quantity calculation step.
[0008]
8. Video quality estimation method according to claim 6, characterized in that the encoding quality estimation step includes a video quality characteristic estimation step of estimating a video quality characteristic (Vqave,Vqmax,Vqmin,Vqcave,Vqcmax,Vqcmin) representing a maximum value (Vqmax,Vqcmax), a minimum value (Vqmin,Vqcmin), and an average value (Vqave,Vqcave) of a video quality value from the bit rate (BR) calculated in the packet analysis step, a difference video quality estimation step of estimating a difference video quality value (dVq;dVqc) from the amount of bits (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types that was calculated in the packet analysis step, the frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) that was estimated in the frame characteristic estimation step, and the video quality characteristic (Vqave,Vqmax,Vqmin,Vqcave,Vqcmax,Vqcmin) that was estimated in the video quality characteristic estimation step, and a video quality estimation step of estimating a video quality value (Vq) of the video to be estimated by adding the difference video quality value (dVq) estimated in the difference video quality estimation step and the average value (Vqave) of the video quality value that was estimated in the video quality characteristic estimation step.
[0009]
9. Video quality estimation method according to claim 7, characterized in that the packet analysis step further includes a packet loss frame specification step of specifying lost packets from the encoded video packet specified in the video packet specification step and the information indicating the delimiter of the video frame that was extracted in the frame delimiter position extraction step, a lost video frame count calculation step of calculating the number of video frames (DF) lost due to packet loss, based on a video frame type determined from the bit quantity of the at least one of the plurality of frame types calculated in the video frame type bit quantity calculation step, information indicating a video frame position, and the lost packets specified in the packet loss frame specification step, and a packet loss quality estimation step of estimating a video quality value (Vq) quantitatively representing the quality of an encoded video affected by packet loss degradation, based on the video quality value (Vqc) estimated in the encoding quality estimation step, the bit quantity (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types and the number of lost video frames (DF), and the frame characteristic (BitsIave,BitsPave,BitsBave,BitsImax,BitsPmax,BitsBmax,BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types estimated in the frame characteristic estimation step.
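Counting the number of damaged frames DF can be sketched as below. The propagation rule used here (a loss in an I or P frame is assumed to affect every following frame up to the next I frame, a loss in a B frame only that frame) is a common simplification and an assumption, not a rule stated in this section.

```python
# Hypothetical sketch of counting the frames damaged by packet loss (DF).
def count_damaged_frames(frames):
    """frames: list of (frame_type, packet_lost) tuples in decode order.
    Returns DF, the number of video frames damaged by packet loss."""
    df, propagate = 0, False
    for frame_type, lost in frames:
        if frame_type == "I":                # a new I frame stops error propagation
            propagate = False
        if lost and frame_type in ("I", "P"):
            propagate = True                 # loss in a reference frame propagates
        if propagate or (lost and frame_type == "B"):
            df += 1
    return df

# Usage: one lost packet in a P frame damages the rest of the GOP (here 3 frames).
gop = [("I", False), ("P", True), ("B", False), ("P", False), ("I", False), ("B", False)]
print(count_damaged_frames(gop))  # 3
```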
[0010]
10. Video quality estimation method according to claim 9, characterized in that the packet loss quality estimation step includes an average packet loss video quality estimation step of estimating an average packet loss video quality evaluation value (Vqave) based on the number of lost video frames (DF) calculated in the packet analysis step and the video quality value (Vqc) estimated in the encoding quality estimation step, a maximum packet loss video quality estimation step of estimating a maximum packet loss video quality evaluation value (Vqmax) based on the number of lost video frames (DF) calculated in the packet analysis step and the video quality value (Vqc) estimated in the encoding quality estimation step, a minimum packet loss video quality estimation step of estimating a minimum packet loss video quality evaluation value (Vqmin) based on the number of lost video frames (DF) calculated in the packet analysis step and the video quality value (Vqc) estimated in the encoding quality estimation step, a difference packet loss video quality estimation step of estimating a difference packet loss video quality evaluation value (dVq) from the bit quantity (BitsI,BitsP,BitsB) of the at least one of the plurality of frame types calculated in the packet analysis step, the average bit quantity (BitsIave,BitsPave,BitsBave), the maximum bit quantity (BitsImax,BitsPmax,BitsBmax), and the minimum bit quantity (BitsImin,BitsPmin,BitsBmin) of the at least one of the plurality of frame types estimated in the frame characteristic estimation step, the average packet loss video quality evaluation value (Vqave) estimated in the average packet loss video quality estimation step, the maximum packet loss video quality evaluation value (Vqmax) estimated in the maximum packet loss video quality estimation step, and the minimum packet loss video quality evaluation value (Vqmin) estimated in the minimum packet loss video quality estimation step, and a packet loss video quality estimation step of estimating a packet loss video quality evaluation value (Vq) by adding the average packet loss video quality evaluation value (Vqave) estimated in the average packet loss video quality estimation step and the difference packet loss video quality evaluation value (dVq) estimated in the difference packet loss video quality estimation step.
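The packet loss variant mirrors the structure sketched for claim 8: Vqave, Vqmax, and Vqmin are now driven by DF and the coding quality Vqc, dVq again comes from the bit quantities and the frame characteristic, and Vq = Vqave + dVq. The exponential decay with DF and all coefficients in the sketch below are hypothetical placeholders.

```python
# Hypothetical sketch of the packet loss quality estimation (placeholder decay and spread).
import math

def packet_loss_quality_characteristic(vqc, df, p1=0.25):
    """Average/maximum/minimum packet loss quality from Vqc and DF (placeholder decay)."""
    vq_ave = 1.0 + (vqc - 1.0) * math.exp(-p1 * df)
    return vq_ave, min(vq_ave + 0.5, vqc), max(vq_ave - 0.5, 1.0)

def packet_loss_quality(vqc, df, bits_i, bits_i_ave, bits_i_max, bits_i_min):
    """Vq = Vqave + dVq, with dVq interpolated from the bit quantities
    (structure only; the direction of the effect is an assumption)."""
    vq_ave, vq_max, vq_min = packet_loss_quality_characteristic(vqc, df)
    if bits_i >= bits_i_ave:   # assumption: larger-than-average I frames pull toward Vqmax
        dvq = (vq_max - vq_ave) * (bits_i - bits_i_ave) / (bits_i_max - bits_i_ave)
    else:
        dvq = (vq_min - vq_ave) * (bits_i - bits_i_ave) / (bits_i_min - bits_i_ave)
    return vq_ave + dvq

# Usage: coding quality 4.2, three damaged frames, I-frame bits slightly below average.
print(packet_loss_quality(4.2, 3, bits_i=380_000, bits_i_ave=400_000,
                          bits_i_max=640_000, bits_i_min=160_000))
```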
Similar technologies:
Publication number | Publication date | Patent title
BR112012008605B1|2021-09-08|VIDEO QUALITY ESTIMATION APPARATUS AND METHOD
EP2866447B1|2016-05-25|Method and apparatus for evaluating the quality of a video sequence by temporally synchronizing the encrypted input bit stream of a video decoder with the processed video sequence obtained by an external video decoder
Serral-Gracià et al.2010|An overview of quality of experience measurement challenges for video applications in IP networks
US7965650B2|2011-06-21|Method and system for quality monitoring of media over internet protocol |
US8514928B2|2013-08-20|Method and system for viewer quality estimation of packet video streams
RU2540846C2|2015-02-10|Video quality assessment technology
PT2432161E|2015-11-20|Method of and system for measuring quality of audio and video bit stream transmissions over a transmission chain
US7756136B2|2010-07-13|Spatial and temporal loss determination in packet based video broadcast system in an encrypted environment
US9723329B2|2017-08-01|Method and system for determining a quality value of a video stream
KR102059222B1|2019-12-24|Content-dependent video quality model for video streaming services
KR101783071B1|2017-09-28|Method and apparatus for assessing the quality of a video signal during encoding or compressing of the video signal
Yamada et al.2010|Accurate video-quality estimation without video decoding
BRPI0614591A2|2012-01-24|process for broadcasting multiple sequences of video data on a single channel
Hernando et al.2013|Evaluating quality of experience in IPTV services using MPEG frame loss rate
Mu et al.2010|A Discrete perceptual impact evaluation quality assessment framework for IPTV services
JP4787303B2|2011-10-05|Video quality estimation apparatus, method, and program
Wang et al.2010|Packet dropping for H.264 videos considering both coding and packet-loss artifacts
KR101083063B1|2011-11-16|Method and apparatus for measuring video quality of experience
Uhl et al.2020|A New Parameterized Model for Determining Quality of Online Video Service Using Modern H.265/HEVC and VP9 Codecs
Wang et al.2012|Video quality assessment for IPTV services: A survey
Reibman et al.2005|Video quality estimation for Internet streaming
Makowski2014|Quality of Variable Bitrate HD Video Transmission in New Generation Access Network
Reddy et al.2015|Video packet priority assignment based on spatio-temporal perceptual importance
Cho2012|A Study on Video Quality Estimation on IPTV under Lab Network Environment
Patent family:
Publication number | Publication date
EP2493205B1|2015-05-06|
US20120201310A1|2012-08-09|
BR112012008605A2|2016-04-05|
KR101359722B1|2014-02-05|
WO2011048829A1|2011-04-28|
CN102687517B|2015-03-25|
CN102687517A|2012-09-19|
JPWO2011048829A1|2013-03-07|
CA2776402A1|2011-04-28|
ES2537236T3|2015-06-03|
US9001897B2|2015-04-07|
EP2493205A1|2012-08-29|
KR20120054092A|2012-05-29|
CA2776402C|2016-05-24|
EP2493205A4|2012-12-26|
JP5519690B2|2014-06-11|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

JP4327674B2|2004-07-21|2009-09-09|Nippon Telegraph and Telephone Corporation|Video quality control method and video quality control system|
JP5043856B2|2005-12-05|2012-10-10|British Telecommunications public limited company|Video quality measurement|
US9544602B2|2005-12-30|2017-01-10|Sharp Laboratories Of America, Inc.|Wireless video transmission system|
US8154602B2|2006-05-09|2012-04-10|Nippon Telegraph And Telephone Corporation|Video quality estimation apparatus, method, and program|
JP4451857B2|2006-05-09|2010-04-14|Nippon Telegraph and Telephone Corporation|Video quality parameter estimation apparatus, method, and program|
KR100933509B1|2006-05-09|2009-12-23|Nippon Telegraph and Telephone Corporation|Computer-readable recording media recording video quality estimation apparatus, methods and programs|
JP5180294B2|2007-06-19|2013-04-10|Vantrix Corporation|Buffer-based rate control that utilizes frame complexity, buffer level, and intra-frame location in video encoding|
EP2213000B1|2007-07-16|2014-04-02|Telchemy, Incorporated|Method and system for content estimation of packet video streams|
CA2668003C|2007-08-22|2013-04-02|Nippon Telegraph And Telephone Corporation|Video quality estimation device, video quality estimation method, frame type judgment method, and recording medium|
JP5172440B2|2008-01-08|2013-03-27|Nippon Telegraph and Telephone Corporation|Video quality estimation apparatus, method and program|
JP2009260940A|2008-03-21|2009-11-05|Nippon Telegr & Teleph Corp <Ntt>|Method, device, and program for objectively evaluating video quality|
WO2010042650A2|2008-10-07|2010-04-15|Motorola, Inc.|System and method of optimized bit extraction for scalable video coding|
US9232216B2|2010-12-10|2016-01-05|Deutsche Telekom Ag|Method and apparatus for assessing the quality of a video signal during encoding and transmission of the video signal|
CN102651821B|2011-02-28|2014-07-30|华为技术有限公司|Method and device for evaluating quality of video|
JP5384586B2|2011-08-31|2014-01-08|Video frame discriminating apparatus, method and program using Timestamped TS, and video quality estimating apparatus, method and program using video frame discriminated using TTS|
EP2745518B1|2011-09-26|2017-06-14|Telefonaktiebolaget LM Ericsson |Estimating user-perceived quality of an encoded video stream|
US9203708B2|2011-09-26|2015-12-01|Telefonaktiebolaget L M Ericsson |Estimating user-perceived quality of an encoded stream|
CN103634594B|2012-08-21|2015-04-29|华为技术有限公司|Method and apparatus for obtaining video coding compression quality|
US9858656B2|2013-03-13|2018-01-02|Raytheon Company|Video interpretability and quality estimation|
JP6061778B2|2013-05-17|2017-01-18|日本電信電話株式会社|Video quality evaluation apparatus, video quality evaluation method and program|
US9674515B2|2013-07-11|2017-06-06|Cisco Technology, Inc.|Endpoint information for network VQM|
GB2529446A|2014-07-17|2016-02-24|The British Academy of Film and Television Arts|Measurement of video quality|
US20160142746A1|2014-11-14|2016-05-19|Thales Avionics, Inc.|Method of encrypting, streaming, and displaying video content using selective encryption|
CN105791833B|2014-12-17|2018-09-04|Shenzhen TCL Digital Technology Co., Ltd.|Method and device for selecting a video encoding and decoding hardware platform|
US9538137B2|2015-04-09|2017-01-03|Microsoft Technology Licensing, Llc|Mitigating loss in inter-operability scenarios for digital video|
JP6215898B2|2015-11-16|2017-10-18|PFU Limited|Video processing apparatus, video processing system, and video processing method|
EP3491784B1|2016-08-01|2020-05-13|Telefonaktiebolaget LM Ericsson |Estimation of losses in a video stream|
US10721475B2|2017-09-01|2020-07-21|Ittiam Systems Ltd.|K-nearest neighbor model-based content adaptive encoding parameters determination|
EP3573338A1|2018-05-25|2019-11-27|Carrier Corporation|Video device and network quality evaluation/diagnostic tool|
Legal status:
2018-03-27| B15K| Others concerning applications: alteration of classification|Ipc: H04N 17/00 (2006.01), G06T 7/00 (2017.01) |
2019-01-15| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-12-31| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-02-17| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2021-06-01| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2021-08-10| B09W| Correction of the decision to grant [chapter 9.1.4 patent gazette]|Free format text: THE INDICATED DRAWINGS PETITION WAS INCORRECT. |
2021-09-08| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 25/03/2010, SUBJECT TO THE LEGAL CONDITIONS. PATENT GRANTED PURSUANT TO ADI 5.529/DF, WHICH DETERMINES THE ALTERATION OF THE TERM OF GRANT. |
Priority:
Application number | Filing date | Patent title
JP2009-243236|2009-10-22|
JP2009-274238|2009-12-02|
PCT/JP2010/055203|WO2011048829A1|2009-10-22|2010-03-25|Video quality estimation device, video quality estimation method, and video quality estimation program|