METHOD FOR QUANTIZING, METHOD FOR DECODING, METHOD FOR ENCODING, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
Patent Abstract:
A method for quantizing, a method for decoding, a method for encoding, and a non-transitory computer-readable recording medium are provided. The method for quantizing includes quantizing an input signal by selecting one of a first quantization scheme that does not use interframe prediction and a second quantization scheme that uses interframe prediction, in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state.

Publication number: BR112013027093B1
Application number: R112013027093-4
Filing date: 2012-04-23
Publication date: 2021-04-13
Inventors: Ho-Sang Sung; Eun-mi Oh
Applicant: Samsung Electronics Co., Ltd.
Patent Description:
Technical Field

Methods and apparatuses consistent with the present disclosure relate to quantization and dequantization of linear predictive coding coefficients, and more particularly, to a method of efficiently quantizing linear predictive coding coefficients with low complexity, a sound encoding method employing the quantization method, a method of dequantizing linear predictive coding coefficients, a sound decoding method employing the dequantization method, and a recording medium therefor.

Background Art

In systems for encoding a sound, such as voice or audio, Linear Predictive Coding (LPC) coefficients are used to represent a short-term frequency characteristic of the sound. The LPC coefficients are obtained by dividing an input sound into frame units and minimizing the energy of a prediction error per frame. However, since the LPC coefficients have a wide dynamic range and the characteristic of the LPC filter in use is very sensitive to quantization errors in the LPC coefficients, the stability of the LPC filter is not guaranteed. Thus, quantization is performed by converting the LPC coefficients into other coefficients for which the stability of a filter is easy to check, which are advantageous for interpolation, and which have a good quantization characteristic. It is mainly preferred that the quantization is performed by converting the LPC coefficients into Line Spectral Frequency (LSF) coefficients or Immittance Spectral Frequency (ISF) coefficients. In particular, a method of quantizing the LPC coefficients can increase a quantization gain by using the high interframe correlation of the LSF coefficients in a frequency domain and in a time domain. The LSF coefficients indicate a frequency characteristic of a short-term sound, and for frames in which a frequency characteristic of an input sound changes rapidly, the LSF coefficients of those frames also change rapidly.
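The LPC extraction just described, minimizing the prediction-error energy per frame, is conventionally done with the autocorrelation method and the Levinson-Durbin recursion. The sketch below is illustrative: the frame length, order, and deterministic toy test signal are assumptions, not values taken from this disclosure.

```python
# Minimal sketch of per-frame LPC extraction: autocorrelation method
# plus Levinson-Durbin recursion. Frame length, order, and the toy
# signal are illustrative assumptions only.

def autocorr(frame, order):
    """Autocorrelation lags 0..order of one analysis frame."""
    n = len(frame)
    return [sum(frame[j] * frame[j + k] for j in range(n - k))
            for k in range(order + 1)]

def levinson_durbin(r, order):
    """LPC polynomial a[0..order] (a[0] = 1) and residual error energy."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                       # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)                 # prediction-error energy shrinks
    return a, err

# Toy AR(1) frame driven by a small linear-congruential noise generator,
# so the example is deterministic and dependency-free.
seed, x_prev, frame = 1, 0.0, []
for _ in range(240):
    seed = (1103515245 * seed + 12345) % (1 << 31)
    e = seed / (1 << 31) - 0.5
    x_prev = 0.6 * x_prev + e
    frame.append(x_prev)

r = autocorr(frame, 10)
lpc, residual = levinson_durbin(r, 10)
```

Because the frame is strongly correlated, the residual error energy comes out well below the frame energy r[0], which is exactly the minimization the text describes.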
However, for a quantizer using the high interframe correlation of the LSF coefficients, the prediction itself cannot be performed for frames that have changed rapidly, so the quantizer's quantization performance decreases for such frames.

Disclosure of the Invention

Technical Problem

One aspect is to provide a method of efficiently quantizing Linear Predictive Coding (LPC) coefficients with low complexity, a sound encoding method employing the quantization method, a method of dequantizing the LPC coefficients, a sound decoding method employing the dequantization method, and a recording medium therefor.

Solution to the Problem

According to an aspect of one or more exemplary embodiments, a method for quantizing is provided, comprising quantizing an input signal by selecting one of a first quantization scheme not using interframe prediction and a second quantization scheme using interframe prediction, in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state.

According to another aspect of one or more exemplary embodiments, an encoding method is provided, comprising: determining an encoding mode of an input signal; quantizing the input signal by selecting one of a first quantization scheme not using interframe prediction and a second quantization scheme using interframe prediction, according to path information determined in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state; encoding the quantized input signal in the encoding mode; and generating a bit stream including one of a result quantized in the first quantization scheme and a result quantized in the second quantization scheme, the encoding mode of the input signal, and the path information related to the quantization of the input signal.
According to another aspect of one or more exemplary embodiments, a dequantization method is provided, comprising dequantizing an input signal by selecting one of a first dequantization scheme not using interframe prediction and a second dequantization scheme using interframe prediction, based on path information included in a bit stream, wherein the path information is determined in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state at an encoding end.

According to another aspect of one or more exemplary embodiments, a decoding method is provided, comprising: decoding Linear Predictive Coding (LPC) parameters and an encoding mode included in a bit stream; dequantizing the decoded LPC parameters using one of a first dequantization scheme not using interframe prediction and a second dequantization scheme using interframe prediction, based on path information included in the bit stream; and decoding the dequantized LPC parameters in the decoded encoding mode, wherein the path information is determined in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state at an encoding end.

According to another aspect of one or more exemplary embodiments, a method of determining a quantizer type is provided, the method comprising: comparing a bit rate of an input signal with a first reference value; comparing a bandwidth of the input signal with a second reference value; comparing an internal sampling frequency with a third reference value; and determining the quantizer type for the input signal as one of an open-loop type and a closed-loop type based on results of one or more of the comparisons.
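The quantizer-type determination in the last aspect above can be sketched as three threshold comparisons. The reference values and the decision rule below are placeholders chosen for illustration; this disclosure does not fix them at this point.

```python
# Hedged sketch of the open-loop/closed-loop quantizer-type decision:
# compare bit rate, bandwidth, and internal sampling frequency against
# reference values. All thresholds here are illustrative assumptions.

BAND_RANK = {"NB": 0, "WB": 1, "SWB": 2, "FB": 3}

def determine_quantizer_type(bit_rate, band, internal_fs,
                             bit_rate_ref=16000, band_ref="WB",
                             fs_ref=16000):
    """Return 'open_loop' or 'closed_loop' from the three comparisons."""
    demanding = (bit_rate >= bit_rate_ref
                 or BAND_RANK[band] >= BAND_RANK[band_ref]
                 or internal_fs >= fs_ref)
    # Illustrative rule: at a demanding operating point, use the cheaper
    # open-loop search; otherwise search both paths in closed loop.
    return "open_loop" if demanding else "closed_loop"
```

The point of the sketch is only the shape of the method: three comparisons feeding one open-loop/closed-loop decision.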
According to another aspect of one or more exemplary embodiments, an electronic device is provided, including: a communication unit that receives at least one of a sound signal and an encoded bit stream, or that transmits at least one of an encoded sound signal and a restored sound; and an encoding module that quantizes the received sound signal by selecting one of a first quantization scheme not using interframe prediction and a second quantization scheme using interframe prediction, according to path information determined in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state, and that encodes the quantized sound signal in an encoding mode.

According to another aspect of one or more exemplary embodiments, an electronic device is provided that includes: a communication unit that receives at least one of a sound signal and an encoded bit stream, or that transmits at least one of an encoded sound signal and a restored sound; and a decoding module that decodes Linear Predictive Coding (LPC) parameters and an encoding mode included in the bit stream, dequantizes the decoded LPC parameters using one of a first dequantization scheme not using interframe prediction and a second dequantization scheme using interframe prediction, based on path information included in the bit stream, and decodes the dequantized LPC parameters in the decoded encoding mode, wherein the path information is determined in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state at an encoding end.
According to another aspect of one or more exemplary embodiments, an electronic device is provided, including: a communication unit that receives at least one of a sound signal and an encoded bit stream, or that transmits at least one of an encoded sound signal and a restored sound; an encoding module that quantizes the received sound signal by selecting one of a first quantization scheme not using interframe prediction and a second quantization scheme using interframe prediction, according to path information determined in consideration of at least one of a prediction mode, a prediction error, and a transmission channel state, and that encodes the quantized sound signal in an encoding mode; and a decoding module that decodes Linear Predictive Coding (LPC) parameters and an encoding mode included in the bit stream, dequantizes the decoded LPC parameters using one of a first dequantization scheme not using interframe prediction and a second dequantization scheme using interframe prediction, based on path information included in the bit stream, and decodes the dequantized LPC parameters in the decoded encoding mode.

Advantageous Effects of the Invention

According to the present inventive concept, to efficiently quantize an audio or speech signal, a plurality of encoding modes is applied according to the characteristics of the audio or speech signal, and various numbers of bits are allocated to the audio or speech signal according to a compression ratio applied in each of the encoding modes, so that an optimal quantizer with low complexity can be selected in each of the encoding modes.
Brief Description of Drawings

The above and other aspects will become more apparent from the following detailed description of exemplary embodiments thereof with reference to the accompanying drawings, in which:

Figure 1 is a block diagram of a sound encoding apparatus according to an exemplary embodiment;
Figures 2A to 2D are examples of various encoding modes that can be selected by an encoding mode selector of the sound encoding apparatus of Figure 1;
Figure 3 is a block diagram of a Linear Predictive Coding (LPC) coefficient quantizer according to an exemplary embodiment;
Figure 4 is a block diagram of a weighting function determiner according to an exemplary embodiment;
Figure 5 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment;
Figure 6 is a block diagram of a quantization path selector according to an exemplary embodiment;
Figures 7A and 7B are flowcharts illustrating operations of the quantization path selector of Figure 6, according to an exemplary embodiment;
Figure 8 is a block diagram of a quantization path selector according to another exemplary embodiment;
Figure 9 illustrates information regarding a channel state that can be transmitted at a network end when a codec service is provided;
Figures 10 to 14 are block diagrams of LPC coefficient quantizers according to other exemplary embodiments;
Figure 15 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment;
Figures 16A and 16B are block diagrams of LPC coefficient quantizers according to other exemplary embodiments;
Figures 17A to 17C are block diagrams of LPC coefficient quantizers according to other exemplary embodiments;
Figure 18 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment;
Figure 19 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment;
Figure 20 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment;
Figure 21 is a block diagram of a quantizer type selector according to an exemplary embodiment;
Figure 22 is a flowchart illustrating an operation of a quantizer type selection method, according to an exemplary embodiment;
Figure 23 is a block diagram of a sound decoding apparatus according to an exemplary embodiment;
Figure 24 is a block diagram of an LPC coefficient dequantizer according to an exemplary embodiment;
Figure 25 is a block diagram of an LPC coefficient dequantizer according to another exemplary embodiment;
Figure 26 is a block diagram of an example of a first dequantization scheme and a second dequantization scheme in the LPC coefficient dequantizer of Figure 25, according to an exemplary embodiment;
Figure 27 is a flowchart illustrating a quantization method according to an exemplary embodiment;
Figure 28 is a flowchart illustrating a dequantization method according to an exemplary embodiment;
Figure 29 is a block diagram of an electronic device including an encoding module, according to an exemplary embodiment;
Figure 30 is a block diagram of an electronic device including a decoding module, according to an exemplary embodiment; and
Figure 31 is a block diagram of an electronic device including an encoding module and a decoding module, according to an exemplary embodiment.

Mode for the Invention

The present inventive concept may allow various kinds of change or modification and various changes in form, and specific exemplary embodiments will be illustrated in the drawings and described in detail in the specification.
However, it should be understood that the specific exemplary embodiments do not limit the present inventive concept to a specific form of disclosure, but include every modification, equivalent, or replacement that falls within the spirit and technical scope of the present inventive concept. In the following description, well-known functions or constructions are not described in detail, since they would obscure the invention with unnecessary detail.

Although terms such as "first" and "second" can be used to describe various elements, the elements should not be limited by these terms. The terms are only used to distinguish a particular element from another element.

The terminology used in the application is used only to describe specific exemplary embodiments and is not intended to limit the inventive concept. Although the terms used in this inventive concept are selected from general terms currently in wide use wherever possible, in consideration of the functions in this inventive concept, they may vary according to an intention of those of ordinary skill in the art, judicial precedents, or the emergence of new technology. In addition, in specific cases, terms intentionally selected by the applicant may be used, in which case the meaning of the terms will be disclosed in the corresponding description. Accordingly, the terms used in this inventive concept should be defined not by the simple names of the terms but by the meaning of the terms and the content across the present inventive concept. A singular expression includes a plural expression unless the two are clearly different from each other in context.
In addition, it should be understood that terms such as "include" and "have" are used to indicate the existence of an implemented feature, number, step, operation, element, part, or a combination thereof, without excluding the possibility of the existence or addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.

The present inventive concept will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the present invention are shown. Like reference numerals in the drawings denote like elements, and thus their repetitive description will be omitted. Expressions such as "at least one of", when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.

Figure 1 is a block diagram of a sound encoding apparatus 100 according to an exemplary embodiment. The sound encoding apparatus 100 shown in Figure 1 may include a preprocessor 111, a spectrum and Linear Prediction (LP) analyzer 113, an encoding mode selector 115, a Linear Predictive Coding (LPC) coefficient quantizer 117, a variable mode encoder 119, and a parameter encoder 121. Each component of the sound encoding apparatus 100 may be implemented by at least one processor (for example, a central processing unit (CPU)) by being integrated into at least one module. It should be noted that a sound can indicate audio, speech, or a combination of the two. The following description refers to the sound as speech for convenience of description. However, it will be understood that any sound can be processed.

Referring to Figure 1, the preprocessor 111 can preprocess an input speech signal. In the preprocessing process, an unwanted frequency component can be removed from the speech signal, or a frequency characteristic of the speech signal can be adjusted to be advantageous for encoding.
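One common such adjustment of the frequency characteristic is pre-emphasis, a first-order filter that boosts high frequencies before LP analysis. The sketch below assumes a coefficient of 0.68, a value typical of wideband speech codecs; it is an illustrative assumption, not a value stated in this disclosure.

```python
# Pre-emphasis sketch: y[n] = x[n] - a * x[n-1]. The coefficient
# a = 0.68 is an assumed, codec-typical value, not from this patent.

def pre_emphasis(x, a=0.68):
    """Apply first-order pre-emphasis to a list of samples."""
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(x[n] - a * x[n - 1])
    return y

out = pre_emphasis([1.0, 1.0, 1.0])
```

On a constant (DC-like) input the filter output drops to 1 - a = 0.32, showing the attenuation of low-frequency content.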
In detail, the preprocessor 111 can perform high-pass filtering, pre-emphasis, or sampling conversion. The spectrum and LP analyzer 113 can extract LPC coefficients by analyzing characteristics in a frequency domain or by performing LP analysis on the preprocessed speech signal. Although one LP analysis per frame is generally performed, two or more LP analyses per frame can be performed to further improve sound quality. In this case, one LP analysis is an LP for a frame end, which is performed as a conventional LP analysis, and the others may be LPs for mid subframes to improve sound quality. In this case, a frame end of a current frame indicates a final subframe among the subframes forming the current frame, and a frame end of a previous frame indicates a final subframe among the subframes forming the previous frame. For example, one frame can consist of four subframes. The mid subframes indicate one or more subframes among the subframes existing between the final subframe, which is the frame end of the previous frame, and the final subframe, which is the frame end of the current frame. Accordingly, the spectrum and LP analyzer 113 can extract a total of two or more sets of LPC coefficients. The LPC coefficients can use an order of 10 when an input signal is narrowband and can use an order of 16 to 20 when the input signal is wideband. However, the order of the LPC coefficients is not limited thereto.

The encoding mode selector 115 can select one of a plurality of encoding modes in correspondence with multiple rates. In addition, the encoding mode selector 115 can select one of the plurality of encoding modes using characteristics of the speech signal, which are obtained from band information, pitch information, or frequency-domain analysis information. In addition, the encoding mode selector 115 can select one of the plurality of encoding modes using the multiple rates and the characteristics of the speech signal.
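The component chain described so far (preprocessor 111, spectrum and LP analyzer 113, encoding mode selector 115, LPC coefficient quantizer 117, variable mode encoder 119, parameter encoder 121) can be wired as a structural sketch. Only the wiring reflects the description above; every stage body below is a trivial placeholder.

```python
# Structural sketch of the Figure 1 encoder chain. All stage bodies are
# placeholders; only the data flow between stages follows the text.

class SoundEncoder:
    def __init__(self, preprocess, analyze, select_mode, quantize,
                 encode, pack):
        self.stages = (preprocess, analyze, select_mode, quantize,
                       encode, pack)

    def encode_frame(self, frame):
        preprocess, analyze, select_mode, quantize, encode, pack = self.stages
        x = preprocess(frame)          # preprocessor 111
        lpc = analyze(x)               # spectrum and LP analyzer 113
        mode = select_mode(x)          # encoding mode selector 115
        qlpc, path_info = quantize(lpc)  # LPC coefficient quantizer 117
        payload = encode(x, qlpc, mode)  # variable mode encoder 119
        return pack(payload, mode, path_info)  # parameter encoder 121

# Wiring with trivial stand-in stages to show the data flow.
enc = SoundEncoder(
    preprocess=lambda f: f,
    analyze=lambda x: [0.9, -0.3],
    select_mode=lambda x: "GC",
    quantize=lambda lpc: ([round(c, 1) for c in lpc], "safety_net"),
    encode=lambda x, q, m: {"mode": m, "qlpc": q},
    pack=lambda p, m, pi: {**p, "path": pi},
)
bitstream = enc.encode_frame([0.0, 0.1, 0.2])
```

The resulting "bit stream" here is just a dictionary carrying the quantized parameters, the encoding mode, and the path information, mirroring what the parameter encoder packs.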
The LPC coefficient quantizer 117 can quantize the LPC coefficients extracted by the spectrum and LP analyzer 113. The LPC coefficient quantizer 117 can perform quantization by converting the LPC coefficients into other coefficients suitable for quantization. The LPC coefficient quantizer 117 can select one of a plurality of paths, including a first path not using interframe prediction and a second path using interframe prediction, as a quantization path of the speech signal based on a first criterion before quantization of the speech signal, and quantize the speech signal using one of a first quantization scheme and a second quantization scheme according to the selected quantization path. Alternatively, the LPC coefficient quantizer 117 can quantize the LPC coefficients for both the first path, through the first quantization scheme not using interframe prediction, and the second path, through the second quantization scheme using interframe prediction, and select a quantization result of one of the first path and the second path based on a second criterion. The first criterion and the second criterion may be identical to each other or different from each other.

The variable mode encoder 119 can generate a bit stream by encoding the LPC coefficients quantized by the LPC coefficient quantizer 117. The variable mode encoder 119 can encode the quantized LPC coefficients in the encoding mode selected by the encoding mode selector 115. The variable mode encoder 119 can encode an excitation signal of the LPC coefficients in units of frames or subframes. An example of coding algorithms used in the variable mode encoder 119 may be Code-Excited Linear Prediction (CELP) or Algebraic CELP (ACELP). A transform coding algorithm may additionally be used according to an encoding mode. Representative parameters for encoding the LPC coefficients in the CELP algorithm are an adaptive codebook index, an adaptive codebook gain, a fixed codebook index, and a fixed codebook gain.
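The second alternative described for the LPC coefficient quantizer 117, quantizing along both paths and keeping the better result, can be sketched as a closed-loop choice. The toy codebooks, the scalar predictor, and the weighted-error criterion below are illustrative assumptions, not the structures of this disclosure.

```python
# Closed-loop sketch: quantize with the safety-net scheme (no interframe
# prediction) and the predictive scheme (quantize the prediction
# residual), then keep the path with the smaller weighted error.
# Codebooks, predictor, and criterion are illustrative only.

def weighted_error(w, x, entry):
    return sum(wi * (xi - ci) ** 2 for wi, xi, ci in zip(w, x, entry))

def quantize_vq(x, codebook, w):
    """Return (best index, its weighted error) over a small codebook."""
    errs = [weighted_error(w, x, c) for c in codebook]
    best = min(range(len(codebook)), key=lambda k: errs[k])
    return best, errs[best]

def quantize_both_paths(z, prev_q, pred_coef, cb_safety, cb_pred, w):
    # First path: quantize z directly (safety-net scheme).
    i1, e1 = quantize_vq(z, cb_safety, w)
    # Second path: quantize the residual r = z - prediction from the
    # previous quantized frame (predictive scheme).
    pred = [pred_coef * p for p in prev_q]
    r = [zi - pi for zi, pi in zip(z, pred)]
    i2, e2 = quantize_vq(r, cb_pred, w)
    return ("safety_net", i1) if e1 <= e2 else ("predictive", i2)

cb_safety = [[0.0, 0.0], [2.0, 2.0]]
cb_pred = [[0.0, 0.0]]
w = [1.0, 1.0]
scheme_a, _ = quantize_both_paths([1.0, 1.0], [1.25, 1.25], 0.8,
                                  cb_safety, cb_pred, w)
scheme_b, idx_b = quantize_both_paths([0.0, 0.0], [2.0, 2.0], 0.8,
                                      cb_safety, cb_pred, w)
```

In the first call the prediction from the previous frame matches the input exactly, so the predictive path wins; in the second the safety-net codebook contains the input itself, so the safety-net path wins.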
The current frame encoded by the variable mode encoder 119 can be stored for encoding a subsequent frame. The parameter encoder 121 can encode the parameters to be used by a decoding end for decoding, so that they are included in a bit stream. It is advantageous if the parameters corresponding to the encoding mode are encoded. The bit stream generated by the parameter encoder 121 can be stored or transmitted.

Figures 2A to 2D are examples of various encoding modes that can be selected by the encoding mode selector 115 of the sound encoding apparatus 100 of Figure 1. Figures 2A and 2C are examples of encoding modes classified in a case where the number of bits allocated for quantization is large, that is, a case of a high bit rate, and Figures 2B and 2D are examples of encoding modes classified in a case where the number of bits allocated for quantization is small, that is, a case of a low bit rate.

First, in the case of a high bit rate, the speech signal can be classified into a Generic Coding (GC) mode and a Transition Coding (TC) mode for a simple structure, as shown in Figure 2A. In this case, the GC mode includes an Unvoiced Coding (UC) mode and a Voiced Coding (VC) mode. In the case of a high bit rate, an Inactive Coding (IC) mode and an Audio Coding (AC) mode can additionally be included, as shown in Figure 2C. In addition, in the case of a low bit rate, the speech signal can be classified into the GC mode, the UC mode, the VC mode, and the TC mode, as shown in Figure 2B. In addition, in the case of a low bit rate, the IC mode and the AC mode can additionally be included, as shown in Figure 2D.

In Figures 2A and 2C, the UC mode can be selected when the speech signal is an unvoiced sound or noise having characteristics similar to an unvoiced sound. The VC mode can be selected when the speech signal is a voiced sound. The TC mode can be used to encode a signal of a transition interval in which characteristics of the speech signal change rapidly.
The GC mode can be used to encode other signals. The UC mode, the VC mode, the TC mode, and the GC mode are based on definitions and classification criteria disclosed in ITU-T G.718, but are not limited thereto. In Figures 2B and 2D, the IC mode can be selected for silence, and the AC mode can be selected when characteristics of the speech signal are close to those of audio.

The encoding modes can be further classified according to the bands of the speech signal. The bands of the speech signal can be classified into, for example, a Narrowband (NB), a Wideband (WB), a Super Wideband (SWB), and a Full Band (FB). The NB can have a bandwidth of approximately 300 Hz to approximately 3,400 Hz or from approximately 50 Hz to approximately 4,000 Hz, the WB can have a bandwidth of approximately 50 Hz to approximately 7,000 Hz or from approximately 50 Hz to approximately 8,000 Hz, the SWB can have a bandwidth of approximately 50 Hz to approximately 14,000 Hz or from approximately 50 Hz to approximately 16,000 Hz, and the FB can have a bandwidth of up to approximately 20,000 Hz. Here, the numerical values related to bandwidths are defined for convenience and are not limited thereto. In addition, the classification of the bands can be defined more simply or with more complexity than in the description above.

The variable mode encoder 119 of Figure 1 can encode the LPC coefficients using different coding algorithms corresponding to the encoding modes shown in Figures 2A to 2D. When the types of encoding modes and the number of encoding modes are determined, a codebook may need to be trained again using speech signals corresponding to the determined encoding modes.

Table 1 shows an example of quantization schemes and structures in a case of four encoding modes. Here, a quantization method not using interframe prediction can be referred to as a safety-net scheme, and a quantization method using interframe prediction can be referred to as a predictive scheme.
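The band classification above can be sketched as a simple mapping from the upper band edge to the band class, using the approximate upper edges quoted in the text (4 kHz, 8 kHz, 16 kHz, and 20 kHz). The function name and the choice of a single threshold per class are illustrative.

```python
# Sketch mapping a signal's upper band edge in Hz to the band classes
# quoted above. Thresholds use the wider of the two ranges quoted for
# each band; this simplification is an illustrative assumption.

def classify_band(upper_edge_hz):
    if upper_edge_hz <= 4000:
        return "NB"     # narrowband
    if upper_edge_hz <= 8000:
        return "WB"     # wideband
    if upper_edge_hz <= 16000:
        return "SWB"    # super wideband
    return "FB"         # full band
```

As the text notes, a real classifier may be simpler or more complex; this only fixes the quoted numeric boundaries in code.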
In addition, VQ denotes a vector quantizer, and BC-TCQ denotes a Block-Constrained Trellis-Coded Quantizer.

Table 1

The encoding modes can be changed according to an applied bit rate. As described above, to quantize the LPC coefficients at a high bit rate using two encoding modes, 40 or 41 bits per frame can be used in the GC mode, and 46 bits per frame can be used in the TC mode.

Figure 3 is a block diagram of an LPC coefficient quantizer 300 according to an exemplary embodiment. The LPC coefficient quantizer 300 shown in Figure 3 can include a first coefficient converter 311, a weighting function determiner 313, an Immittance Spectral Frequency (ISF)/Line Spectral Frequency (LSF) quantizer 315, and a second coefficient converter 317. Each component of the LPC coefficient quantizer 300 can be implemented by at least one processor (for example, a central processing unit (CPU)) by being integrated into at least one module.

Referring to Figure 3, the first coefficient converter 311 can convert the LPC coefficients, extracted by performing LP analysis on a frame end of a current or previous frame of a speech signal, into coefficients in another format. For example, the first coefficient converter 311 can convert the LPC coefficients of the frame end of a current or previous frame into one of the formats of LSF coefficients and ISF coefficients. In this case, the ISF coefficients or the LSF coefficients are an example of formats in which the LPC coefficients can be easily quantized.

The weighting function determiner 313 can determine a weighting function related to the importance of the LPC coefficients with respect to the frame end of the current frame and the frame end of the previous frame, using the ISF coefficients or the LSF coefficients converted from the LPC coefficients. The determined weighting function can be used in a process of selecting a quantization path or searching for a codebook index by which weighting errors are minimized in quantization.
For example, the weighting function determiner 313 can determine a weighting function per magnitude and a weighting function per frequency. In addition, the weighting function determiner 313 can determine a weighting function in consideration of at least one of a frequency band, an encoding mode, and spectrum analysis information. For example, the weighting function determiner 313 can derive an optimal weighting function per encoding mode. In addition, the weighting function determiner 313 can derive an optimal weighting function per frequency band. In addition, the weighting function determiner 313 can derive an optimal weighting function based on frequency analysis information of the speech signal. The frequency analysis information may include spectrum tilt information. The weighting function determiner 313 will be described in more detail below.

The ISF/LSF quantizer 315 can quantize the ISF coefficients or the LSF coefficients converted from the LPC coefficients of the frame end of the current frame. The ISF/LSF quantizer 315 can obtain an optimal quantization index in an input quantization mode. The ISF/LSF quantizer 315 can quantize the ISF coefficients or the LSF coefficients using the weighting function determined by the weighting function determiner 313. The ISF/LSF quantizer 315 can quantize the ISF coefficients or the LSF coefficients by selecting one of a plurality of quantization paths using the weighting function determined by the weighting function determiner 313. As a result of the quantization, a quantization index of the ISF coefficients or the LSF coefficients, and Quantized ISF (QISF) coefficients or Quantized LSF (QLSF) coefficients with respect to the frame end of the current frame, can be obtained.

The second coefficient converter 317 can convert the QISF coefficients or the QLSF coefficients into Quantized LPC (QLPC) coefficients.

A relationship between vector quantization of LPC coefficients and a weighting function will now be described.
Vector quantization indicates a process of selecting a codebook index having the smallest error using a squared error distance measure, under the assumption that all entries in a vector are of equal importance. However, since the importance differs for each of the LPC coefficients, if the errors of important coefficients are reduced, a perceptual quality of a final synthesized signal can increase. Thus, when the LSF coefficients are quantized, encoding devices can increase the performance of a synthesized signal by applying a weighting function representing the importance of each of the LSF coefficients to the squared error distance measure and selecting an optimal codebook index.

According to an exemplary embodiment, a magnitude weighting function can be determined on the basis of how much each of the ISF or LSF coefficients actually affects a spectral envelope, using frequency information and actual spectral magnitudes of the ISF or LSF coefficients. According to an exemplary embodiment, additional quantization efficiency can be obtained by combining the magnitude weighting function and a frequency weighting function that considers perceptual characteristics and a formant distribution in the frequency domain. According to an exemplary embodiment, since the actual magnitude of the frequency domain is used, information of the whole frequency range can be well reflected, and a weight of each of the ISF or LSF coefficients can be correctly derived.

According to an exemplary embodiment, when vector quantization of the ISF or LSF coefficients converted from the LPC coefficients is performed, if the importance of each coefficient is different, a weighting function indicating which entry of a vector is relatively more important can be determined. In addition, a weighting function capable of weighting a high-energy portion more heavily, by analyzing a spectrum of a frame to be encoded, can be determined to improve encoding accuracy. High spectral energy indicates a high correlation in the time domain.
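The weighted squared-error codebook search described above can be sketched directly: the selected index minimizes the sum of w(i) times the squared per-coefficient error. The weights, target vector, and two-entry codebook below are illustrative data, not values from this disclosure.

```python
# Weighted squared-error codebook search: the index minimizing
# sum_i w(i) * (x(i) - c(i))^2 is selected, so coefficients with larger
# weights must be matched more accurately. All data is illustrative.

def weighted_error(w, x, entry):
    return sum(wi * (xi - ci) ** 2 for wi, xi, ci in zip(w, x, entry))

def search_codebook(w, x, codebook):
    """Return the index of the codebook entry minimizing the error."""
    return min(range(len(codebook)),
               key=lambda k: weighted_error(w, x, codebook[k]))

codebook = [[0.0, 0.0], [1.0, 0.5]]
# A weight of 2 on coefficient 0 makes matching it closely decisive.
best = search_codebook([2.0, 1.0], [1.0, 0.0], codebook)
```

With an unweighted distance the two entries would be closer in cost; the weight on the first coefficient pushes the search toward the entry matching it exactly.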
An example of applying such a weighting function to an error function is now described. First, if the variation of an input signal is high, when quantization is performed without using interframe prediction, an error function for searching a codebook index using the QISF coefficients can be represented by Equation 1 below. Otherwise, if the variation of the input signal is low, when quantization is performed using interframe prediction, an error function for searching a codebook index using the QISF coefficients can be represented by Equation 2. A codebook index indicates a value minimizing the corresponding error function.

Ewerr(k) = Σ_{i=0}^{p-1} w(i) · [z(i) − c_z^k(i)]²   (Equation 1)

Ewerr(p) = Σ_{i=0}^{p-1} w(i) · [r(i) − c_r^p(i)]²   (Equation 2)

Here, w(i) denotes a weighting function, z(i) and r(i) denote inputs of a quantizer, z(i) denotes a vector in which a mean value is removed from ISF(i) in Figure 3, and r(i) denotes a vector in which an interframe predictive value is removed from z(i). Ewerr(k) can be used to search a codebook in the case where interframe prediction is not performed, and Ewerr(p) can be used to search a codebook in the case where interframe prediction is performed. In addition, c(i) denotes a codebook, and p denotes an order of the ISF coefficients, which is usually 10 in the NB and 16 to 20 in the WB.

According to an exemplary embodiment, encoding devices can determine an optimal weighting function by combining a magnitude weighting function, which uses spectral magnitudes corresponding to the frequencies of the ISF or LSF coefficients converted from the LPC coefficients, and a frequency weighting function that considers the perceptual characteristics and a formant distribution of an input signal.

Figure 4 is a block diagram of a weighting function determiner 400 according to an exemplary embodiment. The weighting function determiner 400 is shown together with a window processor 421, a frequency mapping unit 423, and a magnitude calculator 425 of a spectrum and LP analyzer 410.
Referring to Figure 4, the window processor 421 can apply a window to an input signal. The window can be a rectangular window, a Hamming window, or a sine window. The frequency mapping unit 423 can map the input signal in the time domain to an input signal in the frequency domain. For example, the frequency mapping unit 423 can transform the input signal to the frequency domain through a Fast Fourier Transform (FFT) or a Modified Discrete Cosine Transform (MDCT). The magnitude calculator 425 can calculate the magnitudes of the frequency spectrum bins of the input signal transformed to the frequency domain. The number of frequency spectrum bins can be the same as the number used by the weighting function determiner 400 to normalize the ISF or LSF coefficients. Spectrum analysis information can be input to the weighting function determiner 400 as a result of the analysis performed by the spectrum and LP analyzer 410. In this case, the spectrum analysis information can include a spectrum tilt. The weighting function determiner 400 can normalize the ISF or LSF coefficients converted from the LPC coefficients. The range to which normalization is actually applied within the pth-order ISF coefficients covers orders 0 to p-2. Usually, the ISF coefficients of orders 0 to p-2 exist between 0 and π. The weighting function determiner 400 can perform normalization with the same number K as the number of frequency spectrum bins derived by the frequency mapping unit 423, in order to use the spectrum analysis information. The weighting function determiner 400 can determine a magnitude weighting function W1(n), with which the ISF or LSF coefficients affect the spectral envelope for a mid-subframe, using the spectrum analysis information. For example, the weighting function determiner 400 can determine the magnitude weighting function W1(n) using the frequency information of the ISF or LSF coefficients and the actual spectral magnitudes of the input signal.
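The windowing, frequency mapping, and magnitude calculation steps of blocks 421, 423, and 425 can be sketched as follows. A naive DFT stands in for the FFT or MDCT, and all names are illustrative rather than taken from the patent:

```python
import math

def hamming(N):
    # Hamming window of length N
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def magnitude_spectrum(x, K):
    # naive DFT magnitudes for bins 0..K-1 (a stand-in for an FFT)
    N = len(x)
    mags = []
    for k in range(K):
        re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mags.append(math.hypot(re, im))
    return mags

def analyze(frame, K):
    # window the frame, then return K spectrum-bin magnitudes
    w = hamming(len(frame))
    return magnitude_spectrum([a * b for a, b in zip(frame, w)], K)
```

In a real codec the frame lengths and bin counts are fixed by the standard; here K is simply the number of bins the weighting function determiner would use for normalization.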
The magnitude weighting function W1(n) can be determined for the ISF or LSF coefficients converted from the LPC coefficients. The weighting function determiner 400 can determine the magnitude weighting function W1(n) using the magnitude of the frequency spectrum bin corresponding to each of the ISF or LSF coefficients. The weighting function determiner 400 can determine the magnitude weighting function W1(n) using the magnitudes of the spectrum bin corresponding to each of the ISF or LSF coefficients and of at least one adjacent spectrum bin located around that spectrum bin. In this case, the weighting function determiner 400 can determine the magnitude weighting function W1(n) related to the spectral envelope by extracting a representative value from each spectrum bin and at least one adjacent spectrum bin. An example of the representative value is the maximum value, the mean value, or an intermediate value of the spectrum bin corresponding to each of the ISF or LSF coefficients and at least one adjacent spectrum bin. The weighting function determiner 400 can determine a frequency weighting function W2(n) using the frequency information of the ISF or LSF coefficients. In detail, the weighting function determiner 400 can determine the frequency weighting function W2(n) using the perceptual characteristics and the formant distribution of the input signal. In this case, the weighting function determiner 400 can extract the perceptual characteristics of the input signal according to a bark scale. Then, the weighting function determiner 400 can determine the frequency weighting function W2(n) based on the first formant of the formant distribution.
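The magnitude weighting step described above, taking a representative value of each mapped spectrum bin and its immediate neighbors, can be sketched as follows. The mapping from coefficients to bin indices is assumed to be given, and the maximum is used as the representative value (the mean or an intermediate value could be substituted):

```python
def magnitude_weights(bin_indices, mags, rep=max):
    # W1(n): representative value (here the max) of the spectrum bin mapped
    # to each ISF/LSF coefficient and its immediate neighbours
    K = len(mags)
    w1 = []
    for b in bin_indices:
        lo, hi = max(b - 1, 0), min(b + 1, K - 1)
        w1.append(rep(mags[lo:hi + 1]))
    return w1
```

Coefficients that fall near spectral peaks thus receive larger weights, so their quantization errors cost more in the codebook search.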
The frequency weighting function W2(n) can give a relatively low weight at very low frequencies and at high frequencies, and a constant weight over a certain interval of the low-frequency range, for example, an interval corresponding to the first formant. The weighting function determiner 400 can determine a final weighting function W(n) by combining the magnitude weighting function W1(n) and the frequency weighting function W2(n). In this case, the weighting function determiner 400 can determine the final weighting function W(n) by multiplying the magnitude weighting function W1(n) by the frequency weighting function W2(n), or by adding them. As another example, the weighting function determiner 400 can determine the magnitude weighting function W1(n) and the frequency weighting function W2(n) by considering a coding mode and frequency band information of the input signal. To do this, the weighting function determiner 400 can check the coding modes of the input signal for a case where the input signal bandwidth is an NB and a case where the input signal bandwidth is a WB, by checking the bandwidth of the input signal. When the coding mode of the input signal is the UC mode, the weighting function determiner 400 can determine and combine the magnitude weighting function W1(n) and the frequency weighting function W2(n) for the UC mode. When the coding mode of the input signal is not the UC mode, the weighting function determiner 400 can determine and combine the magnitude weighting function W1(n) and the frequency weighting function W2(n) for the VC mode. If the coding mode of the input signal is the GC mode or the TC mode, the weighting function determiner 400 can determine a weighting function through the same process as in the VC mode. For example, when the input signal is transformed to the frequency domain by the FFT algorithm, the magnitude weighting function W1(n) using the spectral magnitudes of the FFT coefficients can be determined by Equation 3 below.
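The mode-dependent combination of W1(n) and W2(n) described above can be sketched as follows. The W2 vectors are assumed to be precomputed from Equations 4 and 5 (their constants are signal-dependent), and multiplication is used as the combining operation:

```python
def final_weighting(w1, w2_uc, w2_vc, mode):
    # W(n) = W1(n) * W2(n); the UC mode uses the UC frequency weighting,
    # while GC, VC, and TC fall back to the VC weighting
    w2 = w2_uc if mode == "UC" else w2_vc
    return [a * b for a, b in zip(w1, w2)]
```

Addition instead of multiplication would correspond to the alternative combination mentioned in the text; multiplication makes a coefficient important only when both the magnitude and the frequency criteria mark it as such.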
For example, the frequency weighting function W2(n) in the VC mode can be determined by Equation 4, and the frequency weighting function W2(n) in the UC mode can be determined by Equation 5. The constants in Equations 4 and 5 can be changed according to the characteristics of the input signal. The finally derived weighting function W(n) can be determined by Equation 6. Figure 5 is a block diagram of an LPC coefficient quantizer according to an exemplary embodiment. Referring to Figure 5, the LPC coefficient quantizer 500 may include a weighting function determiner 511, a quantization path determiner 513, a first quantization scheme 515, and a second quantization scheme 517. As the weighting function determiner 511 has been described with reference to Figure 4, a description of it is omitted here. The quantization path determiner 513 can determine that one of a plurality of paths, including a first path not using interframe prediction and a second path using interframe prediction, is selected as the quantization path of an input signal, based on a criterion, before quantizing the input signal. The first quantization scheme 515 can quantize the input signal provided from the quantization path determiner 513 when the first path is selected as the quantization path of the input signal. The first quantization scheme 515 can include a first quantizer (not shown) to roughly quantize the input signal and a second quantizer (not shown) to precisely quantize a quantization error signal between the input signal and an output signal of the first quantizer. The second quantization scheme 517 can quantize the input signal provided from the quantization path determiner 513 when the second path is selected as the quantization path of the input signal. The second quantization scheme 517 can include an element to perform block-constrained trellis-coded quantization (BC-TCQ) on a predictive error between the input signal and an interframe predictive value, and an interframe prediction element.
The first quantization scheme 515 is a quantization scheme not using interframe prediction and can be called a safety-net scheme. The second quantization scheme 517 is a quantization scheme using interframe prediction and can be called a predictive scheme. The first quantization scheme 515 and the second quantization scheme 517 are not limited to the current exemplary embodiment and can be implemented using the first and second quantization schemes according to the various exemplary embodiments described below, respectively. Consequently, an optimal quantizer can be selected according to the bit rate, from a low bit rate for a high-efficiency interactive voice service to a high bit rate for providing a differentiated-quality service. Figure 6 is a block diagram of a quantization path determiner according to an exemplary embodiment. Referring to Figure 6, the quantization path determiner 600 may include a predictive error calculator 611 and a quantization scheme selector 613. The predictive error calculator 611 can calculate a predictive error in various ways upon receiving an interframe predictive value p(n), a weighting function w(n), and an LSF coefficient z(n) from which a Direct Current (DC) value is removed. First, an interframe predictor (not shown) that is the same as that used in the second quantization scheme, that is, the predictive scheme, can be used. Here, either an Auto-Regressive (AR) method or a Moving Average (MA) method can be used. The z(n) signal of a previous frame for interframe prediction can use either a quantized value or a non-quantized value. In addition, a predictive error can be obtained with or without applying the weighting function w(n). Consequently, the total number of combinations is 8, 4 of which are as follows: First, a weighted AR predictive error using a quantized signal of the previous frame can be represented by Equation 7. Second, an AR predictive error using the quantized signal of the previous frame can be represented by Equation 8.
Third, a weighted AR predictive error using the z(n) signal of the previous frame can be represented by Equation 9. Fourth, an AR predictive error using the z(n) signal of the previous frame can be represented by Equation 10. In Equations 7 to 10, M denotes the order of the LSF coefficients, M is normally 16 when the bandwidth of the input speech signal is a WB, and ρ(i) denotes a predictive coefficient of the AR method. As described above, information regarding the immediately previous frame is generally used, and a quantization scheme can be determined using a predictive error obtained as described above. In addition, for a case where information regarding the previous frame does not exist due to frame errors in the previous frame, a second predictive error can be obtained using the frame immediately before the previous frame, and a quantization scheme can be determined using the second predictive error. In this case, the second predictive error can be represented by Equation 11 below, by comparison with Equation 7. The quantization scheme selector 613 determines a quantization scheme for the current frame using at least one of the predictive error obtained by the predictive error calculator 611 and the coding mode obtained by the coding mode determiner (115 of Figure 1). Figure 7A is a flowchart illustrating an operation of the quantization path determiner of Figure 6, according to an exemplary embodiment. As an example, 0, 1, and 2 can be used as prediction modes. In prediction mode 0, only the safety-net scheme can be used, and in prediction mode 1, only the predictive scheme can be used. In prediction mode 2, the safety-net scheme and the predictive scheme can be switched. A signal to be coded in prediction mode 0 has a non-stationary characteristic. A non-stationary signal has a large variation between neighboring frames.
Therefore, if interframe prediction is performed on the non-stationary signal, the prediction error can be greater than the original signal, which results in deterioration of the performance of the quantizer. A signal to be coded in prediction mode 1 has a stationary characteristic. As a stationary signal has a small variation between neighboring frames, its interframe correlation is high. Optimal performance can be obtained by performing quantization in prediction mode 2 on a signal in which a non-stationary characteristic and a stationary characteristic are mixed. Even when a signal has both a non-stationary characteristic and a stationary characteristic, either prediction mode 0 or prediction mode 1 can be established, based on a mixing ratio. The mixing ratio for establishing prediction mode 2 can be defined in advance as an optimal value, experimentally or through simulations. With reference to Figure 7A, in operation 711, it is determined whether the prediction mode of the current frame is 0, that is, whether the speech signal of the current frame has a non-stationary characteristic. As a result of the determination in operation 711, if the prediction mode is 0, for example when the variation of the speech signal of the current frame is large as in the TC mode or the UC mode, interframe prediction is difficult, and therefore the safety-net scheme, that is, the first quantization scheme, can be determined as the quantization path in operation 714. As a result of the determination in operation 711, if the prediction mode is not 0, it is determined in operation 712 whether the prediction mode is 1, that is, whether the speech signal of the current frame has a stationary characteristic. As a result of the determination in operation 712, if the prediction mode is 1, as the interframe prediction performance is excellent, the predictive scheme, that is, the second quantization scheme, can be determined as the quantization path in operation 715.
As a result of the determination in operation 712, if the prediction mode is not 1, it is determined that the prediction mode is 2, in which the first quantization scheme and the second quantization scheme are used in a switched manner. For example, when the speech signal of the current frame does not have the non-stationary characteristic, that is, when the prediction mode is 2 in the GC mode or the VC mode, one of the first quantization scheme and the second quantization scheme can be determined as the quantization path considering a predictive error. To do this, it is determined in operation 713 whether a first predictive error between the current frame and the previous frame is greater than or equal to a first threshold. The first threshold can be defined in advance as an optimal value, experimentally or through simulations. For example, in the case of a WB having an order of 16, the first threshold can be set to 2.085975. As a result of the determination in operation 713, if the first predictive error is greater than or equal to the first threshold, the safety-net scheme, that is, the first quantization scheme, can be determined as the quantization path in operation 714. As a result of the determination in operation 713, if the first predictive error is less than the first threshold, the predictive scheme, that is, the second quantization scheme, can be determined as the quantization path in operation 715. Figure 7B is a flowchart illustrating an operation of the quantization path determiner of Figure 6, according to another exemplary embodiment. With reference to Figure 7B, operations 731 to 733 are identical to operations 711 to 713 of Figure 7A, and operation 734, in which a second predictive error between the frame immediately before the previous frame and the current frame is compared with a second threshold, is additionally included. The second threshold can be defined in advance as an optimal value, experimentally or through simulations.
For example, in the case of a WB having an order of 16, the second threshold can be set to (the first threshold × 1.1). As a result of the determination in operation 734, if the second predictive error is greater than or equal to the second threshold, the safety-net scheme, that is, the first quantization scheme, can be determined as the quantization path in operation 735. As a result of the determination in operation 734, if the second predictive error is less than the second threshold, the predictive scheme, that is, the second quantization scheme, can be determined as the quantization path in operation 736. Although the number of prediction modes is 3 in Figures 7A and 7B, the present invention is not limited thereto. In addition, in determining a quantization scheme, further information can be used besides a prediction mode or a predictive error. Figure 8 is a block diagram of a quantization path determiner according to an exemplary embodiment. Referring to Figure 8, the quantization path determiner 800 may include a predictive error calculator 811, a spectrum analyzer 813, and a quantization scheme selector 815. Since the predictive error calculator 811 is identical to the predictive error calculator 611 of Figure 6, a detailed description of it is omitted. The spectrum analyzer 813 can determine the signal characteristics of the current frame by analyzing spectrum information. For example, in the spectrum analyzer 813, a weighted distance D between N (N is an integer greater than 1) previous frames and the current frame can be obtained using the spectral magnitude information in the frequency domain, and when the weighted distance is greater than a threshold, that is, when the interframe variation is large, the safety-net scheme can be determined as the quantization scheme. As the objects to be compared increase as N increases, the complexity increases as N increases. The weighted distance D can be obtained using Equation 12 below.
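The predictive-error computation of Equations 7 to 10 and the path decision of Figures 7A and 7B can be sketched together as follows. The default threshold follows the WB example in the text (2.085975, with the second threshold 1.1 times that); the string labels and names are illustrative:

```python
def ar_prediction_error(z, z_prev, rho, w=None):
    # Weighted AR predictive error (Equation 7/9 form):
    #   E = sum_i w(i) * (z(i) - rho(i) * z_prev(i))**2
    if w is None:
        w = [1.0] * len(z)      # unweighted case (Equation 8/10 form)
    return sum(wi * (zi - ri * zp) ** 2
               for wi, zi, ri, zp in zip(w, z, rho, z_prev))

def choose_path(pred_mode, err1, thr1=2.085975, err2=None, thr_ratio=1.1):
    # Figures 7A/7B: return "safety_net" (first scheme) or "predictive"
    if pred_mode == 0:          # non-stationary: interframe prediction unreliable
        return "safety_net"
    if pred_mode == 1:          # stationary: prediction performs well
        return "predictive"
    # prediction mode 2: switch on the prediction error(s)
    if err1 >= thr1:
        return "safety_net"
    if err2 is not None and err2 >= thr1 * thr_ratio:   # Figure 7B's second test
        return "safety_net"
    return "predictive"
```

Passing err2 corresponds to the Figure 7B variant, where the frame immediately before the previous frame is also consulted in case the previous frame was lost.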
To obtain the weighted distance D with low complexity, the current frame can be compared with the previous frames using only the spectral magnitudes around the frequencies defined by the LSF/ISF coefficients. In this case, the mean value, the maximum value, or an intermediate value of the magnitudes of M frequency bins around a frequency defined by the LSF/ISF coefficients can be compared with the previous frames. In Equation 12, the weighting function Wk(i) can be obtained by Equation 3 described above and is identical to W1(n) of Equation 3. In Dn, n denotes the interval between a previous frame and the current frame. A case of n = 1 indicates the weighted distance between the immediately previous frame and the current frame, and a case of n = 2 indicates the weighted distance between the second previous frame and the current frame. When a Dn value is greater than the threshold, it can be determined that the current frame has the non-stationary characteristic. The quantization scheme selector 815 can determine the quantization path of the current frame upon receiving the predictive errors provided from the predictive error calculator 811, the signal characteristics provided from the spectrum analyzer 813, a prediction mode, and transmission channel information. For example, priorities can be assigned to the pieces of information input to the quantization scheme selector 815, so that they are considered sequentially when the quantization path is selected. For example, when a high Frame Error Rate (FER) mode is included in the transmission channel information, the selection ratio of the safety-net scheme can be set relatively high, or only the safety-net scheme can be selected. The selection ratio of the safety-net scheme can be variably adjusted by adjusting a threshold related to the predictive errors. Figure 9 illustrates information regarding a channel state that can be transmitted from a network end when a codec service is provided.
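The weighted distance Dn of Equation 12 and the resulting stationarity check can be sketched as follows; the names are illustrative:

```python
def weighted_distances(curr_mags, prev_mags_list, w):
    # D_n = sum_i w(i) * (|X_curr(i)| - |X_prev_n(i)|)**2,
    # one value per previous frame n = 1..N
    return [sum(wi * (c - p) ** 2 for wi, c, p in zip(w, curr_mags, prev))
            for prev in prev_mags_list]

def is_non_stationary(curr_mags, prev_mags_list, w, threshold):
    # the current frame is treated as non-stationary (safety-net preferred)
    # when any D_n exceeds the threshold
    return any(d > threshold for d in weighted_distances(curr_mags, prev_mags_list, w))
```

As in the text, increasing the number of previous frames N raises both the reliability of the check and the complexity of computing it.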
When the channel state is poor, channel errors increase and, as a result, the interframe variation can be large, resulting in frequent occurrence of frame errors. Thus, the selection ratio of the predictive scheme as the quantization path is reduced, and the selection ratio of the safety-net scheme is increased. When the channel state is extremely poor, only the safety-net scheme can be used as the quantization path. To do this, a value indicating the channel state, obtained by combining a plurality of pieces of transmission channel information, is expressed with one or more levels. A high level indicates a state in which the probability of a channel error is high. The simplest case is the case where the number of levels is 1, that is, a case where the channel state is determined to be the high FER mode by a high FER mode determiner 911, as shown in Figure 9. As the high FER mode indicates that the channel state is very unstable, coding is performed using the highest selection ratio of the safety-net scheme or using only the safety-net scheme. When the number of levels is plural, the selection ratio of the safety-net scheme can be established level by level. With reference to Figure 9, the algorithm for determining the high FER mode in the high FER mode determiner 911 can be performed using, for example, four pieces of information. In detail, the four pieces of information can be (1) Fast Feedback (FFB) information, which is a Hybrid Automatic Repeat reQuest (HARQ) feedback transmitted to a physical layer, (2) Slow Feedback (SFB) information, which is feedback from network signaling transmitted to a layer higher than the physical layer, (3) In-band Signaling Feedback (ISB) information, which is in-band signaling feedback from an EVS decoder 913 at a far end, and (4) High Sensitivity Frame (HSF) information, which is selected by an EVS encoder 915 with respect to a specific crucial frame to be transmitted in a redundant manner.
Although the FFB information and the SFB information are independent of an EVS codec, the ISB information and the HSF information are dependent on the EVS codec and may require algorithms specific to the EVS codec. The algorithm for determining that the channel state is the high FER mode using the four pieces of information can be expressed using, for example, the code shown in Tables 2 to 4. Table 2 Definitions Table 3 Table 4 As above, the EVS codec can be ordered to enter the high FER mode based on analysis information processed with one or more of the four pieces of information. The analysis information can be, for example, (1) SFBavg derived from an average error rate of N frames calculated using the SFB information, (2) FFBavg derived from an average error rate of N frames calculated using the FFB information, and (3) ISBavg derived from an average error rate of Ni frames calculated using the ISB information, together with thresholds Ts, Tf, and Ti for the SFB information, the FFB information, and the ISB information, respectively. It can be determined that the EVS codec enters the high FER mode based on the result of comparing SFBavg, FFBavg, and ISBavg with the thresholds Ts, Tf, and Ti, respectively. For all conditions, HiOK, indicating whether each codec commonly supports the high FER mode, can be verified. The high FER mode determiner 911 can be included as a component of the EVS encoder 915 or of an encoder of another format. Alternatively, the high FER mode determiner 911 can be implemented in an external device other than the EVS encoder 915 or an encoder of another format. Figure 10 is a block diagram of an LPC coefficient quantizer 1000 according to another exemplary embodiment. Referring to Figure 10, the LPC coefficient quantizer 1000 may include a quantization path determiner 1010, a first quantization scheme 1030, and a second quantization scheme 1050.
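The high FER mode decision described above can be reduced, in a simplified sketch, to average error rates compared against the thresholds Ts, Tf, and Ti. The real algorithm of Tables 2 to 4 involves more conditions; the 0/1 error histories and names below are illustrative:

```python
def avg_error_rate(history, N):
    # moving-average error rate over the last N frames (1 = erased, 0 = received)
    recent = history[-N:]
    return sum(recent) / len(recent) if recent else 0.0

def high_fer_mode(sfb, ffb, isb, N, Ts, Tf, Ti, hi_ok=True):
    # enter high FER mode if any feedback channel's average error rate
    # exceeds its threshold, provided the codec supports the mode (HiOK)
    if not hi_ok:
        return False
    return (avg_error_rate(sfb, N) > Ts or
            avg_error_rate(ffb, N) > Tf or
            avg_error_rate(isb, N) > Ti)
```

Once this function reports True, the quantization scheme selector would raise the selection ratio of the safety-net scheme, or force it exclusively.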
The quantization path determiner 1010 determines one of a first path including the safety-net scheme and a second path including the predictive scheme as the quantization path of the current frame, based on at least one of a predictive error and a prediction mode. The first quantization scheme 1030 performs quantization without using interframe prediction when the first path is determined as the quantization path, and can include a Multi-Stage Vector Quantizer (MSVQ) 1041 and a Lattice Vector Quantizer (LVQ) 1043. The MSVQ 1041 can preferably include two stages. The MSVQ 1041 generates a quantization index by performing rough vector quantization of the LSF coefficients from which the DC value is removed. The LVQ 1043 generates a quantization index by performing quantization upon receiving the LSF quantization errors between the inversely quantized LSF coefficients output from the MSVQ 1041 and the LSF coefficients from which the DC value is removed. The final QLSF coefficients are generated by adding an output of the MSVQ 1041 and an output of the LVQ 1043 and then adding the DC value to the result of the addition. The first quantization scheme 1030 can implement a very efficient quantizer structure by using a combination of the MSVQ 1041, which has excellent performance at a low bit rate although a large memory size is required for the codebook, and the LVQ 1043, which is efficient at a high bit rate with a small memory size and low complexity. The second quantization scheme 1050 performs quantization using interframe prediction when the second path is determined as the quantization path, and can include a BC-TCQ 1063, which has an intraframe predictor 1065, and an interframe predictor 1061. The interframe predictor 1061 can use either the AR method or the MA method. For example, a first-order AR method is employed. A predictive coefficient is defined in advance, and the vector selected as the optimal vector in the previous frame is used as the past vector for prediction.
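The first-order AR interframe prediction just described reduces to the following sketch, where ρ is the predefined predictive coefficient; the residual is what the BC-TCQ actually quantizes, and the reconstruction mirrors what a decoder would do. Names are illustrative:

```python
def interframe_residual(z, z_prev, rho):
    # predictive scheme: p(n) = rho * z_prev(n); the residual z(n) - p(n)
    # is handed to the BC-TCQ for quantization
    return [zi - rho * zp for zi, zp in zip(z, z_prev)]

def interframe_reconstruct(resid_q, z_prev, rho):
    # decoder side: add the prediction back to the (de)quantized residual
    return [r + rho * zp for r, zp in zip(resid_q, z_prev)]
```

With a stationary signal the residual is small, so the BC-TCQ spends its bits on a narrow distribution; with a non-stationary signal the residual can exceed the original vector, which is exactly why the safety-net path exists.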
The LSF predictive errors obtained from the predictive values of the interframe predictor 1061 are quantized by the BC-TCQ 1063 having the intraframe predictor 1065. Consequently, the characteristic of the BC-TCQ 1063 of having excellent quantization performance with a small memory size and low complexity at a high bit rate can be maximized. As a result, when the first quantization scheme 1030 and the second quantization scheme 1050 are used, an optimal quantizer can be implemented in correspondence with the characteristics of the input speech signal. For example, when 41 bits are used in the LPC coefficient quantizer 1000 to quantize a speech signal in the GC mode with a WB of an 8-KHz band, 12 bits and 28 bits can be allocated to the MSVQ 1041 and the LVQ 1043 of the first quantization scheme 1030, respectively, excluding 1 bit indicating the quantization path information. In addition, 40 bits can be allocated to the BC-TCQ 1063 of the second quantization scheme 1050, excluding 1 bit indicating the quantization path information. Table 5 shows an example in which the bits are allocated to a WB speech signal of an 8-KHz band. Table 5 Figure 11 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. The LPC coefficient quantizer 1100 shown in Figure 11 has a structure reversed with respect to that shown in Figure 10. Referring to Figure 11, the LPC coefficient quantizer 1100 may include a quantization path determiner 1110, a first quantization scheme 1130, and a second quantization scheme 1150. The quantization path determiner 1110 determines one of a first path including the safety-net scheme and a second path including the predictive scheme as the quantization path of the current frame, based on at least one of a predictive error and a prediction mode.
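The two-stage safety-net structure shared by these quantizers (a rough first stage, precise quantization of the residual by a second stage, then DC restoration) can be sketched with toy codebooks standing in for the MSVQ/VQ and LVQ/BC-TCQ stages. This is a structural illustration only, under assumed codebooks:

```python
def nearest(codebook, x):
    # plain squared-error nearest-codeword search
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, c)))

def safety_net_quantize(lsf, dc, stage1_cb, stage2_cb):
    # first scheme: remove the DC value, coarse-quantize, quantize the
    # residual with a second stage, add the two outputs, restore the DC value
    z = [a - d for a, d in zip(lsf, dc)]
    q1 = nearest(stage1_cb, z)
    resid = [a - b for a, b in zip(z, q1)]
    q2 = nearest(stage2_cb, resid)
    return [a + b + d for a, b, d in zip(q1, q2, dc)]
```

In the actual quantizers the two stages are an MSVQ (or VQ) and an LVQ (or BC-TCQ), and the search inside each stage uses the weighted distortion rather than the plain squared error used here.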
The first quantization scheme 1130 performs quantization without using interframe prediction when the first path is determined as the quantization path, and can include a Vector Quantizer (VQ) 1141 and a BC-TCQ 1143 having an intraframe predictor 1145. The VQ 1141 generates a quantization index by performing rough vector quantization of the LSF coefficients from which the DC value is removed. The BC-TCQ 1143 generates a quantization index by performing quantization upon receiving the LSF quantization errors between the inversely quantized LSF coefficients output from the VQ 1141 and the LSF coefficients from which the DC value is removed. The final QLSF coefficients are generated by adding an output of the VQ 1141 and an output of the BC-TCQ 1143 and then adding the DC value to the result of the addition. The second quantization scheme 1150 performs quantization using interframe prediction when the second path is determined as the quantization path, and can include an LVQ 1163 and an interframe predictor 1161. The interframe predictor 1161 can be implemented in the same way as, or similarly to, that of Figure 10. The LSF predictive errors obtained from the predictive values of the interframe predictor 1161 are quantized by the LVQ 1163. Consequently, as the number of bits allocated to the BC-TCQ 1143 is small, the BC-TCQ 1143 has low complexity, and as the LVQ 1163 has low complexity at a high bit rate, quantization can generally be performed with low complexity. For example, when 41 bits are used in the LPC coefficient quantizer 1100 to quantize a speech signal in the GC mode with a WB of an 8-KHz band, 6 bits and 34 bits can be allocated to the VQ 1141 and the BC-TCQ 1143 of the first quantization scheme 1130, respectively, excluding 1 bit indicating the quantization path information. In addition, 40 bits can be allocated to the LVQ 1163 of the second quantization scheme 1150, excluding 1 bit indicating the quantization path information. Table 6 shows an example in which the bits are allocated to a WB speech signal of an 8-KHz band.
Table 6 An optimal index related to the VQ 1141 used in most coding modes can be obtained by searching for an index that minimizes Ewerr(p) of Equation 13. In Equation 13, w(i) denotes the weighting function determined by the weighting function determiner (313 of Figure 3), r(i) denotes an input of the VQ 1141, and c(i) denotes an output of the VQ 1141. That is, an index that minimizes the weighted distortion between r(i) and c(i) is obtained. The distortion measure d(x, y) used in the BC-TCQ 1143 can be represented by Equation 14. According to an exemplary embodiment, the weighted distortion can be obtained by applying a weighting function wk to the distortion measure d(x, y), as represented by Equation 15. That is, an optimal index can be obtained by obtaining the weighted distortion in all stages of the BC-TCQ 1143. Figure 12 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. Referring to Figure 12, the LPC coefficient quantizer 1200 may include a quantization path determiner 1210, a first quantization scheme 1230, and a second quantization scheme 1250. The quantization path determiner 1210 determines one of a first path including the safety-net scheme and a second path including the predictive scheme as the quantization path of the current frame, based on at least one of a predictive error and a prediction mode. The first quantization scheme 1230 performs quantization without using interframe prediction when the first path is determined as the quantization path, and can include a VQ or MSVQ 1241 and an LVQ or TCQ 1243. The VQ or MSVQ 1241 generates a quantization index by performing rough vector quantization of the LSF coefficients from which the DC value is removed. The LVQ or TCQ 1243 generates a quantization index by performing quantization upon receiving the LSF quantization errors between the inversely quantized LSF coefficients output from the VQ or MSVQ 1241 and the LSF coefficients from which the DC value is removed.
The final QLSF coefficients are generated by adding an output of the VQ or MSVQ 1241 and an output of the LVQ or TCQ 1243 and then adding the DC value to the result of the addition. As the VQ or MSVQ 1241 has a good bit-error-rate characteristic although it is highly complex and uses a large amount of memory, the number of stages of the VQ or MSVQ 1241 can be increased from 1 to n in consideration of the overall complexity. For example, when only a first stage is used, the VQ or MSVQ 1241 becomes a VQ, and when two or more stages are used, the VQ or MSVQ 1241 becomes an MSVQ. In addition, as the LVQ or TCQ 1243 has low complexity, the LSF quantization errors can be efficiently quantized. The second quantization scheme 1250 performs quantization using interframe prediction when the second path is determined as the quantization path, and can include an interframe predictor 1261 and an LVQ or TCQ 1263. The interframe predictor 1261 can be implemented in the same way as, or similarly to, that of Figure 10. The LSF predictive errors obtained from the predictive values of the interframe predictor 1261 are quantized by the LVQ or TCQ 1263. Similarly, as the LVQ or TCQ 1263 has low complexity, the LSF predictive errors can be efficiently quantized. Consequently, quantization can generally be performed with low complexity. Figure 13 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. Referring to Figure 13, the LPC coefficient quantizer 1300 may include a quantization path determiner 1310, a first quantization scheme 1330, and a second quantization scheme 1350. The quantization path determiner 1310 determines one of a first path including the safety-net scheme and a second path including the predictive scheme as the quantization path of the current frame, based on at least one of a predictive error and a prediction mode.
The first quantization scheme 1330 performs quantization without using interframe prediction when the first path is determined as the quantization path, and since the first quantization scheme 1330 is the same as that shown in Figure 12, a description of it is omitted. The second quantization scheme 1350 performs quantization using interframe prediction when the second path is determined as the quantization path and may include an interframe predictor 1361, a VQ or MSVQ 1363, and an LVQ or TCQ 1365. The interframe predictor 1361 can be implemented in the same manner as, or in a manner similar to, that in Figure 10. The LSF predictive errors obtained using the predictive values of the interframe predictor 1361 are roughly quantized by the VQ or MSVQ 1363. An error vector between the LSF predictive errors and the quantized LSF predictive errors output from the VQ or MSVQ 1363 is quantized by the LVQ or TCQ 1365. Likewise, as the LVQ or TCQ 1365 has low complexity, the LSF predictive errors can be quantized efficiently. Consequently, quantization can generally be performed with low complexity. Figure 14 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. In comparison to the LPC coefficient quantizer 1200 shown in Figure 12, the LPC coefficient quantizer 1400 differs in that a first quantization scheme 1430 includes a BC-TCQ 1443 having an intraframe predictor 1445 instead of the LVQ or TCQ 1243, and a second quantization scheme 1450 includes a BC-TCQ 1463 having an intraframe predictor 1465 instead of the LVQ or TCQ 1263. For example, when 41 bits are used in the LPC coefficient quantizer 1400 to quantize a speech signal in GC mode with a WB of 8 KHz, 5 bits and 35 bits can be allocated to a VQ 1441 and the BC-TCQ 1443 of the first quantization scheme 1430, respectively, except for 1 bit indicating quantization path information.
In addition, 40 bits can be allocated to the BC-TCQ 1463 of the second quantization scheme 1450, except for 1 bit indicating quantization path information. Figure 15 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. The LPC coefficient quantizer 1500 shown in Figure 15 is a concrete example of the LPC coefficient quantizer 1300 shown in Figure 13, in which an MSVQ 1541 of a first quantization scheme 1530 and an MSVQ 1563 of a second quantization scheme 1550 have two stages. For example, when 41 bits are used in the LPC coefficient quantizer 1500 to quantize a speech signal in GC mode with a WB of 8 KHz, 6 + 6 = 12 bits and 28 bits can be allocated to the two-stage MSVQ 1541 and an LVQ 1543 of the first quantization scheme 1530, respectively, except for 1 bit indicating quantization path information. In addition, 5 + 5 = 10 bits and 30 bits can be allocated to the two-stage MSVQ 1563 and an LVQ 1565 of the second quantization scheme 1550, respectively. Figures 16A and 16B are block diagrams of LPC coefficient quantizers according to other exemplary embodiments. Specifically, the LPC coefficient quantizers 1610 and 1630 shown in Figures 16A and 16B, respectively, can be used to form the safety-net scheme, that is, the first quantization scheme. The LPC coefficient quantizer 1610 shown in Figure 16A can include a VQ 1621 and a TCQ or BC-TCQ 1623 having an intraframe predictor 1625, and the LPC coefficient quantizer 1630 shown in Figure 16B can include a VQ or MSVQ 1641 and a TCQ or LVQ 1643. Referring to Figures 16A and 16B, the VQ 1621 or the VQ or MSVQ 1641 roughly quantizes the entire input vector with a small number of bits, and the TCQ or BC-TCQ 1623 or the TCQ or LVQ 1643 precisely quantizes the LSF quantization errors. When only the safety-net scheme, that is, the first quantization scheme, is used for each frame, a List Viterbi Algorithm (LVA) method can be applied for further performance improvement.
That is, since there is room in terms of complexity compared to a switching method when only the first quantization scheme is used, the LVA method, which achieves a performance improvement by increasing the complexity of a search operation, can be employed. For example, when the LVA method is applied to a BC-TCQ, the LVA structure can be set so that, even though its search complexity increases, its complexity remains less than that of a switching structure. Figures 17A to 17C are block diagrams of LPC coefficient quantizers according to other exemplary embodiments, which in particular have a BC-TCQ structure using a weighting function. Referring to Figure 17A, the LPC coefficient quantizer may include a weighting function determiner 1710 and a quantization scheme 1720 including a BC-TCQ 1721 having an intraframe predictor 1723. Referring to Figure 17B, the LPC coefficient quantizer may include a weighting function determiner 1730 and a quantization scheme 1740 including a BC-TCQ 1743, which has an intraframe predictor 1745, and an interframe predictor 1741. Here, 40 bits can be allocated to the BC-TCQ 1743. Referring to Figure 17C, the LPC coefficient quantizer can include a weighting function determiner 1750 and a quantization scheme 1760 including a BC-TCQ 1763, which has an intraframe predictor 1765, and a VQ 1761. Here, 5 bits and 50 bits can be allocated to the VQ 1761 and the BC-TCQ 1763, respectively. Figure 18 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. Referring to Figure 18, the LPC coefficient quantizer 1800 may include a first quantization scheme 1810, a second quantization scheme 1830, and a quantization path determiner 1850. The first quantization scheme 1810 performs quantization without using interframe prediction and can use a combination of an MSVQ 1821 and an LVQ 1823 to improve quantization performance. The MSVQ 1821 can preferably include two stages.
The MSVQ 1821 generates a quantization index by roughly performing vector quantization of the LSF coefficients from which a DC value is removed. The LVQ 1823 generates a quantization index by quantizing the LSF quantization errors between the inverse QLSF coefficients output from the MSVQ 1821 and the LSF coefficients from which a DC value is removed. Final QLSF coefficients are generated by adding an output of the MSVQ 1821 and an output of the LVQ 1823 and then adding a DC value to the result of the addition. The first quantization scheme 1810 can implement a very efficient quantizer structure by using a combination of the MSVQ 1821, which has excellent performance at a low bit rate, and the LVQ 1823, which is efficient at a low bit rate. The second quantization scheme 1830 performs quantization using interframe prediction and can include a BC-TCQ 1843, which has an intraframe predictor 1845, and an interframe predictor 1841. The LSF predictive errors obtained using the predictive values of the interframe predictor 1841 are quantized by the BC-TCQ 1843 having the intraframe predictor 1845. Consequently, a characteristic of the BC-TCQ 1843, which has excellent quantization performance at a high bit rate, can be maximized. The quantization path determiner 1850 determines one of an output of the first quantization scheme 1810 and an output of the second quantization scheme 1830 as a final quantization output in consideration of a prediction mode and a weighted distortion. As a result, when the first quantization scheme 1810 and the second quantization scheme 1830 are used, an optimal quantizer can be implemented in correspondence with the characteristics of an input speech signal.
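The predictive path just described can be illustrated with a first-order interframe predictor: the prediction formed from the previous frame's quantized LSFs is subtracted, and the resulting predictive error is quantized. The predictor coefficients and codebook below are hypothetical, and a simple nearest-neighbor search stands in for the BC-TCQ:

```python
def predictive_quantize(lsf, prev_qlsf, mean, rho, cb_err):
    """Interframe-predictive quantization sketch (hypothetical predictor and codebook)."""
    # Interframe prediction: mean plus a scaled deviation of the previous quantized LSFs.
    pred = [m + r * (p - m) for m, r, p in zip(mean, rho, prev_qlsf)]
    err = [x - p for x, p in zip(lsf, pred)]            # LSF predictive error
    idx = min(range(len(cb_err)),
              key=lambda k: sum((e - c) ** 2 for e, c in zip(err, cb_err[k])))
    qlsf = [p + c for p, c in zip(pred, cb_err[idx])]   # decoder-side reconstruction
    return idx, qlsf

mean = [0.3, 0.5, 0.7]
rho = [0.8, 0.8, 0.8]                  # hypothetical per-element prediction coefficients
prev_qlsf = [0.35, 0.55, 0.75]
cb_err = [[0.0, 0.0, 0.0], [0.01, 0.01, 0.01], [-0.01, -0.01, -0.01]]
idx, qlsf = predictive_quantize([0.35, 0.55, 0.75], prev_qlsf, mean, rho, cb_err)
```

For a stationary signal the predictive errors stay small, which is why this path pays off in GC or VC mode.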
For example, when 43 bits are used in the LPC coefficient quantizer 1800 to quantize a speech signal in VC mode with a WB of 8 KHz, 12 bits and 30 bits can be allocated to the MSVQ 1821 and the LVQ 1823 of the first quantization scheme 1810, respectively, except for 1 bit indicating quantization path information. In addition, 42 bits can be allocated to the BC-TCQ 1843 of the second quantization scheme 1830, except for 1 bit indicating quantization path information. Table 7 shows an example in which bits are allocated to a WB speech signal of an 8-KHz band. Table 7 Figure 19 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. Referring to Figure 19, the LPC coefficient quantizer 1900 may include a first quantization scheme 1910, a second quantization scheme 1930, and a quantization path determiner 1950. The first quantization scheme 1910 performs quantization without using interframe prediction and can use a combination of a VQ 1921 and a BC-TCQ 1923 having an intraframe predictor 1925 to improve quantization performance. The second quantization scheme 1930 performs quantization using interframe prediction and may include a BC-TCQ 1943, which has an intraframe predictor 1945, and an interframe predictor 1941. The quantization path determiner 1950 determines a quantization path upon receipt of a prediction mode and the weighted distortions of the optimally quantized values obtained by the first quantization scheme 1910 and the second quantization scheme 1930. For example, it is determined whether a prediction mode of a current frame is 0, that is, whether a speech signal of the current frame has a non-stationary characteristic. When the variation of the speech signal of the current frame is large, as in TC mode or UC mode, interframe prediction is difficult, and thus the safety-net scheme, that is, the first quantization scheme 1910, is always determined as the quantization path.
If the prediction mode of the current frame is 1, that is, if the speech signal of the current frame is in GC mode or VC mode and does not have the non-stationary characteristic, the quantization path determiner 1950 determines one of the first quantization scheme 1910 and the second quantization scheme 1930 as the quantization path in consideration of the predictive errors. To do this, the weighted distortion of the first quantization scheme 1910 is considered first so that the LPC coefficient quantizer 1900 is robust against frame errors. That is, if the weighted distortion value of the first quantization scheme 1910 is less than a predefined threshold, the first quantization scheme 1910 is selected regardless of the weighted distortion value of the second quantization scheme 1930. In addition, instead of simply selecting the quantization scheme having the smaller weighted distortion value, the first quantization scheme 1910 is selected in consideration of frame errors in the case of equal weighted distortion values. If the weighted distortion value of the first quantization scheme 1910 is a predetermined number of times greater than the weighted distortion value of the second quantization scheme 1930, the second quantization scheme 1930 can be selected. The predetermined number of times can, for example, be set to 1.15. As such, when the quantization path is determined, a quantization index generated by the quantization scheme of the determined quantization path is transmitted. On the assumption that the number of prediction modes is 3, the selection can be implemented so as to select the first quantization scheme 1910 when the prediction mode is 0, select the second quantization scheme 1930 when the prediction mode is 1, and select one of the first quantization scheme 1910 and the second quantization scheme 1930 as the quantization path when the prediction mode is 2.
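The selection rule above can be sketched as follows; the factor 1.15 is the value given in the text, while the threshold value and the function name are hypothetical:

```python
def select_quantization_path(prediction_mode, wd_safety, wd_predictive,
                             threshold=2.0, factor=1.15):
    """Return 0 for the safety-net (first) scheme or 1 for the predictive (second) scheme.

    prediction_mode 0: non-stationary frame (TC/UC mode) -> always safety-net.
    prediction_mode 1: stationary frame (GC/VC mode)     -> compare weighted distortions.
    The threshold value here is illustrative; 1.15 is the factor given in the text.
    """
    if prediction_mode == 0:
        return 0
    if wd_safety < threshold:               # safety-net is good enough: stay robust to frame errors
        return 0
    if wd_safety > factor * wd_predictive:  # predictive scheme is clearly better
        return 1
    return 0                                # equal (or near-equal) distortions favor the safety net

path = select_quantization_path(1, wd_safety=4.0, wd_predictive=3.0)
```

Biasing the comparison by the factor is what keeps error propagation from the predictive path in check: the predictive scheme has to win by a margin, not just by a hair.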
For example, when 37 bits are used in the LPC coefficient quantizer 1900 to quantize a speech signal in GC mode with a WB of 8 KHz, 2 bits and 34 bits can be allocated to the VQ 1921 and the BC-TCQ 1923 of the first quantization scheme 1910, respectively, except for 1 bit indicating quantization path information. In addition, 36 bits can be allocated to the BC-TCQ 1943 of the second quantization scheme 1930, except for 1 bit indicating quantization path information. Table 8 shows an example in which bits are allocated to a WB speech signal of an 8-KHz band. Table 8 Figure 20 is a block diagram of an LPC coefficient quantizer according to another exemplary embodiment. Referring to Figure 20, the LPC coefficient quantizer 2000 can include a first quantization scheme 2010, a second quantization scheme 2030, and a quantization path determiner 2050. The first quantization scheme 2010 performs quantization without using interframe prediction and can use a combination of a VQ 2021 and a BC-TCQ 2023 having an intraframe predictor 2025 to improve quantization performance. The second quantization scheme 2030 performs quantization using interframe prediction and can include an LVQ 2043 and an interframe predictor 2041. The quantization path determiner 2050 determines a quantization path upon receipt of a prediction mode and the weighted distortions of the optimally quantized values obtained by the first quantization scheme 2010 and the second quantization scheme 2030. For example, when 43 bits are used in the LPC coefficient quantizer 2000 to quantize a speech signal in VC mode with a WB of 8 KHz, 6 bits and 36 bits can be allocated to the VQ 2021 and the BC-TCQ 2023 of the first quantization scheme 2010, respectively, except for 1 bit indicating quantization path information. In addition, 42 bits can be allocated to the LVQ 2043 of the second quantization scheme 2030, except for 1 bit indicating quantization path information.
Table 9 shows an example in which bits are allocated to a WB speech signal of an 8-KHz band. Table 9 Figure 21 is a block diagram of a quantizer type selector according to an exemplary embodiment. The quantizer type selector 2100 shown in Figure 21 can include a bit rate determiner 2110, a bandwidth determiner 2130, an internal sampling frequency determiner 2150, and a quantizer type determiner 2170. Each of the components can be integrated into at least one module and implemented by at least one processor (for example, a central processing unit (CPU)). The quantizer type selector 2100 can be used in prediction mode 2, in which two quantization schemes are switched. The quantizer type selector 2100 can be included as a component of the LPC coefficient quantizer 117 of Figure 1 or as a component of the sound coding apparatus 100 of Figure 1. Referring to Figure 21, the bit rate determiner 2110 determines a coding bit rate for a speech signal. The coding bit rate can be determined for all frames or in a frame unit. The quantizer type can be changed depending on the coding bit rate. The bandwidth determiner 2130 determines a bandwidth of the speech signal. The quantizer type can be changed depending on the bandwidth of the speech signal. The internal sampling frequency determiner 2150 determines an internal sampling frequency based on an upper limit of the bandwidth used in the quantizer. When the bandwidth of the speech signal is equal to or wider than a WB, that is, a WB, an SWB, or an FB, the internal sampling frequency varies according to whether the upper limit of the encoding bandwidth is 6.4 KHz or 8 KHz. If the upper limit of the encoding bandwidth is 6.4 KHz, the internal sampling frequency is 12.8 KHz, and if the upper limit of the encoding bandwidth is 8 KHz, the internal sampling frequency is 16 KHz. The upper limit of the encoding bandwidth is not limited to these values.
The quantizer type determiner 2170 selects one of an open loop and a closed loop as the quantizer type upon receipt of an output of the bit rate determiner 2110, an output of the bandwidth determiner 2130, and an output of the internal sampling frequency determiner 2150. The quantizer type determiner 2170 can select the open loop as the quantizer type when the coding bit rate is greater than a predetermined reference value, the bandwidth of the speech signal is equal to or wider than a WB, and the internal sampling frequency is 16 KHz. Otherwise, the closed loop can be selected as the quantizer type. Figure 22 is a flowchart illustrating a method of selecting a quantizer type, according to an exemplary embodiment. Referring to Figure 22, in operation 2201, it is determined whether a bit rate is greater than a reference value. The reference value is set to 16.4 Kbps in Figure 22, but is not limited thereto. As a result of the determination in operation 2201, if the bit rate is equal to or less than the reference value, the closed loop type is selected in operation 2209. As a result of the determination in operation 2201, if the bit rate is greater than the reference value, it is determined in operation 2203 whether the bandwidth of an input signal is wider than an NB. As a result of the determination in operation 2203, if the bandwidth of the input signal is an NB, the closed loop type is selected in operation 2209. As a result of the determination in operation 2203, if the bandwidth of the input signal is wider than an NB, that is, if the bandwidth of the input signal is a WB, an SWB, or an FB, it is determined in operation 2205 whether an internal sampling frequency is a predetermined frequency. For example, in Figure 22, the predetermined frequency is defined as 16 KHz. As a result of the determination in operation 2205, if the internal sampling frequency is not the predetermined frequency, the closed loop type is selected in operation 2209.
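The open-loop/closed-loop decision of Figures 21 and 22 can be sketched as follows, using the reference values stated above (16.4 Kbps and 16 KHz); the function name is hypothetical:

```python
def select_quantizer_type(bit_rate_kbps, bandwidth, internal_fs_khz,
                          reference_kbps=16.4):
    """Select the quantizer type per the criteria above.

    bandwidth is one of "NB", "WB", "SWB", "FB"; the open loop is chosen only
    when all three conditions hold, otherwise the closed loop is used.
    """
    wide_enough = bandwidth in ("WB", "SWB", "FB")
    if bit_rate_kbps > reference_kbps and wide_enough and internal_fs_khz == 16:
        return "open loop"
    return "closed loop"

qtype = select_quantizer_type(24.4, "WB", 16)
```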
As a result of the determination in operation 2205, if the internal sampling frequency is 16 KHz, the open loop type is selected in operation 2207. Figure 23 is a block diagram of a sound decoding apparatus according to an exemplary embodiment. Referring to Figure 23, the sound decoding apparatus 2300 may include a parameter decoder 2311, an LPC coefficient dequantizer 2313, a variable mode decoder 2315, and a post-processor 2319. The sound decoding apparatus 2300 may also include an error restorer 2317. Each component of the sound decoding apparatus 2300 can be integrated into at least one module and implemented by at least one processor (for example, a central processing unit (CPU)). The parameter decoder 2311 can decode, from a bit stream, the parameters to be used for decoding. When an encoding mode is included in the bit stream, the parameter decoder 2311 can decode the encoding mode and the parameters corresponding to the encoding mode. LPC coefficient dequantization and excitation decoding can be performed in correspondence with the decoded encoding mode. The LPC coefficient dequantizer 2313 can generate decoded LSF coefficients by dequantizing the quantized ISF or LSF coefficients, the quantized ISF or LSF quantization errors, or the quantized ISF or LSF predictive errors included in the LPC parameters, and can generate LPC coefficients by converting the decoded LSF coefficients. The variable mode decoder 2315 can generate a synthesized signal by performing decoding using the LPC coefficients generated by the LPC coefficient dequantizer 2313. The variable mode decoder 2315 can perform decoding in correspondence with the coding modes shown in Figures 2A to 2D, according to the coding apparatus corresponding to the decoding apparatus. The error restorer 2317, if included, can restore or conceal a current frame of a speech signal when an error occurs in the current frame as a result of the decoding of the variable mode decoder 2315.
The post-processor 2319 can generate a final synthesized signal, that is, a restored sound, by performing various types of filtering and speech-quality-enhancement processing on the synthesized signal generated by the variable mode decoder 2315. Figure 24 is a block diagram of an LPC coefficient dequantizer according to an exemplary embodiment. Referring to Figure 24, the LPC coefficient dequantizer 2400 can include an ISF/LSF dequantizer 2411 and a coefficient converter 2413. The ISF/LSF dequantizer 2411 can generate decoded ISF or LSF coefficients by dequantizing the quantized ISF or LSF coefficients, the quantized ISF or LSF quantization errors, or the quantized ISF or LSF predictive errors included in the LPC parameters, in correspondence with the quantization path information included in a bit stream. The coefficient converter 2413 can convert the decoded ISF or LSF coefficients obtained as a result of the dequantization by the ISF/LSF dequantizer 2411 to Immittance Spectral Pairs (ISPs) or Line Spectral Pairs (LSPs) and perform interpolation for each subframe. The interpolation can be performed using the ISPs/LSPs of a previous frame and the ISPs/LSPs of a current frame. The coefficient converter 2413 can convert the dequantized and interpolated ISPs/LSPs of each subframe to LPC coefficients. Figure 25 is a block diagram of an LPC coefficient dequantizer according to another exemplary embodiment. Referring to Figure 25, the LPC coefficient dequantizer 2500 can include a quantization path determiner 2511, a first dequantization scheme 2513, and a second dequantization scheme 2515. The quantization path determiner 2511 can provide the LPC parameters to one of the first dequantization scheme 2513 and the second dequantization scheme 2515 based on the quantization path information included in a bit stream. For example, the quantization path information can be represented by 1 bit.
The first dequantization scheme 2513 can include an element to roughly dequantize the LPC parameters and an element to precisely dequantize the LPC parameters. The second dequantization scheme 2515 may include an element to perform block-constrained trellis-coded dequantization and an interframe prediction element with respect to the LPC parameters. The first dequantization scheme 2513 and the second dequantization scheme 2515 are not limited to the current exemplary embodiment and can be implemented using the inverse processes of the first and second quantization schemes of the exemplary embodiments described above, according to the coding apparatus corresponding to the decoding apparatus. This configuration of the LPC coefficient dequantizer 2500 can be applied regardless of whether the quantization method is of an open loop type or a closed loop type. Figure 26 is a block diagram of the first dequantization scheme 2513 and the second dequantization scheme 2515 in the LPC coefficient dequantizer 2500 of Figure 25, according to an exemplary embodiment. Referring to Figure 26, a first dequantization scheme 2610 may include a Multi-Stage Vector Quantizer (MSVQ) 2611 to dequantize the quantized LSF coefficients included in the LPC parameters by using a first codebook index generated by an MSVQ (not shown) of an encoding end (not shown), and a Lattice Vector Quantizer (LVQ) 2613 to dequantize the LSF quantization errors included in the LPC parameters by using a second codebook index generated by an LVQ (not shown) of the encoding end. The decoded LSF coefficients are generated by adding the dequantized LSF coefficients obtained by the MSVQ 2611 and the dequantized LSF quantization errors obtained by the LVQ 2613 and then adding a mean value, which is a predetermined DC value, to the result of the addition.
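The reconstruction performed by the first dequantization scheme — look up both codebooks by the received indices, add the two codevectors, then add the mean — can be sketched as follows, with purely illustrative codebooks and indices:

```python
def dequantize_first_scheme(i1, i2, cb_coarse, cb_fine, mean):
    """Decode LSFs from two received codebook indices (first dequantization
    scheme sketch): coarse codevector + fine codevector + predetermined mean."""
    return [c1 + c2 + m
            for c1, c2, m in zip(cb_coarse[i1], cb_fine[i2], mean)]

mean = [0.3, 0.5, 0.7]
cb_coarse = [[-0.1, -0.1, -0.1], [0.0, 0.0, 0.0], [0.1, 0.1, 0.1]]
cb_fine = [[-0.02, 0.0, 0.02], [0.0, 0.0, 0.0], [0.02, 0.0, -0.02]]
lsf = dequantize_first_scheme(2, 1, cb_coarse, cb_fine, mean)
```

The decoder uses exactly the codebooks of the encoding end, so the sum reproduces the encoder's quantized LSFs bit-exactly.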
A second dequantization scheme 2630 may include a Block-Constrained Trellis-Coded Quantizer (BC-TCQ) 2631 to dequantize the LSF predictive errors included in the LPC parameters by using a third codebook index generated by a BC-TCQ (not shown) of the encoding end, an intraframe predictor 2633, and an interframe predictor 2635. The dequantization process starts from the lowest vector among the LSF vectors, and the intraframe predictor 2633 generates a predictive value for a subsequent vector element by using a decoded vector. The interframe predictor 2635 generates predictive values through interframe prediction using the LSF coefficients decoded in a previous frame. Final decoded LSF coefficients are generated by adding the LSF coefficients obtained by the BC-TCQ 2631 and the intraframe predictor 2633 to the predictive values generated by the interframe predictor 2635 and then adding a mean value, which is a predetermined DC value, to the result of the addition. The first dequantization scheme 2610 and the second dequantization scheme 2630 are not limited to the current exemplary embodiment and can be implemented using the inverse processes of the first and second quantization schemes of the exemplary embodiments described above, according to the coding apparatus corresponding to the decoding apparatus. Figure 27 is a flowchart illustrating a quantization method according to an exemplary embodiment. Referring to Figure 27, in operation 2710, a quantization path for a received sound is determined based on a predetermined criterion before quantization of the received sound. In an exemplary embodiment, one of a first path not using interframe prediction and a second path using interframe prediction can be determined. In operation 2730, the quantization path determined between the first path and the second path is checked.
If the first path is determined as the quantization path as a result of the check in operation 2730, the received sound is quantized using a first quantization scheme in operation 2750. On the other hand, if the second path is determined as the quantization path as a result of the check in operation 2730, the received sound is quantized using a second quantization scheme in operation 2770. The process of determining the quantization path in operation 2710 can be carried out through the various exemplary embodiments described above. The quantization processes in operations 2750 and 2770 can be performed using the first and second quantization schemes of the various exemplary embodiments described above, respectively. Although the first and second paths are defined as the selectable quantization paths in the current exemplary embodiment, a plurality of paths including the first and second paths can be established, and the flowchart of Figure 27 can be changed in correspondence with the plurality of established paths. Figure 28 is a flowchart illustrating a dequantization method according to an exemplary embodiment. Referring to Figure 28, in operation 2810, the LPC parameters included in a bit stream are decoded. In operation 2830, a quantization path included in the bit stream is checked, and it is determined in operation 2850 whether the checked quantization path is a first path or a second path. If the quantization path is the first path as a result of the determination in operation 2850, the decoded LPC parameters are dequantized using a first dequantization scheme in operation 2870. If the quantization path is the second path as a result of the determination in operation 2850, the decoded LPC parameters are dequantized using a second dequantization scheme in operation 2890.
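Taken together, Figures 27 and 28 amount to a one-bit dispatch: the encoder transmits the chosen path along with the quantization indices, and the decoder routes the LPC parameters to the matching dequantization scheme. A minimal sketch, with hypothetical stand-ins for the two schemes:

```python
def decode_lpc_parameters(path_bit, lpc_params, first_scheme, second_scheme):
    """Dispatch the decoded LPC parameters to the dequantization scheme
    named by the 1-bit quantization path information (sketch)."""
    if path_bit == 0:
        return first_scheme(lpc_params)    # first path: no interframe prediction
    return second_scheme(lpc_params)       # second path: interframe prediction

# Hypothetical stand-in schemes for illustration only.
out = decode_lpc_parameters(0, [1, 2, 3],
                            first_scheme=lambda p: ("first", p),
                            second_scheme=lambda p: ("second", p))
```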
The dequantization processes in operations 2870 and 2890 can be performed using the inverse processes of the first and second quantization schemes of the various exemplary embodiments described above, respectively, according to the coding apparatus corresponding to the decoding apparatus. Although the first and second paths are established as the checked quantization paths in the current exemplary embodiment, a plurality of paths including the first and second paths can be established, and the flowchart of Figure 28 can be changed in correspondence with the plurality of established paths. The methods of Figures 27 and 28 can be programmed and can be performed by at least one processing device. In addition, the exemplary embodiments can be carried out in a frame unit or in a subframe unit. Figure 29 is a block diagram of an electronic device including an encoding module, according to an exemplary embodiment. Referring to Figure 29, the electronic device 2900 may include a communication unit 2910 and an encoding module 2930. In addition, the electronic device 2900 may further include a storage unit 2950 for storing a bit stream of sound obtained as a result of encoding, according to the use of the sound bit stream. In addition, the electronic device 2900 can also include a microphone 2970. That is, the storage unit 2950 and the microphone 2970 can be optionally included. The electronic device 2900 may further include an arbitrary decoding module (not shown), for example, a decoding module to perform a general decoding function or a decoding module according to an exemplary embodiment. The encoding module 2930 can be implemented by at least one processor (for example, a central processing unit (CPU)) (not shown) by being integrated with other components (not shown) included in the electronic device 2900 as one body.
The communication unit 2910 can receive at least one of a sound and an encoded bit stream provided from the outside, or transmit at least one of a decoded sound and a bit stream of sound obtained as a result of encoding by the encoding module 2930. The communication unit 2910 is configured to transmit and receive data to and from an external electronic device via a wireless network, such as the wireless Internet, a wireless intranet, a wireless telephone network, a Wireless Local Area Network (WLAN), Wi-Fi, Wi-Fi Direct (WFD), third generation (3G) communication, fourth generation (4G) communication, Bluetooth, Infrared Data Association (IrDA), Radio Frequency Identification (RFID), Ultra-WideBand (UWB), Zigbee, or Near Field Communication (NFC), or a wired network, such as a wired telephone network or the wired Internet. The encoding module 2930 can generate a bit stream by selecting one of a plurality of paths, including a first path not using interframe prediction and a second path using interframe prediction, as a quantization path for a sound provided through the communication unit 2910 or the microphone 2970, based on a predetermined criterion before quantization of the sound, quantizing the sound using one of a first quantization scheme and a second quantization scheme according to the selected quantization path, and encoding the quantized sound. The first quantization scheme can include a first quantizer (not shown) to roughly quantize the sound and a second quantizer (not shown) to precisely quantize a quantization error signal between the sound and an output signal of the first quantizer. The first quantization scheme can include an MSVQ (not shown) to quantize the sound and an LVQ (not shown) to quantize a quantization error signal between the sound and an output signal of the MSVQ. In addition, the first quantization scheme can be implemented by one of the various exemplary embodiments described above.
The second quantization scheme can include an interframe predictor (not shown) to perform interframe prediction of the sound, an intraframe predictor (not shown) to perform intraframe prediction of the predictive errors, and a BC-TCQ (not shown) to quantize the predictive errors. Likewise, the second quantization scheme can be implemented by one of the various exemplary embodiments described above. The storage unit 2950 can store the encoded bit stream generated by the encoding module 2930. The storage unit 2950 can store various programs necessary to operate the electronic device 2900. The microphone 2970 can provide a sound of a user or from the outside to the encoding module 2930. Figure 30 is a block diagram of an electronic device including a decoding module, according to an exemplary embodiment. Referring to Figure 30, the electronic device 3000 may include a communication unit 3010 and a decoding module 3030. In addition, the electronic device 3000 may further include a storage unit 3050 for storing a restored sound obtained as a result of decoding, according to the use of the restored sound. In addition, the electronic device 3000 can also include a speaker 3070. That is, the storage unit 3050 and the speaker 3070 can optionally be included. The electronic device 3000 may further include an arbitrary encoding module (not shown), for example, an encoding module to perform a general encoding function or an encoding module according to an exemplary embodiment of the present invention. The decoding module 3030 can be implemented by at least one processor (for example, a central processing unit (CPU)) (not shown) by being integrated with other components (not shown) included in the electronic device 3000 as one body. The communication unit 3010 can receive at least one of a sound and an encoded bit stream provided from the outside, or transmit at least one of a restored sound obtained as a result of decoding by the decoding module 3030 and a bit stream of sound obtained as a result of encoding.
The communication unit 3010 can be implemented substantially like the communication unit 2910 of Figure 29. The decoding module 3030 can generate a restored sound by decoding LPC parameters included in a bit stream provided through the communication unit 3010, dequantizing the decoded LPC parameters using one of a first dequantization scheme not using interframe prediction and a second dequantization scheme using interframe prediction, based on path information included in the bit stream, and decoding the dequantized LPC parameters. When an encoding mode is included in the bit stream, the decoding module 3030 can decode the dequantized LPC parameters in the decoded encoding mode. The first dequantization scheme can include a first dequantizer (not shown) to approximately dequantize the LPC parameters and a second dequantizer (not shown) to precisely dequantize the LPC parameters. The first dequantization scheme can also include an MSVQ (not shown) to dequantize the LPC parameters using a first codebook index and an LVQ (not shown) to dequantize the LPC parameters using a second codebook index. In addition, as the first dequantization scheme performs an inverse operation of the first quantization scheme described with reference to Figure 29, the first dequantization scheme can be implemented by one of the inverse processes of the various exemplary embodiments described above corresponding to the first quantization scheme, according to the encoding apparatus corresponding to the decoding apparatus. The second dequantization scheme can include a BC-TCQ (not shown) to dequantize the LPC parameters using a third codebook index, an intraframe predictor (not shown), and an interframe predictor (not shown). 
Similarly, as the second dequantization scheme performs an inverse operation of the second quantization scheme described with reference to Figure 29, the second dequantization scheme can be implemented by one of the inverse processes of the various exemplary embodiments described above corresponding to the second quantization scheme, according to the encoding apparatus corresponding to the decoding apparatus. The storage unit 3050 can store the restored sound generated by the decoding module 3030. The storage unit 3050 can also store various programs to operate the electronic device 3000. The speaker 3070 can output the restored sound generated by the decoding module 3030 to the outside. Figure 31 is a block diagram of an electronic device including an encoding module and a decoding module, according to an exemplary embodiment. The electronic device 3100 shown in Figure 31 may include a communication unit 3110, an encoding module 3120, and a decoding module 3130. In addition, the electronic device 3100 may further include a storage unit 3140 for storing a sound bit stream obtained as a result of encoding or a restored sound obtained as a result of decoding, according to the use of the sound bit stream or the restored sound. The electronic device 3100 may further include a microphone 3150 and/or a speaker 3160. The encoding module 3120 and the decoding module 3130 can each be implemented by at least one processor (for example, a central processing unit (CPU)) (not shown) by being integrated with the other components (not shown) included in the electronic device 3100 as one body. As the components of the electronic device 3100 shown in Figure 31 correspond to the components of the electronic device 2900 shown in Figure 29 or the components of the electronic device 3000 shown in Figure 30, a detailed description thereof is omitted. 
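The decoder-side switching described earlier can be sketched minimally as below: the bit stream carries path information plus codebook indices, and the decoder applies the inverse of whichever scheme the encoder chose. The codebooks, index layout, and prediction coefficient here are hypothetical toy values.

```python
def dequantize_frame(path, indices, cb1, cb2, err_cb, prev_restored, rho):
    """Inverse of the two encoder paths.
    path 0: scheme without interframe prediction -- sum of two codevectors.
    path 1: predictive scheme -- previous restored frame scaled by rho,
            plus a decoded error codevector."""
    if path == 0:
        i1, i2 = indices
        return [a + b for a, b in zip(cb1[i1], cb2[i2])]
    (idx,) = indices
    return [rho * p + e for p, e in zip(prev_restored, err_cb[idx])]
```

Note that path 0 needs no memory of past frames, which is why a scheme without interframe prediction is the safer choice when the channel state is poor: a lost previous frame cannot corrupt the current one.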
Each of the electronic devices 2900, 3000, and 3100 shown in Figures 29, 30, and 31 may be a voice-only communication terminal, such as a telephone or a mobile phone, a broadcast- or music-only device, such as a TV or an MP3 player, or a hybrid terminal device combining a voice-only communication terminal and a broadcast- or music-only device, but is not limited thereto. In addition, each of the electronic devices 2900, 3000, and 3100 may be used as a client, a server, or a transducer disposed between a client and a server. When the electronic device 2900, 3000, or 3100 is, for example, a mobile phone, although not shown, the electronic device may further include a user input unit, such as a keypad, a display unit for displaying information processed by a user interface of the mobile phone, and a processor (for example, a central processing unit (CPU)) for controlling the functions of the mobile phone. In addition, the mobile phone may further include a camera unit having an image capture function and at least one component for carrying out a function required by the mobile phone. When the electronic device 2900, 3000, or 3100 is, for example, a TV, although not shown, the electronic device may further include a user input unit, such as a keypad, a display unit for displaying received broadcast information, and a processor (for example, a central processing unit (CPU)) for controlling all functions of the TV. In addition, the TV may further include at least one component for carrying out a function of the TV. Contents related to the BC-TCQ used in association with the quantization/dequantization of LPC coefficients are disclosed in detail in United States Patent No. 7630890 ("Block-constrained TCQ method, and method and apparatus for quantizing LSF parameter employing the same in speech coding system"). Contents in association with an LVA method are disclosed in detail in United States Patent Application No. 
20070233473 ("Multi-path trellis coded quantization method and multi-path trellis coded quantizer using the same"). The contents of United States Patent No. 7630890 and United States Patent Application No. 20070233473 are hereby incorporated by reference. The quantization method, the dequantization method, the encoding method, and the decoding method according to the exemplary embodiments can be written as computer programs and can be implemented in general-purpose digital computers that execute the programs using a computer-readable recording medium. In addition, a data structure, a program command, or a data file usable in the exemplary embodiments can be recorded on the computer-readable recording medium in various ways. The computer-readable recording medium is any data storage device that can store data that can thereafter be read by a computer system. Examples of the computer-readable recording medium include magnetic recording media, such as hard disks, floppy disks, and magnetic tapes, optical recording media, such as CD-ROMs and DVDs, magneto-optical recording media, such as magneto-optical disks, and hardware devices, such as ROM, RAM, and flash memories, particularly configured to store and execute a program command. The computer-readable recording medium can also be a transmission medium for transmitting a signal designating a program command, a data structure, or the like. Examples of the program command include machine language codes created by a compiler and high-level language codes that can be executed by a computer through an interpreter. While the present inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present inventive concept as defined by the following claims.
Claims (11) [0001] 1. Quantization apparatus for an input signal that includes at least one of a speech characteristic and an audio characteristic in an encoding device, the apparatus characterized by comprising: a processor configured to: compare a predictive error of linear prediction information of the input signal with a threshold, where the predictive error for a current frame is obtained from an interframe prediction contribution of the current frame, a weighting function, and linear prediction information of the current frame; select one from a plurality of quantization modules, in an open-loop manner, in response to a result of the comparison of the predictive error with the threshold; quantize the current frame without interframe prediction, based on the selected quantization module; or quantize the current frame with interframe prediction, based on the selected quantization module; and transmit a bit stream including a quantization result, for reconstruction of the input signal, where the interframe prediction is performed based on a previous frame. [0002] 2. Apparatus according to claim 1, characterized in that the selected quantization module comprises a lattice-structured quantizer and an intraframe predictor. [0003] 3. Apparatus according to claim 1, characterized in that the selected quantization module comprises a lattice-structured quantizer, an intraframe predictor, and an interframe predictor. [0004] 4. Apparatus according to claim 1, characterized in that the selected quantization module comprises a lattice-structured quantizer and a vector quantizer. [0005] 5. 
Quantization apparatus for an input signal that includes at least one of a speech characteristic and an audio characteristic in an encoding device, the apparatus characterized by comprising: a processor configured to: select one from a plurality of quantization modules, in an open-loop manner, based on a predictive error of linear prediction information of the input signal, where the predictive error for a current frame is obtained from an interframe prediction contribution of the current frame, a weighting function, and linear prediction information of the current frame; quantize the current frame without interframe prediction, based on the selected quantization module; or quantize the current frame with interframe prediction, based on the selected quantization module; and transmit a bit stream including a quantization result, for reconstruction of the input signal, where an encoding mode of the input signal is a speech encoding mode, and where the interframe prediction is performed based on a previous frame. [0006] 6. Apparatus according to claim 5, characterized in that the selected quantization module comprises a lattice-structured quantizer and an intraframe predictor. [0007] 7. Apparatus according to claim 5, characterized in that the selected quantization module comprises a lattice-structured quantizer, an intraframe predictor, and an interframe predictor. [0008] 8. Apparatus according to claim 5, characterized in that the selected quantization module comprises a lattice-structured quantizer and a vector quantizer. [0009] 9. 
Decoding method for an encoded signal that includes at least one of a speech characteristic and an audio characteristic in a decoding device, the method characterized by comprising: receiving a bit stream including the encoded signal; selecting, based on mode information from the bit stream, one of a first decoding module and a second decoding module; when the first decoding module is selected, decoding the bit stream, without interframe prediction, to reconstruct a voice signal or an audio signal; and when the second decoding module is selected, decoding the bit stream, with interframe prediction, to reconstruct the encoded signal, where the first decoding module comprises a block-constrained lattice-structured dequantizer, an intraframe predictor, and a vector dequantizer, where the mode information is generated based on a predictive error of linear prediction information in an encoding device, and where the interframe prediction is performed based on a previous frame. [0010] 10. Method according to claim 9, characterized in that the second decoding module comprises a block-constrained lattice-structured dequantizer, an intraframe predictor, an interframe predictor, and a vector dequantizer. [0011] 11. Method according to claim 9, characterized in that an encoding mode associated with the bit stream is a speech encoding mode.
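The open-loop selection recited in claim 1 (comparing a predictive error against a threshold, without actually running both quantizers) can be sketched as below. The scalar prediction coefficient, the weight vector, and the squared-error measure are simplified placeholders for the weighted predictive-error computation named in the claim.

```python
def predictive_error(frame, prev_frame, rho, weights):
    """Weighted energy of the interframe prediction error of the frame."""
    return sum(w * (f - rho * p) ** 2
               for w, f, p in zip(weights, frame, prev_frame))

def select_path(frame, prev_frame, rho, weights, threshold):
    """Open-loop decision: a large predictive error means the previous frame
    is a poor predictor, so take the path without interframe prediction (0);
    otherwise take the predictive path (1)."""
    err = predictive_error(frame, prev_frame, rho, weights)
    return 0 if err > threshold else 1
```

Because the decision uses only the prediction error, it costs far less than a closed-loop search that quantizes the frame both ways and compares the results.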
Similar technologies:
Publication number | Publication date | Patent title BR112013027093B1|2021-04-13|METHOD FOR QUANTIZING, METHOD FOR DECODING, METHOD FOR ENCODING, AND LEGIBLE RECORDING MEDIA BY NON-TRANSITIONAL COMPUTER KR101997037B1|2019-07-05|Apparatus for quantizing linear predictive coding coefficients, sound encoding apparatus, apparatus for inverse quantizing linear predictive coding coefficients, sound decoding method, recoding medium and electronic device BR112013027092B1|2021-10-13|QUANTIZATION METHOD FOR AN INPUT SIGNAL INCLUDING AT LEAST ONE OF A VOICE FEATURE AND AN AUDIO FEATURE IN AN ENCODING DEVICE, AND DECODING APPARATUS FOR AN ENCODED SIGNAL INCLUDING AT LEAST ONE OF A VOICE CHARACTERISTIC AUDIO IN A DECODING DEVICE
Family patents:
Publication number | Publication date CN103620676B|2016-03-09| EP2700173A2|2014-02-26| BR112013027093A2|2020-08-11| MX354812B|2018-03-22| AU2017268591A1|2017-12-21| CA2833874C|2019-11-05| AU2017268591B2|2018-11-08| TW201729182A|2017-08-16| CN103620676A|2014-03-05| US10229692B2|2019-03-12| TW201243828A|2012-11-01| CN105513602B|2019-08-06| TWI672691B|2019-09-21| JP2017203997A|2017-11-16| KR101997038B1|2019-07-05| RU2647652C1|2018-03-16| WO2012144878A2|2012-10-26| AU2016203627A1|2016-06-16| BR122020023350B1|2021-04-20| US20120278069A1|2012-11-01| US20150162017A1|2015-06-11| JP2014519044A|2014-08-07| KR20120120086A|2012-11-01| KR20180063008A|2018-06-11| JP6178305B2|2017-08-09| AU2012246799B2|2016-03-03| RU2675044C1|2018-12-14| KR101863688B1|2018-06-01| CN105719654B|2019-11-05| SG194579A1|2013-12-30| US20170221494A1|2017-08-03| CN105719654A|2016-06-29| BR122020023363B1|2021-06-01| MX2013012300A|2013-12-06| TWI591621B|2017-07-11| US8977544B2|2015-03-10| CA2833874A1|2012-10-26| AU2016203627B2|2017-08-31| US9626980B2|2017-04-18| ZA201308709B|2021-05-26| EP3537438A1|2019-09-11| RU2013151673A|2015-05-27| RU2619710C2|2017-05-17| CN105513602A|2016-04-20| WO2012144878A3|2013-03-14| EP2700173A4|2014-05-28|
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title JPH0551227B2|1986-03-31|1993-08-02|Fuji Photo Film Co Ltd| JPH08190764A|1995-01-05|1996-07-23|Sony Corp|Method and device for processing digital signal and recording medium| FR2729244B1|1995-01-06|1997-03-28|Matra Communication|SYNTHESIS ANALYSIS SPEECH CODING METHOD| JPH08211900A|1995-02-01|1996-08-20|Hitachi Maxell Ltd|Digital speech compression system| US5699485A|1995-06-07|1997-12-16|Lucent Technologies Inc.|Pitch delay modification during frame erasures| JP2891193B2|1996-08-16|1999-05-17|日本電気株式会社|Wideband speech spectral coefficient quantizer| US6889185B1|1997-08-28|2005-05-03|Texas Instruments Incorporated|Quantization of linear prediction coefficients using perceptual weighting| US5966688A|1997-10-28|1999-10-12|Hughes Electronics Corporation|Speech mode based multi-stage vector quantizer| CN1242379C|1999-08-23|2006-02-15|松下电器产业株式会社|Voice encoder and voice encoding method| US6581032B1|1999-09-22|2003-06-17|Conexant Systems, Inc.|Bitstream protocol for transmission of encoded voice signals| US6604070B1|1999-09-22|2003-08-05|Conexant Systems, Inc.|System of encoding and decoding speech signals| WO2001052241A1|2000-01-11|2001-07-19|Matsushita Electric Industrial Co., Ltd.|Multi-mode voice encoding device and decoding device| US7031926B2|2000-10-23|2006-04-18|Nokia Corporation|Spectral parameter substitution for the frame error concealment in a speech decoder| JP2002202799A|2000-10-30|2002-07-19|Fujitsu Ltd|Voice code conversion apparatus| US6829579B2|2002-01-08|2004-12-07|Dilithium Networks, Inc.|Transcoding method and system between CELP-based speech codes| JP3557416B2|2002-04-12|2004-08-25|松下電器産業株式会社|LSP parameter encoding/decoding apparatus and method| AU2002307889A1|2002-04-22|2003-11-03|Nokia Corporation|Generating lsf vectors| US7167568B2|2002-05-02|2007-01-23|Microsoft Corporation|Microphone array signal enhancement| CA2388358A1|2002-05-31|2003-11-30|Voiceage Corporation|A method and device for multi-rate lattice vector
quantization| US8090577B2|2002-08-08|2012-01-03|Qualcomm Incorported|Bandwidth-adaptive quantization| JP4292767B2|2002-09-03|2009-07-08|ソニー株式会社|Data rate conversion method and data rate conversion apparatus| CN1186765C|2002-12-19|2005-01-26|北京工业大学|Method for encoding 2.3kb/s harmonic wave excidted linear prediction speech| CA2415105A1|2002-12-24|2004-06-24|Voiceage Corporation|A method and device for robust predictive vector quantization of linear prediction parameters in variable bit rate speech coding| KR100486732B1|2003-02-19|2005-05-03|삼성전자주식회사|Block-constrained TCQ method and method and apparatus for quantizing LSF parameter employing the same in speech coding system| US7613606B2|2003-10-02|2009-11-03|Nokia Corporation|Speech codecs| JP4369857B2|2003-12-19|2009-11-25|パナソニック株式会社|Image coding apparatus and image coding method| BRPI0510303A|2004-04-27|2007-10-02|Matsushita Electric Ind Co Ltd|scalable coding device, scalable decoding device, and its method| CN101794579A|2005-01-12|2010-08-04|日本电信电话株式会社|Long-term prediction encoding method, long-term prediction decoding method, and devices thereof| EP1720249B1|2005-05-04|2009-07-15|Harman Becker Automotive Systems GmbH|Audio enhancement system and method| WO2007102782A2|2006-03-07|2007-09-13|Telefonaktiebolaget Lm Ericsson |Methods and arrangements for audio coding and decoding| GB2436191B|2006-03-14|2008-06-25|Motorola Inc|Communication Unit, Intergrated Circuit And Method Therefor| RU2395174C1|2006-03-30|2010-07-20|ЭлДжи ЭЛЕКТРОНИКС ИНК.|Method and device for decoding/coding of video signal| KR100728056B1|2006-04-04|2007-06-13|삼성전자주식회사|Method of multi-path trellis coded quantization and multi-path trellis coded quantizer using the same| JPWO2007132750A1|2006-05-12|2009-09-24|パナソニック株式会社|LSP vector quantization apparatus, LSP vector inverse quantization apparatus, and methods thereof| TW200820791A|2006-08-25|2008-05-01|Lg Electronics Inc|A method and apparatus for decoding/encoding a video signal| 
US7813922B2|2007-01-30|2010-10-12|Nokia Corporation|Audio quantization| RU2420914C1|2007-03-14|2011-06-10|Ниппон Телеграф Энд Телефон Корпорейшн|Method and device for controlling coding speed and data medium storing programme to this end| KR100903110B1|2007-04-13|2009-06-16|한국전자통신연구원|The Quantizer and method of LSF coefficient in wide-band speech coder using Trellis Coded Quantization algorithm| WO2009044346A1|2007-10-05|2009-04-09|Nokia Corporation|System and method for combining adaptive golomb coding with fixed rate quantization| US20090136052A1|2007-11-27|2009-05-28|David Clark Company Incorporated|Active Noise Cancellation Using a Predictive Approach| US20090245351A1|2008-03-28|2009-10-01|Kabushiki Kaisha Toshiba|Moving picture decoding apparatus and moving picture decoding method| US20090319261A1|2008-06-20|2009-12-24|Qualcomm Incorporated|Coding of transitional speech frames for low-bit-rate applications| EP2144230A1|2008-07-11|2010-01-13|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Low bitrate audio encoding/decoding scheme having cascaded switches| ES2683077T3|2008-07-11|2018-09-24|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Audio encoder and decoder for encoding and decoding frames of a sampled audio signal| EP3640941A1|2008-10-08|2020-04-22|Fraunhofer Gesellschaft zur Förderung der Angewand|Multi-resolution switched audio encoding/decoding scheme| MX2012004116A|2009-10-08|2012-05-22|Fraunhofer Ges Forschung|Multi-mode audio signal decoder, multi-mode audio signal encoder, methods and computer program using a linear-prediction-coding based noise shaping.| BR122020024243B1|2009-10-20|2022-02-01|Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E. 
V.|Audio signal encoder, audio signal decoder, method of providing an encoded representation of an audio content and a method of providing a decoded representation of an audio content.| BR122020023363B1|2011-04-21|2021-06-01|Samsung Electronics Co., Ltd|DECODIFICATION METHOD| MX2013012301A|2011-04-21|2013-12-06|Samsung Electronics Co Ltd|Apparatus for quantizing linear predictive coding coefficients, sound encoding apparatus, apparatus for de-quantizing linear predictive coding coefficients, sound decoding apparatus, and electronic device therefor.|KR101747917B1|2010-10-18|2017-06-15|삼성전자주식회사|Apparatus and method for determining weighting function having low complexity for lpc coefficients quantization| MX2013012301A|2011-04-21|2013-12-06|Samsung Electronics Co Ltd|Apparatus for quantizing linear predictive coding coefficients, sound encoding apparatus, apparatus for de-quantizing linear predictive coding coefficients, sound decoding apparatus, and electronic device therefor.| BR122020023363B1|2011-04-21|2021-06-01|Samsung Electronics Co., Ltd|DECODIFICATION METHOD| WO2014118171A1|2013-01-29|2014-08-07|Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.|Low-complexity tonality-adaptive audio signal quantization| WO2015054813A1|2013-10-14|2015-04-23|Microsoft Technology Licensing, Llc|Encoder-side options for intra block copy prediction mode for video and image coding| WO2015081699A1|2013-12-02|2015-06-11|华为技术有限公司|Encoding method and apparatus| US10368091B2|2014-03-04|2019-07-30|Microsoft Technology Licensing, Llc|Block flipping and skip mode in intra block copy prediction| EP3869506A1|2014-03-28|2021-08-25|Samsung Electronics Co., Ltd.|Method and device for quantization of linear prediction coefficient and method and device for inverse quantization| EP3142110A4|2014-05-07|2017-11-29|Samsung Electronics Co., Ltd.|Method and device for quantizing linear predictive coefficient, and method and device for dequantizing same| 
US9959876B2|2014-05-16|2018-05-01|Qualcomm Incorporated|Closed loop quantization of higher order ambisonic coefficients| WO2015192353A1|2014-06-19|2015-12-23|Microsoft Technology Licensing, Llc|Unified intra block copy and inter prediction modes| CN107077856B|2014-08-28|2020-07-14|诺基亚技术有限公司|Audio parameter quantization| KR20180026528A|2015-07-06|2018-03-12|노키아 테크놀로지스 오와이|A bit error detector for an audio signal decoder| CN109690673B|2017-01-20|2021-06-08|华为技术有限公司|Quantizer and quantization method| CN109473116B|2018-12-12|2021-07-20|思必驰科技股份有限公司|Voice coding method, voice decoding method and device| TWI723545B|2019-09-17|2021-04-01|宏碁股份有限公司|Speech processing method and device thereof|
Legal status:
2020-08-18| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]| 2021-02-02| B09A| Decision: intention to grant [chapter 9.1 patent gazette]| 2021-04-13| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 23/04/2012, SUBJECT TO THE APPLICABLE LEGAL CONDITIONS. |
Priority:
Application number | Application date | Patent title US201161477797P| true| 2011-04-21|2011-04-21| US61/477,797|2011-04-21| US201161481874P| true| 2011-05-03|2011-05-03| US61/481,874|2011-05-03| PCT/KR2012/003128|WO2012144878A2|2011-04-21|2012-04-23|Method of quantizing linear predictive coding coefficients, sound encoding method, method of de-quantizing linear predictive coding coefficients, sound decoding method, and recording medium|BR122020023363-0A| BR122020023363B1|2011-04-21|2012-04-23|DECODIFICATION METHOD| BR122020023350-8A| BR122020023350B1|2011-04-21|2012-04-23|quantization method|