METHOD FOR FILTERING A MULTICHANNEL AUDIO SIGNAL, AND SYSTEM FOR IMPROVING SPEECH DETERMINED BY A MULTICHANNEL AUDIO SIGNAL
Patent abstract:
METHOD AND SYSTEM FOR SCALING DUCKING OF SPEECH-RELEVANT CHANNELS IN MULTICHANNEL AUDIO. The present invention relates to a method and system for filtering a multichannel audio signal having a speech channel and at least one non-speech channel, to improve the intelligibility of speech determined by the signal. In typical embodiments, the method includes steps of determining at least one attenuation control value indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by the non-speech channel, and attenuating the non-speech channel in response to the at least one attenuation control value. Typically, the attenuating step includes scaling a raw attenuation control signal (for example, a ducking gain control signal) for the non-speech channel in response to the at least one attenuation control value. Some embodiments are implemented by a general-purpose or special-purpose processor programmed with software or firmware and/or otherwise configured to perform the inventive filtering.
Publication number: BR112012022571B1
Application number: R112012022571-5
Filing date: 2011-02-28
Publication date: 2020-11-17
Inventor: Hannes Muesch
Applicant: Dolby Laboratories Licensing Corporation
Primary IPC class:
Patent description:
Cross-Reference to Related Applications
[0001] This application claims priority to United States Provisional Patent Application No. 61/311,437, filed on March 8, 2010, hereby incorporated by reference in its entirety.
Background of the Invention
1. Field of the Invention
[0002] The present invention relates to systems and methods for improving the intelligibility of human speech (e.g., dialogue) determined by a multichannel audio signal. In some embodiments, the invention is a method and system for filtering an audio signal having a speech channel and a non-speech channel to improve the intelligibility of speech determined by the signal, by determining at least one attenuation control value indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by the non-speech channel, and attenuating the non-speech channel in response to the attenuation control value.
2. Background of the Invention
[0003] Throughout this description, including in the claims, the term "speech" is used in a broad sense to denote human speech. Thus, "speech" determined by an audio signal is audio content of the signal that is perceived as human speech (for example, dialogue, monologue, singing, or other human vocalization) upon reproduction of the signal by a loudspeaker (or other sound-emitting transducer). In accordance with typical embodiments of the invention, the audibility of speech determined by an audio signal is improved relative to other audio content (for example, instrumental music or non-speech sound effects) determined by the signal, thereby improving the intelligibility (for example, clarity or ease of understanding) of the speech.
[0004] Throughout this description, including in the claims, "speech-reinforcing content" of a channel of a multichannel audio signal is content (determined by the channel) that reinforces the intelligibility or other perceived quality of speech content determined by another channel (for example, a speech channel) of the signal.
[0005] Typical embodiments of the invention assume that most of the speech determined by a multichannel audio input signal is determined by the signal's center channel. This assumption is consistent with the convention in surround sound production according to which most speech is generally placed on only one channel (the center channel), while most music, ambient sound, and sound effects are generally mixed into all channels (for example, the Left, Right, Left Surround, and Right Surround channels as well as the Center channel).
[0006] Thus, the center channel of a multichannel audio signal will sometimes be referred to here as the "speech" channel, and all other channels (for example, the Left, Right, Left Surround, and Right Surround channels) of the signal will sometimes be referred to here as "non-speech" channels. Similarly, a "center" channel generated by summing the left and right channels of a stereo signal whose speech is panned to the center is sometimes referred to here as a "speech" channel, and a "side" channel generated by subtracting such a center channel from the left (or right) channel of the stereo signal is sometimes referred to here as a "non-speech" channel.
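The stereo derivation just described (a "speech" channel from the sum of left and right, a "non-speech" side channel from the difference) can be sketched as follows. The 0.5 scaling and the function names are our assumptions, not details given in the text:

```python
import numpy as np

def derive_speech_and_side(left, right):
    """Derive a 'speech' (center) channel and a 'non-speech' (side) channel
    from a stereo pair, per the convention described above.

    Speech panned to the center is identical in L and R, so summing
    reinforces it; subtracting the center estimate from L leaves mostly
    non-speech content. The 0.5 scaling is one common choice.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    center = 0.5 * (left + right)   # derived "speech" channel
    side = left - center            # derived "non-speech" (side) channel
    return center, side

# A centered signal (identical in L and R) survives in `center`
# and cancels entirely in `side`.
t = np.arange(8) / 8.0
tone = np.sin(2 * np.pi * t)
c, s = derive_speech_and_side(tone, tone)
```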
[0007] Throughout this description, including in the claims, the expression performing an operation "on" signals or data (for example, filtering, scaling, or transforming the signals or data) is used in a broad sense to denote performing the operation directly on the signals or data, or on processed versions of the signals or data (for example, on versions of the signals that have undergone preliminary filtering prior to performance of the operation thereon).
[0008] Throughout this description, including in the claims, the term "system" is used in a broad sense to denote a device, system, or subsystem. For example, a subsystem that implements a decoder may be referred to as a decoder system, and a system including such a subsystem (for example, a system that generates X output signals in response to multiple inputs, in which the subsystem generates M of the inputs and the other X - M inputs are received from an external source) may also be referred to as a decoder system.
[0009] Throughout this description, including in the claims, the expression "ratio" of a first value ("A") to a second value ("B") is used in a broad sense to denote A/B, or B/A, or a ratio of a scaled or offset version of one of A and B to a scaled or offset version of the other of A and B (for example, (A + x)/(B + y), where x and y are offset values).
[00010] Throughout this description, including in the claims, the expression "reproduction" of signals by sound-emitting transducers (for example, loudspeakers) denotes causing the transducers to produce sound in response to the signals, including by performing any required amplification and/or other processing of the signals.
[00011] When speech is heard in the presence of competing sounds (such as listening to a friend over the noise of a crowd in a restaurant), a portion of the acoustic features that signal the phonemic content of the speech (the speech cues) is masked by the competing sounds and is no longer available to the listener for decoding the message. As the level of the competing sound increases relative to the level of the speech, the number of speech cues that are received correctly decreases and speech perception becomes progressively more difficult until, at some competing sound level, the speech perception process breaks down. While this relationship holds for all listeners, the competing sound level that can be tolerated at a given speech level is not the same for all listeners. Some listeners, for example those with hearing loss due to aging (presbycusis) or those listening in a language that they acquired after puberty, are less able to tolerate competing sounds than are listeners with good hearing or those operating in their native language.
[00012] The fact that listeners differ in their ability to understand speech in the presence of competing sounds has implications for the level at which ambient sounds and background music in news or entertainment audio are mixed with speech. Listeners with hearing loss or those operating in a foreign language often prefer a lower level of non-speech audio than that provided by the content creator.
[00013] To meet these special needs, it is known to apply attenuation (ducking) to the non-speech channels of a multichannel audio signal, but less (or no) attenuation to the speech channel of the signal, to improve the intelligibility of speech determined by the signal.
[00014] For example, PCT International Application Publication Number WO 2010/011377, naming Hannes Muesch as inventor and assigned to Dolby Laboratories Licensing Corporation (published January 28, 2010), describes that the non-speech channels (for example, the left and right channels) of a multichannel audio signal can mask speech cues in the speech channel (for example, the center channel) to the point that a desired level of speech intelligibility is no longer met. WO 2010/011377 describes how to determine an attenuation function to be applied by a ducking circuit to the non-speech channels in an attempt to unmask the speech in the speech channel while retaining as much of the content creator's intent as possible. The technique described in WO 2010/011377 is based on the assumption that content in a non-speech channel never reinforces the intelligibility (or other perceived quality) of speech content determined by the speech channel.
[00015] The present invention is based in part on the recognition that, while this assumption is correct for the vast majority of multichannel audio content, it is not always valid. The inventor has recognized that when at least one non-speech channel of a multichannel audio signal includes content that reinforces the intelligibility (or other perceived quality) of speech content determined by the speech channel, filtering the signal according to the method of WO 2010/011377 can negatively affect the experience of listening to the reproduced filtered signal. According to typical embodiments of the present invention, application of the method described in WO 2010/011377 is suspended or modified during times when the content does not conform to the assumptions underlying the method of WO 2010/011377.
[00016] There is a need for a method and system for filtering a multichannel audio signal to improve speech intelligibility in the common case in which at least one non-speech channel of the audio signal includes content that reinforces the intelligibility of the speech content in the speech channel of the audio signal.
Brief Description of the Invention
[00017] In a first class of embodiments, the invention is a method for filtering a multichannel audio signal having a speech channel and at least one non-speech channel, to improve the intelligibility of speech determined by the signal. The method includes steps of: (a) determining at least one attenuation control value indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by at least one non-speech channel of the multichannel audio signal; and (b) attenuating the at least one non-speech channel of the multichannel audio signal in response to the at least one attenuation control value. Typically, the attenuating step comprises scaling a raw attenuation control signal (for example, a ducking gain control signal) for the non-speech channel in response to the at least one attenuation control value. Preferably, the non-speech channel is attenuated so as to improve the intelligibility of speech determined by the speech channel without undesirably attenuating speech-reinforcing content determined by the non-speech channel. In some embodiments, each attenuation control value determined in step (a) is indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by one non-speech channel of the audio signal, and step (b) includes the step of attenuating that non-speech channel in response to said each attenuation control value.
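The scaling of a raw attenuation control signal described above might be sketched as follows. The linear scaling, the dB representation, and the convention that a control value near 0 suppresses the ducking are our assumptions; the text requires only that the ducking respond to the control values:

```python
import numpy as np

def scale_ducking_gain(raw_gain_db, attenuation_control):
    """Scale a raw ducking gain control signal (negative dB values
    denote attenuation) by per-interval attenuation control values
    in [0, 1].

    A control value near 0 (the non-speech channel resembles the
    speech channel, so it is likely speech-reinforcing) suppresses the
    ducking; a value near 1 applies the full raw ducking gain.
    """
    raw = np.asarray(raw_gain_db, dtype=float)
    ctrl = np.clip(np.asarray(attenuation_control, dtype=float), 0.0, 1.0)
    return raw * ctrl  # 0 dB (no ducking) when ctrl == 0

raw = np.array([-12.0, -12.0, -12.0])   # raw ducking gains, dB
ctrl = np.array([1.0, 0.5, 0.0])        # similarity-derived control values
scaled = scale_ducking_gain(raw, ctrl)  # [-12.0, -6.0, -0.0]
```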
In some other embodiments, step (a) includes a step of deriving a non-speech channel from at least one non-speech channel of the audio signal, and the at least one attenuation control value is indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by the derived non-speech channel. For example, the derived non-speech channel can be generated by adding or otherwise mixing or combining at least two non-speech channels of the audio signal. Determining each attenuation control value from a single derived non-speech channel can reduce the cost and complexity of implementing some embodiments of the invention, relative to the cost and complexity of determining different subsets of a set of attenuation control values from different non-speech channels. In embodiments in which the input audio signal has at least two non-speech channels, step (b) can include the step of attenuating a subset of the non-speech channels (for example, each non-speech channel from which a derived non-speech channel has been derived), or all of the non-speech channels, in response to the at least one attenuation control value (for example, in response to a single sequence of attenuation control values).
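The derivation of a single channel by mixing several channels, as described above, might look like this minimal sketch; the equal-weight summation and the function name are our choices, since the text allows any mixing or combining:

```python
import numpy as np

def derive_nonspeech_channel(nonspeech_channels, weights=None):
    """Combine several non-speech channels into one derived channel,
    e.g. so that a single shared sequence of attenuation control
    values can be computed from it.

    Equal weights are used when none are given; any mixing or
    combining would satisfy the description above.
    """
    chans = [np.asarray(c, dtype=float) for c in nonspeech_channels]
    if weights is None:
        weights = [1.0 / len(chans)] * len(chans)
    mixed = np.zeros_like(chans[0])
    for w, c in zip(weights, chans):
        mixed += w * c
    return mixed

l = np.array([1.0, 0.0, 2.0])
r = np.array([3.0, 0.0, 0.0])
derived = derive_nonspeech_channel([l, r])  # [2.0, 0.0, 1.0]
```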
[00018] In some embodiments in the first class, step (a) includes a step of generating an attenuation control signal from a sequence of attenuation control values, each of the attenuation control values being indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by the at least one non-speech channel at a different time (for example, in a different time interval), and step (b) includes steps of: scaling a ducking gain control signal in response to the attenuation control signal to generate a scaled gain control signal, and applying the scaled gain control signal to attenuate the at least one non-speech channel (for example, asserting the scaled gain control signal to ducking circuitry to control the attenuation of the at least one non-speech channel by the ducking circuitry). For example, in some such embodiments, step (a) includes a step of comparing a first sequence of speech-related features (indicative of the speech-related content determined by the speech channel) to a second sequence of speech-related features (indicative of the speech-related content determined by the at least one non-speech channel) to generate the attenuation control signal, and each of the attenuation control values indicated by the attenuation control signal is indicative of a measure of similarity between the first sequence of speech-related features and the second sequence of speech-related features at a different time (for example, in a different time interval). In some embodiments, each attenuation control value is a gain control value.
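The comparison of feature sequences just described might be sketched as follows. All concrete choices here (per-frame speech probabilities as the features, mean absolute difference as the similarity measure, the window length) are our assumptions:

```python
import numpy as np

def attenuation_control_signal(feat_speech, feat_nonspeech, win=4):
    """Generate a sequence of attenuation control values by comparing a
    feature sequence from the speech channel with one from a non-speech
    channel, as in step (a).

    The features are assumed to be per-frame speech-probability values
    in [0, 1]; similarity over each window is one minus the mean
    absolute difference. High similarity yields a control value near 0,
    i.e. the ducking of the non-speech channel is suppressed.
    """
    a = np.asarray(feat_speech, dtype=float)
    b = np.asarray(feat_nonspeech, dtype=float)
    ctrl = []
    for start in range(0, len(a) - win + 1, win):
        sim = 1.0 - np.mean(np.abs(a[start:start + win] - b[start:start + win]))
        ctrl.append(1.0 - sim)   # dissimilar content -> full ducking
    return np.array(ctrl)

# Identical feature sequences -> control value 0 (suppress ducking);
# maximally different sequences -> control value 1 (full ducking).
same = attenuation_control_signal([1, 1, 0, 0], [1, 1, 0, 0], win=4)
diff = attenuation_control_signal([1, 1, 1, 1], [0, 0, 0, 0], win=4)
```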
[00019] In some embodiments in the first class, each attenuation control value is monotonically related to the probability that the at least one non-speech channel of the audio signal is indicative of speech-reinforcing content that reinforces the intelligibility (or other perceived quality) of speech content determined by the speech channel. In some other embodiments in the first class, each attenuation control value is monotonically related to an expected speech-enhancement value of the at least one non-speech channel (for example, a measure of the probability that the at least one non-speech channel is indicative of speech-reinforcing content, multiplied by a measure of the perceived quality improvement that speech-reinforcing content determined by the at least one non-speech channel would provide to speech content determined by the multichannel signal). For example, where step (a) includes a step of comparing a first sequence of speech-related features indicative of speech-related content determined by the speech channel to a second sequence of speech-related features indicative of speech-related content determined by the at least one non-speech channel, the first sequence of speech-related features can be a sequence of speech probability values, each indicative of the probability at a different time (for example, in a different time interval) that the speech channel is indicative of speech (rather than other audio content), and the second sequence of speech-related features can also be a sequence of speech probability values, each indicative of the probability at a different time (for example, in a different time interval) that the at least one non-speech channel is indicative of speech. Several methods of automatically generating such sequences of speech probability values from an audio signal are known.
For example, one such method is described by Robinson and Vinton in "Automated Speech/Other Discrimination for Loudness Monitoring" (Audio Engineering Society, Preprint 6437 of Convention 118, May 2005).
[00020] Alternatively, it is contemplated that the sequences of speech probability values could be created manually (for example, by the content creator) and transmitted alongside the multichannel audio signal to the end user.
[00021] In a second class of embodiments, in which the multichannel audio signal has a speech channel and at least two non-speech channels including a first non-speech channel and a second non-speech channel, the inventive method includes steps of: (a) determining at least one first attenuation control value indicative of a measure of similarity between speech-related content determined by the speech channel and second speech-related content determined by the first non-speech channel (for example, including by comparing a first sequence of speech-related features indicative of speech-related content determined by the speech channel to a second sequence of speech-related features indicative of the second speech-related content); and (b) determining at least one second attenuation control value indicative of a measure of similarity between speech-related content determined by the speech channel and third speech-related content determined by the second non-speech channel (for example, including by comparing a third sequence of speech-related features indicative of speech-related content determined by the speech channel to a fourth sequence of speech-related features indicative of the third speech-related content, where the third sequence of speech-related features can be identical to the first sequence of speech-related features of step (a)).
Typically, the method includes the step of attenuating the first non-speech channel (for example, scaling ducking attenuation of the first non-speech channel) in response to the at least one first attenuation control value and attenuating the second non-speech channel (for example, scaling ducking attenuation of the second non-speech channel) in response to the at least one second attenuation control value. Preferably, each non-speech channel is attenuated so as to improve the intelligibility of speech determined by the speech channel without undesirably attenuating speech-reinforcing content determined by that non-speech channel.
[00022] In some embodiments in the second class:
[00023] the at least one first attenuation control value determined in step (a) is a sequence of first attenuation control values, and each of the first attenuation control values is a gain control value for scaling the amount of ducking gain applied to the first non-speech channel by ducking circuitry, so as to improve the intelligibility of speech determined by the speech channel without undesirably reducing the speech-reinforcing content determined by the first non-speech channel; and
[00024] the at least one second attenuation control value determined in step (b) is a sequence of second attenuation control values, and each of the second attenuation control values is a gain control value for scaling the amount of ducking gain applied to the second non-speech channel by ducking circuitry, so as to improve the intelligibility of speech determined by the speech channel without undesirably reducing the speech-reinforcing content determined by the second non-speech channel.
[00025] In a third class of embodiments, the invention is a method for filtering a multichannel audio signal having a speech channel and at least one non-speech channel, to improve the intelligibility of speech determined by the signal.
The method includes steps of: (a) comparing a characteristic of the speech channel and a characteristic of the non-speech channel to generate at least one attenuation value for controlling attenuation of the non-speech channel relative to the speech channel; and (b) adjusting the at least one attenuation value in response to at least one speech-enhancement likelihood value to generate at least one adjusted attenuation value for controlling attenuation of the non-speech channel relative to the speech channel. Typically, the adjusting step is (or includes) scaling each said attenuation value in response to a said speech-enhancement likelihood value to generate an adjusted attenuation value. Typically, each speech-enhancement likelihood value is indicative of (for example, monotonically related to) a probability that the non-speech channel (or a non-speech channel derived from the non-speech channel or from a set of non-speech channels of the input audio signal) is indicative of speech-reinforcing content (content that reinforces the intelligibility or other perceived quality of speech content determined by the speech channel). In some embodiments, the speech-enhancement likelihood value is indicative of an expected speech-enhancement value of the non-speech channel (for example, a measure of the probability that the non-speech channel is indicative of speech-reinforcing content, multiplied by a measure of the perceived quality improvement that the speech-reinforcing content determined by the non-speech channel would provide to speech content determined by the multichannel audio signal).
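The adjusting step (b) just described, scaling each attenuation value in response to a speech-enhancement likelihood value, might be sketched as follows. The specific mapping (attenuation scaled toward 0 dB as the likelihood rises) is our assumption of one plausible monotonic choice:

```python
def adjust_attenuation(attenuation_db, enhancement_likelihood):
    """Adjust raw ducking attenuation values by scaling each one with a
    factor derived from the corresponding speech-enhancement likelihood
    value (step (b) of the third class of embodiments).

    Assumed mapping: when the likelihood that the non-speech channel
    carries speech-reinforcing content is high, the attenuation is
    scaled toward 0 dB. attenuation_db values are negative
    (attenuation).
    """
    return [a * (1.0 - p) for a, p in zip(attenuation_db, enhancement_likelihood)]

adjusted = adjust_attenuation([-12.0, -12.0], [0.0, 0.75])
# full ducking kept when likelihood is 0; reduced to -3 dB at 0.75
```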
In some embodiments in the third class, the at least one speech-enhancement likelihood value is a sequence of comparison values (for example, difference values) determined by a method including a step of comparing a first sequence of speech-related features indicative of speech-related content determined by the speech channel to a second sequence of speech-related features indicative of speech-related content determined by the non-speech channel, and each of the comparison values is a measure of similarity between the first sequence of speech-related features and the second sequence of speech-related features at a different time (for example, in a different time interval). In typical embodiments in the third class, the method also includes the step of attenuating the non-speech channel in response to the at least one adjusted attenuation value. Step (b) may comprise scaling the at least one attenuation value (which typically is, or is determined by, a ducking gain control signal or another raw attenuation control signal) in response to the at least one speech-enhancement likelihood value.
[00026] In some embodiments in the third class, each attenuation value generated in step (a) is a first factor, indicative of the amount of attenuation of the non-speech channel needed to limit the ratio of the signal power in the non-speech channel to the signal power in the speech channel so as not to exceed a predetermined threshold, scaled by a second factor monotonically related to the probability that the speech channel is indicative of speech.
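The power-ratio criterion of paragraph [00026] might be sketched as follows. The linear scaling by the speech probability is one monotonic choice among many, and the parameter names are ours:

```python
import math

def power_ratio_attenuation(p_nonspeech, p_speech, max_ratio, prob_speech):
    """First factor of paragraph [00026]: the attenuation (in dB of
    reduction) needed so that the non-speech-to-speech power ratio does
    not exceed `max_ratio`, scaled by a factor monotonically related to
    the probability that the speech channel currently carries speech.
    """
    ratio = p_nonspeech / p_speech
    # dB by which the ratio exceeds the allowed limit (0 if within it)
    excess_db = max(0.0, 10.0 * math.log10(ratio / max_ratio))
    return excess_db * prob_speech   # dB of attenuation to apply

# Non-speech power 10x the allowed limit needs 10 dB of attenuation;
# this is halved when the speech channel is only 50% likely to carry
# speech.
att = power_ratio_attenuation(p_nonspeech=10.0, p_speech=1.0,
                              max_ratio=1.0, prob_speech=0.5)
```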
Typically, the adjusting step in these embodiments is (or includes) scaling each said attenuation value by a said speech-enhancement likelihood value to generate an adjusted attenuation value, where the speech-enhancement likelihood value is a factor monotonically related to one of: a probability that the non-speech channel is indicative of speech-reinforcing content (content that reinforces the intelligibility or other perceived quality of speech content determined by the multichannel signal), and an expected speech-enhancement value of the non-speech channel (for example, a measure of the probability that the non-speech channel is indicative of speech-reinforcing content, multiplied by a measure of the perceived quality improvement that speech-reinforcing content in the non-speech channel would provide to speech content determined by the multichannel signal).
[00027] In some embodiments in the third class, each attenuation value generated in step (a) is a first factor, indicative of an amount (for example, the minimum amount) of attenuation of the non-speech channel sufficient to cause the predicted intelligibility of speech determined by the speech channel, in the presence of content determined by the non-speech channel, to exceed a predetermined threshold value, scaled by a second factor monotonically related to the probability that the speech channel is indicative of speech. Preferably, the predicted intelligibility of speech determined by the speech channel in the presence of content determined by the non-speech channel is determined in accordance with a psychoacoustically based intelligibility prediction model.
Typically, the adjusting step in these embodiments is (or includes) scaling each said attenuation value by a said speech-enhancement likelihood value to generate an adjusted attenuation value, where the speech-enhancement likelihood value is a factor monotonically related to one of: a probability that the non-speech channel is indicative of speech-reinforcing content, and an expected speech-enhancement value of the non-speech channel.
[00028] In some embodiments in the third class, step (a) includes steps of generating each said attenuation value including determining a power spectrum (indicative of power as a function of frequency) of each of the speech channel and the non-speech channel, and performing a frequency-domain determination of the attenuation value in response to each said power spectrum. Preferably, the attenuation values so generated determine attenuation, as a function of frequency, to be applied to frequency components of the non-speech channel.
[00029] In a class of embodiments, the invention is a method and system for improving speech determined by a multichannel audio input signal. In some embodiments, the inventive system includes an analysis module (subsystem) configured to analyze the input multichannel signal to generate attenuation control values, and an attenuation subsystem. The attenuation subsystem is configured to apply ducking attenuation, driven by at least some of the attenuation control values, to each non-speech channel of the input signal to generate a filtered audio output signal. In some embodiments, the attenuation subsystem includes ducking circuitry (driven by at least some of the attenuation control values) coupled and configured to apply attenuation (ducking) to each non-speech channel of the input signal to generate the filtered audio output signal.
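The frequency-domain determination described in paragraph [00028] might be sketched as follows. The FFT-magnitude-squared power spectra and the per-bin power-ratio rule are illustrative assumptions; the text requires only that the attenuation be determined, per frequency, from the two power spectra:

```python
import numpy as np

def per_band_attenuation(speech, nonspeech, max_ratio=1.0, nfft=8):
    """Compute power spectra of the speech and non-speech channels and
    determine, per frequency bin, the attenuation (as a linear gain
    <= 1) that limits the non-speech-to-speech power ratio to
    `max_ratio` in that bin.
    """
    ps = np.abs(np.fft.rfft(speech, nfft)) ** 2 + 1e-12
    pn = np.abs(np.fft.rfft(nonspeech, nfft)) ** 2 + 1e-12
    # gain^2 * pn / ps <= max_ratio  ->  gain = min(1, sqrt(max_ratio*ps/pn))
    gain = np.minimum(1.0, np.sqrt(max_ratio * ps / pn))
    return gain  # per-bin gains to apply to the non-speech channel

speech = np.ones(8)     # all energy at DC
nonspeech = np.ones(8)  # identical spectrum, so no attenuation needed
g = per_band_attenuation(speech, nonspeech)
```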
The ducking circuitry is control-value driven in the sense that the attenuation it applies to the non-speech channels is determined by the current values of the control values.
[00030] In typical embodiments, the inventive system is or includes a general-purpose or special-purpose processor programmed with software (or firmware) and/or otherwise configured to perform an embodiment of the inventive method. In some embodiments, the inventive system is a general-purpose processor, coupled to receive input data indicative of the audio input signal and programmed (with appropriate software) to generate output data indicative of the audio output signal in response to the input data by performing an embodiment of the inventive method. In other embodiments, the inventive system is implemented by appropriately configuring (for example, by programming) a configurable audio digital signal processor (DSP). The audio DSP can be a conventional audio DSP that is configurable (for example, programmable by appropriate software or firmware, or otherwise configurable in response to control data) to perform any of a variety of operations on input audio. In operation, an audio DSP that has been configured to perform speech enhancement in accordance with the invention is coupled to receive the audio input signal, and the DSP typically performs a variety of operations on the input audio in addition to (as well as) speech enhancement. In accordance with various embodiments of the invention, an audio DSP is operable to perform an embodiment of the inventive method after being configured (for example, programmed) to generate an output audio signal in response to an input audio signal by performing the method on the audio input signal.
[00031] Aspects of the invention include a system configured (for example, programmed) to perform any embodiment of the inventive method, and a computer-readable medium (for example, a disc) that stores code for implementing any embodiment of the inventive method.
Brief Description of the Drawings
[00032] Figure 1 is a block diagram of an embodiment of the inventive system.
[00033] Figure 1A is a block diagram of another embodiment of the inventive system.
[00034] Figure 2 is a block diagram of another embodiment of the inventive system.
[00035] Figure 2A is a block diagram of another embodiment of the inventive system.
[00036] Figure 3 is a block diagram of another embodiment of the inventive system.
[00037] Figure 4 is a block diagram of an audio digital signal processor (DSP) that is an embodiment of the inventive system.
[00038] Figure 5 is a block diagram of a computer system, including a computer-readable storage medium 504 that stores computer code for programming the system to perform an embodiment of the inventive method.
Detailed Description of Preferred Embodiments
[00039] Many embodiments of the present invention are technologically possible. It will be apparent to those of ordinary skill in the art from the present description how to implement them. Embodiments of the inventive system, method, and medium will be described with reference to figures 1, 1A, 2, 2A, and 3-5.
[00040] The inventor has observed that some multichannel audio content has different, yet related, speech content in the speech channel and at least one non-speech channel. For example, multichannel audio recordings of some stage shows are mixed in such a way that "dry" speech (i.e., speech without noticeable reverberation) is placed in the speech channel (typically, the center channel, C, of the signal) and the same speech, but with a significant reverberation component ("wet" speech), is placed in the non-speech channels of the signal. In a typical scenario, the dry speech is the signal from the microphones that the stage performers hold close to their mouths and the wet speech is the signal from microphones placed in the audience. The wet speech is related to the dry speech in that it is the performance as heard by the audience at the venue, although it differs from the dry speech.
Typically, the wet speech is delayed relative to the dry speech, and has a different spectrum and different additive components (for example, audience noise and reverberation).
[00041] Depending on the relative levels of the dry and wet speech, it is possible for the wet speech component to mask the dry speech component to a degree at which ducking attenuation of the non-speech channels (for example, as in the method described in the aforementioned WO 2010/011377) undesirably attenuates the wet speech signal. Although the dry and wet speech components can be described as separate entities, perceptually a listener fuses the two and hears them as a single speech stream. Attenuating the wet speech component (for example, in a ducking circuit) can have the effect of reducing the perceived loudness of the fused speech stream along with collapsing its image width. The inventor has recognized that, for a multichannel audio signal having dry and wet speech components of the indicated type, it would often be both more perceptually pleasing and more conducive to speech intelligibility if the level of the wet speech components were not changed during speech-enhancement signal processing.
[00042] The invention is based in part on the recognition that when at least one non-speech channel of a multichannel audio signal includes content that reinforces the intelligibility (or other perceived quality) of speech content determined by the speech channel, filtering the signal by ducking the non-speech channel (for example, in accordance with the method of WO 2010/011377) can negatively affect the experience of listening to the reproduced filtered signal.
According to typical embodiments of the invention, attenuation (in a ducking circuit) of at least one non-speech channel of a multichannel audio signal is suspended or modified during times when the non-speech channel includes speech-reinforcing content (content that reinforces the intelligibility or another perceived quality of speech content determined by the signal's speech channel). At times when the non-speech channel does not include speech-reinforcing content (or does not include speech-reinforcing content that meets a predetermined criterion), the non-speech channel is attenuated normally (attenuation is not suspended or modified). [00043] A typical multichannel signal (having a speech channel) for which conventional ducking filtering is inadequate is one including at least one non-speech channel that carries speech signals substantially identical to speech signals in the speech channel. According to typical embodiments of the present invention, a sequence of speech-related characteristics of the speech channel is compared to a sequence of speech-related characteristics of the non-speech channel. A substantial similarity of the two characteristic sequences indicates that the non-speech channel (that is, the signal in the non-speech channel) contributes information useful for understanding the speech in the speech channel, and that attenuation of the non-speech channel should therefore be avoided. [00044] In order to appreciate the significance of examining the similarity between such sequences of speech-related characteristics instead of the signals themselves, it is important to recognize that the "dry" and "wet" speech content (determined by the speech and non-speech channels) is not identical; the signals indicative of the two types of speech content are typically temporally offset, have undergone different filtering processes, and have had different extraneous components added.
For this reason, a direct comparison between the two signals will yield a low similarity, regardless of whether the non-speech channel contributes speech signals that are the same as those in the speech channel (as in the case of dry and wet speech), unrelated speech signals (as in the case of two unrelated voices in the speech and non-speech channels [for example, a target conversation in the speech channel and background murmur in the non-speech channel]), or no speech signal at all (for example, when the non-speech channel carries music and effects). By basing the comparison on speech characteristics (as in preferred embodiments of the present invention), a level of abstraction is obtained that lessens the impact of irrelevant signal aspects, such as small amounts of delay, spectral differences, and added extraneous signals. Thus, preferred implementations of the invention typically generate at least two streams of speech characteristics: one representing the signal in the speech channel; and at least one representing the signal in a non-speech channel. [00045] A first embodiment (125) of the inventive system will be described with reference to figure 1. In response to a multichannel audio signal comprising a speech channel 101 (center channel C) and two non-speech channels 102 and 103 (left and right channels L and R), the system of figure 1 filters the non-speech channels to generate a filtered multichannel audio signal comprising speech channel 101 and filtered non-speech channels 118 and 119 (filtered left and right channels L' and R'). Alternatively, one or both of the non-speech channels 102 and 103 can be another type of non-speech channel of a multichannel audio signal (for example, rear left and/or rear right channels of a 5.1-channel audio signal), or can be a derived non-speech channel that is derived from (for example, is a combination of) any of many different subsets of the non-speech channels of a multichannel audio signal.
Alternatively, an embodiment of the inventive system can be implemented to filter only one non-speech channel, or more than two non-speech channels, of a multichannel audio signal. [00046] With reference again to figure 1, non-speech channels 102 and 103 are asserted to ducking amplifiers 117 and 116, respectively. In operation, ducking amplifier 116 is driven by a control signal S3 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S3) output from multiplication element 114, and ducking amplifier 117 is driven by a control signal S4 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S4) output from multiplication element 115. [00047] The power of each channel of the multichannel input signal is measured with a bank of power estimators (104, 105, and 106) and expressed on a logarithmic scale [dB]. These power estimators can implement a smoothing mechanism, such as a leaky integrator, so that the measured power level reflects the average power level over the duration of a sentence or an entire passage. The signal power level in the speech channel is subtracted from the power level in each of the non-speech channels (by subtraction elements 107 and 108) to give a measure of the power ratio between the two types of signals. The output of element 107 is a measure of the ratio of the power in non-speech channel 103 to the power in speech channel 101. The output of element 108 is a measure of the ratio of the power in non-speech channel 102 to the power in speech channel 101. [00048] Comparison circuit 109 determines, for each non-speech channel, the number of decibels (dB) by which the non-speech channel must be attenuated in order for its power level to remain at least θ dB below the signal power level in the speech channel (where the symbol θ denotes a predetermined threshold value).
In one implementation of circuit 109, addition element 120 adds the threshold value θ (stored in element 110, which can be a register) to the power level difference (or "margin") between non-speech channel 103 and speech channel 101, and addition element 121 adds the threshold value θ to the power level difference between non-speech channel 102 and speech channel 101. Elements 111-1 and 112-1 invert the sign of the output signals of addition elements 120 and 121, respectively. This sign inversion converts attenuation values into gain values. Elements 111 and 112 limit each result to be equal to or less than zero (the output of element 111-1 is asserted to limiter 111, and the output of element 112-1 is asserted to limiter 112). The current value C1 of the output of limiter 111 determines the gain (negated attenuation) in dB that must be applied to non-speech channel 103 to keep its power level θ dB below the power level of speech channel 101 (at the relevant time, or in the relevant time window, of the multichannel input signal). The current value C2 of the output of limiter 112 determines the gain (negated attenuation) in dB that must be applied to non-speech channel 102 to keep its power level θ dB below the power level of speech channel 101 (at the relevant time, or in the relevant time window, of the multichannel input signal). A typical suitable value for θ is 15 dB. [00049] Because there is a one-to-one relationship between a measure expressed on a logarithmic scale (dB) and the same measure expressed on a linear scale, a circuit (or a suitably programmed or otherwise configured processor) that is equivalent to elements 104, 105, 106, 107, 108, and 109 of figure 1 can be constructed in which the powers, gains, and all thresholds are expressed on a linear scale. In such an implementation, every level difference is replaced by a ratio of linear measures.
Alternative implementations can replace the power measure with measures that are related to signal strength, such as the absolute value of the signal. [00050] Signal C1, output from limiter 111, is a raw attenuation control signal for non-speech channel 103 (a gain control signal for ducking amplifier 116) that could be asserted directly to amplifier 116 to control the ducking attenuation of non-speech channel 103. Signal C2, output from limiter 112, is a raw attenuation control signal for non-speech channel 102 (a gain control signal for ducking amplifier 117) that could be asserted directly to amplifier 117 to control the ducking attenuation of non-speech channel 102. [00051] According to the invention, however, raw attenuation control signals C1 and C2 are scaled in multiplication elements 114 and 115 to generate gain control signals S3 and S4 that control the ducking attenuation of the non-speech channels via amplifiers 116 and 117. Signal C1 is scaled in response to a sequence of attenuation control values S1, and signal C2 is scaled in response to a sequence of attenuation control values S2. Each control value S1 is asserted from the output of processing element 134 (to be described below) to one input of multiplication element 114, and signal C1 (and thus each "raw" gain control value C1 so determined) is asserted from limiter 111 to the other input of element 114. Element 114 scales the current value C1 in response to the current value S1 by multiplying these values together to generate the current value S3, which is asserted to amplifier 116. Each control value S2 is asserted from the output of processing element 135 (to be described below) to one input of multiplication element 115, and signal C2 (and thus each "raw" gain control value C2 so determined) is asserted from limiter 112 to the other input of element 115.
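As an illustrative sketch (not part of the claimed embodiments), the behavior of elements 104-112 described above — smoothed power estimation in dB, margin computation, threshold addition, sign inversion, and limiting — might be expressed as follows; the smoothing constant `alpha` and block-based processing are assumptions for illustration:

```python
import numpy as np

def block_power_db(block, prev_db, alpha=0.99):
    """Leaky-integrator power estimate in dB (elements 104-106).
    alpha is a hypothetical smoothing constant."""
    p = np.mean(np.square(block)) + 1e-12   # mean power of this block (guarded against log(0))
    p_db = 10.0 * np.log10(p)
    return alpha * prev_db + (1.0 - alpha) * p_db

def raw_ducking_gain_db(speech_db, nonspeech_db, theta_db=15.0):
    """Raw ducking gain C for one non-speech channel (elements 107/108 and 109-112).
    Keeps the non-speech channel at least theta_db below the speech channel."""
    margin = nonspeech_db - speech_db       # output of the subtraction element
    gain = -(margin + theta_db)             # add threshold θ, then invert the sign
    return min(gain, 0.0)                   # limiter: gain never exceeds 0 dB
```

For example, if the non-speech channel is only 5 dB below the speech channel, `raw_ducking_gain_db(-20.0, -25.0)` returns `-10.0`, i.e., a further 10 dB of attenuation is called for; if the non-speech channel is already 20 dB down, the limiter clamps the result to 0 dB (no attenuation).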
Element 115 scales the current value C2 in response to the current value S2 by multiplying these values together to generate the current value S4, which is asserted to amplifier 117. [00052] Control values S1 and S2 are generated according to the invention as follows. In speech probability processing elements 130, 131, and 132, a speech probability signal (the signals P, Q, and T, respectively, of figure 1) is generated for each channel of the multichannel input signal. Speech probability signal P is indicative of a sequence of speech probability values for non-speech channel 102; speech probability signal Q is indicative of a sequence of speech probability values for speech channel 101; and speech probability signal T is indicative of a sequence of speech probability values for non-speech channel 103. [00053] Speech probability signal Q is monotonically related to the probability that the signal in the speech channel is in fact indicative of speech. Speech probability signal P is monotonically related to the probability that the signal in non-speech channel 102 is speech, and speech probability signal T is monotonically related to the probability that the signal in non-speech channel 103 is speech. Processors 130, 131, and 132 (which are typically, but not necessarily in all embodiments, identical to each other) can implement any of several methods for automatically determining the probability that the input signal asserted to them is indicative of speech.
In one embodiment, speech probability processors 130, 131, and 132 are identical to each other: processor 130 generates signal P (from the information in non-speech channel 102) in such a way that signal P is indicative of a sequence of speech probability values, each monotonically related to the probability that the signal in channel 102 at a different time (or time window) is speech; processor 131 generates signal Q (from the information in channel 101) in such a way that signal Q is indicative of a sequence of speech probability values, each monotonically related to the probability that the signal in channel 101 at a different time (or time window) is speech; and processor 132 generates signal T (from the information in non-speech channel 103) in such a way that signal T is indicative of a sequence of speech probability values, each monotonically related to the probability that the signal in channel 103 at a different time (or time window) is speech. Each of processors 130, 131, and 132 can do so by implementing (on the relevant one of channels 102, 101, and 103) the mechanism described by Robinson and Vinton in "Automated Speech/Other Discrimination for Monitoring Loudness" (Audio Engineering Society, Convention Paper 6437, 118th Convention, May 2005). Alternatively, signal P can be created manually, for example by the content creator, and transmitted alongside the audio signal of channel 102 to the end user, and processor 130 can simply extract such a previously created signal P from channel 102 (or processor 130 can be eliminated and the previously created signal P asserted directly to processor 134).
Similarly, signal Q can be created manually and transmitted alongside the audio signal of channel 101, and processor 131 can simply extract such a previously created signal Q from channel 101 (or processor 131 can be eliminated and the previously created signal Q asserted directly to processors 134 and 135); and signal T can be created manually and transmitted alongside the audio signal of channel 103, and processor 132 can simply extract such a previously created signal T from channel 103 (or processor 132 can be eliminated and the previously created signal T asserted directly to processor 135). [00054] In a typical implementation of processor 134, the speech probability values determined by signals P and Q are compared pairwise to determine the difference between the current values of signals P and Q at each of a sequence of times. In a typical implementation of processor 135, the speech probability values determined by signals T and Q are compared pairwise to determine the difference between the current values of signals T and Q at each of a sequence of times. As a result, each of processors 134 and 135 generates a time sequence of difference values for a pair of speech probability signals. [00055] Processors 134 and 135 are preferably implemented to smooth each such sequence of difference values with a time average, and optionally to scale each resulting sequence of average difference values. Scaling of the sequences of average difference values may be necessary so that the scaled average values output from processors 134 and 135 lie in a range such that the outputs of multiplication elements 114 and 115 are useful for driving ducking amplifiers 116 and 117.
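A minimal sketch of the comparison performed by processor 134 (or 135) might look as follows. The smoothing constant `alpha`, the scale factor, and the sign convention of the difference (larger output when the non-speech channel's speech probability is dissimilar to the speech channel's) are assumptions; the specification leaves these open:

```python
import numpy as np

def attenuation_control_values(p, q, alpha=0.95, scale=1.0):
    """Sketch of processor 134 (or 135): compare the speech probability
    sequence of a non-speech channel (p) with that of the speech channel (q)
    pairwise, smooth the differences over time with a leaky integrator,
    and scale the result. alpha and scale are hypothetical constants."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    diffs = q - p                       # pairwise difference per time window
    smoothed = np.empty_like(diffs)
    state = 0.0
    for i, d in enumerate(diffs):       # time average (smoothing)
        state = alpha * state + (1.0 - alpha) * d
        smoothed[i] = state
    return scale * smoothed             # scaled average difference values (S1 or S2)
```

When both channels carry equally speech-like content, the differences are near zero, the control values stay small, and the subsequent multiplication leaves the non-speech channel essentially unattenuated.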
[00056] In a typical implementation, the signal S1 output from processor 134 is a sequence of scaled average difference values (each of these being a scaled average of the difference between current values of signals P and Q in a different time window). Signal S1 is a ducking gain control signal for non-speech channel 102, and is used to scale the independently generated raw ducking gain control signal C1 for non-speech channel 102. Similarly, in a typical implementation, the signal S2 output from processor 135 is a sequence of scaled average difference values (each of these being a scaled average of the difference between current values of signals T and Q in a different time window). Signal S2 is a ducking gain control signal for non-speech channel 103, and is used to scale the independently generated raw ducking gain control signal C2 for non-speech channel 103. [00057] Scaling of the raw ducking gain control signal C1 in response to ducking gain control signal S1 according to the invention can be carried out by multiplying (in element 114) each raw gain control value of signal C1 by a corresponding one of the scaled average difference values of signal S1, to generate signal S3. Scaling of the raw ducking gain control signal C2 in response to ducking gain control signal S2 according to the invention can be carried out by multiplying (in element 115) each raw gain control value of signal C2 by a corresponding one of the scaled average difference values of signal S2, to generate signal S4. [00058] Another embodiment (125') of the inventive system will be described with reference to figure 1A.
In response to a multichannel audio signal comprising a speech channel 101 (center channel C) and two non-speech channels 102 and 103 (left and right channels L and R), the system of figure 1A filters the non-speech channels to generate a filtered multichannel audio signal comprising speech channel 101 and filtered non-speech channels 118 and 119 (filtered left and right channels L' and R'). [00059] In the system of figure 1A (as in the system of figure 1), non-speech channels 102 and 103 are asserted to ducking amplifiers 117 and 116, respectively. In operation, ducking amplifier 117 is driven by a control signal S4 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S4) output from multiplication element 115, and ducking amplifier 116 is driven by a control signal S3 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S3) output from multiplication element 114. Elements 104, 105, 106, 107, 108, 109 (including elements 110, 120, 121, 111-1, 112-1, 111, and 112), 114, 115, 130, 131, 132, 134, and 135 of figure 1A are identical to (and function identically to) the identically numbered elements of figure 1, and the above description of them will not be repeated. [00060] The system of figure 1A differs from that of figure 1 in that a control signal V1 (asserted at the output of multiplier 214) is used to scale control signal C1 (asserted at the output of limiting element 111) instead of control signal S1 (asserted at the output of processor 134), and a control signal V2 (asserted at the output of multiplier 215) is used to scale control signal C2 (asserted at the output of limiting element 112) instead of control signal S2 (asserted at the output of processor 135).
In figure 1A, scaling of the raw ducking gain control signal C1 in response to the sequence of attenuation control values V1 according to the invention is performed by multiplying (in element 114) each raw gain control value of signal C1 by a corresponding one of the attenuation control values V1, to generate signal S3; and scaling of the raw ducking gain control signal C2 in response to the sequence of attenuation control values V2 according to the invention is performed by multiplying (in element 115) each raw gain control value of signal C2 by a corresponding one of the attenuation control values V2, to generate signal S4. [00061] To generate the sequence of attenuation control values V1, signal Q (asserted at the output of processor 131) is asserted to one input of multiplier 214, and control signal S1 (asserted at the output of processor 134) is asserted to the other input of multiplier 214. The output of multiplier 214 is the sequence of attenuation control values V1. Each of the attenuation control values V1 is one of the speech probability values determined using signal Q, scaled by a corresponding one of the attenuation control values S1. [00062] Similarly, to generate the sequence of attenuation control values V2, signal Q (asserted at the output of processor 131) is asserted to one input of multiplier 215, and control signal S2 (asserted at the output of processor 135) is asserted to the other input of multiplier 215. The output of multiplier 215 is the sequence of attenuation control values V2. Each of the attenuation control values V2 is one of the speech probability values determined using signal Q, scaled by a corresponding one of the attenuation control values S2.
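The figure-1A modification can be summarized in a few lines: the speech probability Q of the speech channel gates the scaling, so the raw ducking gain is only applied in proportion to how likely the speech channel is to actually contain speech. A minimal sketch, with all values assumed to be per-time-window scalars:

```python
def scaled_ducking_gain_db(c_db, q, s):
    """Sketch of the figure-1A scaling: multiplier 214 (or 215) forms
    V = Q * S, and element 114 (or 115) forms the ducked gain C * V.

    c_db: raw ducking gain in dB (C1 or C2, <= 0)
    q:    speech probability of the speech channel (signal Q)
    s:    attenuation control value (S1 or S2)"""
    v = q * s           # attenuation control value V1 (or V2)
    return c_db * v     # scaled ducking gain in dB (S3 or S4)
```

For example, a raw gain of -12 dB with q = 0.5 and s = 1.0 yields -6 dB of attenuation, while q = 0 (no speech detected in the speech channel) disables ducking entirely.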
[00063] The system of figure 1 (or that of figure 1A) can be implemented in software by a processor (for example, processor 501 of figure 5) that has been programmed to implement the described operations of the system of figure 1 (or 1A). Alternatively, it can be implemented in hardware with circuit elements connected as shown in figure 1 (or 1A). [00064] In variations on the embodiment of figure 1 (or that of figure 1A), scaling of the raw ducking gain control signal C1 in response to ducking gain control signal S1 (or V1) according to the invention (to generate a ducking gain control signal for driving amplifier 116) can be performed in a non-linear manner. For example, such non-linear scaling can generate a ducking gain control signal (replacing signal S3) that causes no ducking by amplifier 116 (i.e., application of unity gain by amplifier 116, and thus no attenuation of channel 103) when the current value of signal S1 (or V1) is below a threshold, and that makes the current value of the ducking gain control signal (replacing signal S3) equal to the current value of signal C1 (so that signal S1 (or V1) does not change the current value of C1) when the current value of signal S1 exceeds the threshold. Alternatively, other linear and non-linear scalings of signal C1 (in response to the inventive ducking gain control signal S1 or V1) can be performed to generate a ducking gain control signal for driving amplifier 116.
For example, such scaling of signal C1 can generate a ducking gain control signal (replacing signal S3) that causes no ducking by amplifier 116 (i.e., application of unity gain by amplifier 116) when the current value of signal S1 (or V1) is below a threshold, and that makes the current value of the ducking gain control signal (replacing signal S3) equal to the current value of signal C1 multiplied by the current value of signal S1 or V1 (or some other value determined from this product) when the current value of signal S1 (or V1) exceeds the threshold. [00065] Similarly, in the variations on the embodiment of figure 1 (or that of figure 1A), scaling of the raw ducking gain control signal C2 in response to ducking gain control signal S2 (or V2) according to the invention (to generate a ducking gain control signal for driving amplifier 117) can be performed in a non-linear manner. For example, such non-linear scaling can generate a ducking gain control signal (replacing signal S4) that causes no ducking by amplifier 117 (i.e., application of unity gain by amplifier 117, and thus no attenuation of channel 102) when the current value of signal S2 (or V2) is below a threshold, and that makes the current value of the ducking gain control signal (replacing signal S4) equal to the current value of signal C2 (so that signal S2 or V2 does not change the current value of C2) when the current value of signal S2 (or V2) exceeds the threshold. Alternatively, other linear and non-linear scalings of signal C2 (in response to the inventive ducking gain control signal S2 or V2) can be performed to generate a ducking gain control signal for driving amplifier 117.
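The thresholded (gated) scaling variants described above for signals C1 and C2 might be sketched as follows; the threshold value of 0.5 is a hypothetical illustration, not a value taken from the specification:

```python
def gated_scaling(c_db, s, threshold=0.5, multiply=False):
    """Non-linear (thresholded) scaling of a raw ducking gain value.

    c_db: raw ducking gain in dB (C1 or C2, always <= 0)
    s:    attenuation control value (S1/S2 or V1/V2)
    Below the threshold, ducking is disabled (unity gain, 0 dB).
    Above it, either pass c_db through unchanged (first variant),
    or multiply it by s (second variant)."""
    if s < threshold:
        return 0.0                          # unity gain: no attenuation
    return c_db * s if multiply else c_db   # pass-through or product variant
```

Here `gated_scaling(-10.0, 0.2)` yields 0 dB (ducking suspended), `gated_scaling(-10.0, 0.8)` passes the raw -10 dB through, and `gated_scaling(-10.0, 0.8, multiply=True)` yields -8 dB.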
For example, such scaling of signal C2 can generate a ducking gain control signal (replacing signal S4) that causes no ducking by amplifier 117 (i.e., application of unity gain by amplifier 117) when the current value of signal S2 (or V2) is below a threshold, and that makes the current value of the ducking gain control signal (replacing signal S4) equal to the current value of signal C2 multiplied by the current value of signal S2 or V2 (or some other value determined from this product) when the current value of signal S2 (or V2) exceeds the threshold. [00066] Another embodiment (225) of the inventive system will be described with reference to figure 2. In response to a multichannel audio signal comprising a speech channel 101 (center channel C) and two non-speech channels 102 and 103 (left and right channels L and R), the system of figure 2 filters the non-speech channels to generate a filtered multichannel audio signal comprising speech channel 101 and filtered non-speech channels 118 and 119 (filtered left and right channels L' and R'). [00067] In the system of figure 2 (as in the system of figure 1), non-speech channels 102 and 103 are asserted to ducking amplifiers 117 and 116, respectively. In operation, ducking amplifier 117 is driven by a control signal S6 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S6) output from multiplication element 115, and ducking amplifier 116 is driven by a control signal S5 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S5) output from multiplication element 114. Elements 114, 115, 130, 131, 132, 134, and 135 of figure 2 are identical to (and function identically to) the identically numbered elements of figure 1, and the above description of them will not be repeated.
[00068] The system of figure 2 measures the signal power in each of channels 101, 102, and 103 with a bank of power estimators 201, 202, and 203. Unlike their counterparts in figure 1, each of power estimators 201, 202, and 203 measures the distribution of signal power across frequency (i.e., the power in each of a set of frequency bands of the relevant channel), resulting in a power spectrum instead of a single number for each channel. The spectral resolution of each power spectrum ideally matches the spectral resolution of the intelligibility prediction model implemented by elements 205 and 206 (discussed below). [00069] The power spectra are fed into comparison circuit 204. The purpose of circuit 204 is to determine the attenuation to be applied to each non-speech channel to ensure that the signal in the non-speech channel does not reduce the intelligibility of the signal in the speech channel below a predetermined criterion. This functionality is achieved by employing intelligibility prediction circuits (205 and 206) that predict speech intelligibility from the power spectrum of the speech channel signal (201) and of the non-speech channel signals (202 and 203). Intelligibility prediction circuits 205 and 206 can implement any suitable intelligibility prediction model, according to design choices and trade-offs. Examples are the Speech Intelligibility Index as specified in ANSI S3.5-1997 ("Methods for Calculation of the Speech Intelligibility Index") and the speech recognition sensitivity model of Muesch and Buus ("Using statistical decision theory to predict speech intelligibility. I. Model structure", Journal of the Acoustical Society of America, 2001, Vol. 109, pp. 2896-2909). It is clear that the output of the intelligibility prediction model is meaningless when the signal in the speech channel is something other than speech.
Despite this, in what follows the output of the intelligibility prediction model will be referred to as the predicted speech intelligibility. This potential flaw is accounted for in subsequent processing through the scaling of the gain values output from comparison circuit 204 by parameters S1 and S2, each of which is related to the probability that the signal in the speech channel is indicative of speech. [00070] Intelligibility prediction models have in common the fact that they predict either increased or unchanged speech intelligibility as the result of reducing the level of the non-speech signal. Continuing in the process flow of figure 2, comparison circuits 207 and 208 compare the predicted intelligibility with a predetermined criterion value. If element 205 determines that the level of non-speech channel 103 is so low that the predicted intelligibility exceeds the criterion, a gain parameter, which is initialized at 0 dB, is retrieved from circuit 209 and supplied to circuit 211 as the output C3 of comparison circuit 204. If element 206 determines that the level of non-speech channel 102 is so low that the predicted intelligibility exceeds the criterion, a gain parameter, which is initialized at 0 dB, is retrieved from circuit 210 and supplied to circuit 212 as the output C4 of comparison circuit 204. If element 205 or 206 determines that the criterion is not met, the gain parameter (in the relevant one of elements 209 and 210) is decreased by a fixed amount and the intelligibility prediction is repeated. A suitable step size for decreasing the gain is 1 dB. The iteration just described continues until the predicted intelligibility meets or exceeds the criterion value. [00071] It is clear that the signal in the speech channel may be such that the intelligibility criterion cannot be met even in the absence of a signal in the non-speech channel.
An example of such a situation is a speech signal of very low level or with severely restricted bandwidth. If this happens, a point will be reached where any further reduction of the gain applied to the non-speech channel does not affect the predicted speech intelligibility, and the criterion is never met. In such a condition, the loop formed by elements 205, 207, and 209 (or elements 206, 208, and 210) continues indefinitely, and additional logic (not shown) can be applied to terminate the loop. A particularly simple example of such logic is to count the number of iterations and exit the loop once a predetermined number of iterations has been exceeded. [00072] Scaling of the raw ducking gain control signal C3 in response to ducking gain control signal S1 according to the invention can be carried out by multiplying (in element 114) each raw gain control value of signal C3 by a corresponding one of the scaled average difference values of signal S1, to generate signal S5. Scaling of the raw ducking gain control signal C4 in response to ducking gain control signal S2 according to the invention can be carried out by multiplying (in element 115) each raw gain control value of signal C4 by a corresponding one of the scaled average difference values of signal S2, to generate signal S6. [00073] The system of figure 2 can be implemented in software by a processor (for example, processor 501 of figure 5) that has been programmed to implement the described operations of the system of figure 2. Alternatively, it can be implemented in hardware with circuit elements connected as shown in figure 2. [00074] In variations on the embodiment of figure 2, scaling of the raw ducking gain control signal C3 in response to ducking gain control signal S1 according to the invention (to generate a ducking gain control signal for driving amplifier 116) can be performed in a non-linear manner.
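Returning to the iterative search of paragraphs [00070] and [00071]: a minimal sketch, including the iteration-count guard against a criterion that can never be met, might look as follows. The `predict_intelligibility` function is a stand-in for any intelligibility model (such as the Speech Intelligibility Index), and the criterion value and iteration cap are hypothetical:

```python
def find_ducking_gain_db(predict_intelligibility, speech_spec, nonspeech_spec,
                         criterion=0.5, step_db=1.0, max_iters=100):
    """Sketch of the loop formed by elements 205, 207, and 209 (or 206,
    208, and 210): start at 0 dB and reduce the non-speech gain in fixed
    steps until the predicted intelligibility meets the criterion.

    predict_intelligibility(speech_spec, nonspeech_spec, gain_db) stands
    in for the intelligibility prediction model."""
    gain_db = 0.0
    for _ in range(max_iters):          # guard: exit after max_iters iterations
        if predict_intelligibility(speech_spec, nonspeech_spec, gain_db) >= criterion:
            break                       # criterion met or exceeded
        gain_db -= step_db              # duck the non-speech channel a further step
    return gain_db
```

With a toy monotone predictor such as `lambda s, n, g: 0.2 - g / 20.0`, the loop settles at -6 dB, the first gain at which the prediction reaches the criterion of 0.5; with a predictor stuck below the criterion, the iteration cap terminates the loop.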
For example, such non-linear scaling can generate a ducking gain control signal (replacing signal S5) that causes no ducking by amplifier 116 (i.e., application of unity gain by amplifier 116, and thus no attenuation of channel 103) when the current value of signal S1 is below a threshold, and that makes the current value of the ducking gain control signal (replacing signal S5) equal to the current value of signal C3 (so that signal S1 does not change the current value of C3) when the current value of signal S1 exceeds the threshold. Alternatively, other linear and non-linear scalings of signal C3 (in response to the inventive ducking gain control signal S1) can be performed to generate a ducking gain control signal for driving amplifier 116. For example, such scaling of signal C3 can generate a ducking gain control signal (replacing signal S5) that causes no ducking by amplifier 116 (i.e., application of unity gain by amplifier 116) when the current value of signal S1 is below a threshold, and that makes the current value of the ducking gain control signal (replacing signal S5) equal to the current value of signal C3 multiplied by the current value of signal S1 (or some other value determined from this product) when the current value of signal S1 exceeds the threshold. [00075] Similarly, in variations on the embodiment of figure 2, scaling of the raw ducking gain control signal C4 in response to ducking gain control signal S2 according to the invention (to generate a ducking gain control signal for driving amplifier 117) can be performed in a non-linear manner.
For example, such non-linear scaling can generate an amplification gain control signal (replacing signal S6) that causes no attenuation by amplifier 117 (i.e., amplifier 117 applies unity gain, and thus no attenuation of channel 102) when the current value of signal S2 is below a threshold, and makes the current value of the amplification gain control signal (replacing signal S6) equal to the current value of signal C4 (so that signal S2 does not change the current value of C4) when the current value of signal S2 exceeds the threshold. Alternatively, other linear or non-linear scaling of signal C4 (in response to the inventive attenuation control signal S2) can be performed to generate an amplification gain control signal to drive amplifier 117. For example, such scaling of signal C4 can generate an amplification gain control signal (replacing signal S6) that causes no attenuation by amplifier 117 (i.e., amplifier 117 applies unity gain) when the current value of signal S2 is below a threshold, and makes the current value of the amplification gain control signal (replacing signal S6) equal to the current value of signal C4 multiplied by the current value of signal S2 (or some other value determined from this product) when the current value of signal S2 exceeds the threshold. [00076] Another embodiment (225') of the inventive system will be described with reference to figure 2A. In response to a multichannel audio signal comprising a speech channel 101 (center channel C) and two speechless channels 102 and 103 (left and right channels L and R), the system of figure 2A filters the speechless channels to generate a filtered multichannel audio signal comprising speech channel 101 and filtered speechless channels 118 and 119 (filtered left and right channels L' and R').
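The threshold-gated scaling described in paragraphs [00074] and [00075] can be sketched as follows. This is an illustrative reading, with all names hypothetical: a control value below the threshold forces unity gain (no attenuation), and a value above the threshold either passes the raw gain through or multiplies it by the control value.

```python
def scale_gain_control(raw_gain, control, threshold, multiply=False):
    """Threshold-gated scaling of one raw amplification gain control
    value (e.g., a value of C3) by one attenuation control value
    (e.g., a value of S1). Gains are linear; 1.0 means unity gain."""
    if control < threshold:
        # Below the threshold the amplifier applies unity gain,
        # i.e., no attenuation of the speechless channel.
        return 1.0
    # Above the threshold, pass the raw gain through unchanged, or
    # (second variant in the text) scale it by the control value.
    return raw_gain * control if multiply else raw_gain
```

With `multiply=False` this realizes the first variant (S1 gates but does not reshape C3); with `multiply=True` it realizes the second variant, in which the output is determined from the product of C3 and S1.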
[00077] In the system of figure 2A (as in the system of figure 2), speechless channels 102 and 103 are asserted to amplification amplifiers 117 and 116, respectively. In operation, amplification amplifier 117 is driven by a control signal S6 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S6) output from multiplication element 115, and amplification amplifier 116 is driven by control signal S5 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S5) output from multiplication element 114. Elements 201, 202, 203, 204, 114, 115, 130, and 134 of figure 2A are identical to (and function identically to) the identically numbered elements of figure 2, and their description above will not be repeated. [00078] The system of figure 2A differs from that of figure 2 in two main respects. First, the system is configured to generate (i.e., derive) a "derived" speechless channel (L + R) from the two individual speechless channels (102 and 103) of an audio input signal, and to determine attenuation control values (V3) in response to this derived speechless channel. In contrast, the system of figure 2 determines attenuation control values S1 in response to one speechless channel (channel 102) of an input audio signal and determines attenuation control values S2 in response to the other speechless channel (channel 103) of the input audio signal. In operation, the system of figure 2A attenuates each speechless channel of the input audio signal (each of channels 102 and 103) in response to the same set of attenuation control values V3. In operation, the system of figure 2 attenuates speechless channel 102 of an input audio signal in response to the attenuation control values S2, and attenuates speechless channel 103 of the input audio signal in response to a different set of attenuation control values (the S1 values).
[00079] The system of figure 2A includes addition element 129, whose inputs are coupled to receive speechless channels 102 and 103 of the audio input signal. The derived speechless channel (L + R) is asserted at the output of element 129. Speech probability processing element 130 asserts speech probability signal P in response to the derived speechless channel L + R from element 129. In figure 2A, signal P is indicative of a sequence of speech probability values for the derived speechless channel. Typically, the speech probability signal P of figure 2A is a value monotonically related to the probability that the signal in the derived speechless channel is speech. Speech probability signal Q (generated by processor 131) of figure 2A is identical to the speech probability signal Q of figure 2 described above. [00080] A second major respect in which the system of figure 2A differs from that of figure 2 is as follows. In figure 2A, the control signal V3 (asserted at the output of multiplier 214) is used (instead of the control signal S1 asserted at the output of processor 134) to scale the raw amplification gain control signal C3 (asserted at the output of element 211), and the control signal V3 is also used (instead of the control signal S2 asserted at the output of processor 135 of figure 2) to scale the raw amplification gain control signal C4 (asserted at the output of element 212).
In figure 2A, scaling of the raw amplification gain control signal C3 in response to the sequence of attenuation control values indicated by signal V3 (referred to as attenuation control values V3) according to the invention is performed by multiplying (in element 114) each raw gain control value of signal C3 by a corresponding one of the attenuation control values V3, to generate signal S5, and scaling of the raw amplification gain control signal C4 in response to the sequence of attenuation control values V3 according to the invention is performed by multiplying (in element 115) each raw gain control value of signal C4 by a corresponding one of the attenuation control values V3, to generate signal S6. [00081] In operation, the system of figure 2A generates the sequence of attenuation control values V3 as follows. The speech probability signal Q (asserted at the output of processor 131 of figure 2A) is asserted to one input of multiplier 214, and the attenuation control signal S1 (asserted at the output of processor 134) is asserted to the other input of multiplier 214. The output of multiplier 214 is the sequence of attenuation control values V3. Each of the attenuation control values V3 is one of the speech probability values determined by signal Q, scaled by a corresponding one of the attenuation control values S1. [00082] Another embodiment (325) of the inventive system will be described with reference to figure 3. In response to a multichannel audio signal comprising a speech channel 101 (center channel C) and two speechless channels 102 and 103 (left and right channels L and R), the system of figure 3 filters the speechless channels to generate a filtered multichannel audio output signal comprising speech channel 101 and filtered speechless channels 118 and 119 (filtered left and right channels L' and R').
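The control path of figure 2A described in paragraphs [00080] and [00081] amounts to forming V3 as the elementwise product of the speech probability values Q and the attenuation control values S1, and then applying the same V3 sequence to both raw gain control signals. A minimal sketch, with all function and variable names hypothetical:

```python
def derive_v3(q_values, s1_values):
    """Multiplier 214: each speech probability value (signal Q) is
    scaled by the corresponding attenuation control value (signal S1),
    yielding the sequence of attenuation control values V3."""
    return [q * s for q, s in zip(q_values, s1_values)]

def scale_raw_gains(c3, c4, v3):
    """Elements 114 and 115: the same V3 sequence scales both raw
    amplification gain control signals, yielding S5 and S6."""
    s5 = [c * v for c, v in zip(c3, v3)]
    s6 = [c * v for c, v in zip(c4, v3)]
    return s5, s6
```

Note the contrast with figure 2: there, C3 and C4 are scaled by two different sequences (S1 and S2); here a single sequence V3 drives both speechless channels.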
[00083] In the system of figure 3, each of the signals in the three input channels is divided into its spectral components by filter bank 301 (for channel 101), filter bank 302 (for channel 102), and filter bank 303 (for channel 103). The spectral analysis can be achieved with an N-band time-domain filter bank for each channel. According to one embodiment, each filter bank partitions the frequency range into 1/3-octave bands, or into bands that resemble the filtering presumed to occur inside the human ear. The fact that the signal output from each filter bank consists of N subsignals is illustrated by the use of heavy lines. [00084] In the system of figure 3, the frequency components of the signals in speechless channels 102 and 103 are asserted to amplification amplifiers 117 and 116, respectively. In operation, amplification amplifier 117 is driven by a control signal S8 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S8) output from multiplication element 115', and amplification amplifier 116 is driven by control signal S7 (which is indicative of a sequence of control values, and is thus also referred to as control value sequence S7) output from multiplication element 114'. Elements 130, 131, 132, 134, and 135 of figure 3 are identical to (and function identically to) the identically numbered elements of figure 1, and their description above will not be repeated. [00085] The processing of figure 3 can be recognized as a side-branch process. Following first the signal path shown in figure 3, the N subsignals generated in bank 302 for speechless channel 102 are each scaled by a member of a set of N gain values by amplification amplifier 117, and the N subsignals generated in bank 303 for speechless channel 103 are each scaled by a member of a set of N gain values by amplification amplifier 116. The derivation of these gain values will be described later.
Then, the scaled subsignals are recombined into a single audio signal. This can be done by simple addition (by summation circuit 313 for channel 102 and by summation circuit 314 for channel 103). Alternatively, a synthesis filter bank matched to the analysis filter bank can be used. This process results in the modified speechless signal R' (118) and the modified speechless signal L' (119). [00086] Now describing the side-branch path of the process of figure 3, each filter bank output is made available to a corresponding bank of N power estimators (304, 305, and 306). The resulting power spectra for channels 101 and 102 serve as inputs to optimization circuit 307, which outputs an N-dimensional gain vector C6. The resulting power spectra for channels 101 and 103 serve as inputs to optimization circuit 308, which outputs an N-dimensional gain vector C5. The optimization employs both an intelligibility prediction circuit (309 and 310) and a loudness calculation circuit (311 and 312) to find the gain vector that maximizes the loudness of each speechless channel while maintaining a predetermined level of predicted intelligibility of the speech signal in channel 101. Suitable models for predicting intelligibility have been discussed with reference to figure 2. The loudness calculation circuits 311 and 312 can implement any suitable loudness prediction model, according to design choices and trade-offs. Examples of suitable models are American National Standard ANSI S3.4-2007, "Procedure for the Computation of Loudness of Steady Sounds", and German Standard DIN 45631, "Calculation of loudness level and loudness from the sound spectrum". [00087] Depending on the computational resources available and the restrictions imposed, the form and complexity of the optimization circuits (307, 308) can vary greatly.
According to an iterative embodiment, a constrained multidimensional optimization of N free parameters is used. Each parameter represents the gain applied to one of the frequency bands of the speechless channel. Standard techniques, such as following the steepest gradient in the N-dimensional search space, can be applied to find the maximum. In another embodiment, a computationally less demanding approach restricts the gain-vs-frequency functions to be members of a small set of possible gain-vs-frequency functions, such as a set of different spectral tilts or shelf filters. With this additional constraint, the optimization problem can be reduced to a small number of one-dimensional optimizations. In a further embodiment, an exhaustive search is made over a very small set of possible gain functions. This latter approach may be particularly desirable in real-time applications, where a constant computational load and a known search speed are desirable. [00088] Those of ordinary skill in the art will easily recognize additional constraints that may be imposed on the optimization according to further embodiments of the present invention. One example is limiting the loudness of the modified speechless channel to be no greater than its loudness before the modification. Another example is imposing a limit on the difference in gain between adjacent frequency bands, in order to limit the potential for temporal aliasing in the filter bank reconstruction (313, 314) or to reduce the possibility of objectionable timbre changes. The desirable constraints depend both on the technical implementation of the filter bank and on the chosen trade-off between intelligibility improvement and timbre modification. For clarity of illustration, these constraints are omitted from figure 3.
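The exhaustive-search variant of optimization circuits 307 and 308, described in paragraphs [00086] and [00087], can be sketched as follows. The intelligibility and loudness models are caller-supplied stand-ins (the text names ANSI S3.4-2007 and DIN 45631 as examples of suitable loudness models, which are not reproduced here); all names are illustrative:

```python
def best_gain_vector(candidates, speech_power, nonspeech_power,
                     predict_intelligibility, compute_loudness,
                     criterion):
    """Among a small candidate set of gain-vs-frequency vectors, pick
    the one that maximizes the loudness of the attenuated speechless
    channel while keeping predicted speech intelligibility at or above
    a predetermined criterion."""
    best, best_loudness = None, float("-inf")
    for gains in candidates:
        # Apply the candidate per-band power gains to the speechless channel.
        attenuated = [g * p for g, p in zip(gains, nonspeech_power)]
        if predict_intelligibility(speech_power, attenuated) < criterion:
            continue  # constraint violated: speech would be too heavily masked
        loudness = compute_loudness(attenuated)
        if loudness > best_loudness:
            best, best_loudness = gains, loudness
    return best
```

Because the candidate set has fixed size, the computational load per block is constant, which is the property the text highlights for real-time use.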
[00089] Scaling of the raw N-dimensional amplification gain control vector C6 in response to the attenuation control signal S2 according to the invention can be performed by multiplying (in element 115') each raw gain control value of vector C6 by a corresponding one of the attenuation control values of signal S2, to generate the N-dimensional amplification gain control vector S8. Scaling of the raw N-dimensional amplification gain control vector C5 in response to the attenuation control signal S1 according to the invention can be performed by multiplying (in element 114') each raw gain control value of vector C5 by a corresponding one of the attenuation control values of signal S1, to generate the N-dimensional amplification gain control vector S7. [00090] The system of figure 3 can be implemented in software by a processor (for example, processor 501 of figure 5) that has been programmed to implement the operations described for the system of figure 3. Alternatively, it can be implemented in hardware with circuit elements connected as shown in figure 3. [00091] In variations on the embodiment of figure 3, scaling of the raw amplification gain control vector C5 in response to the attenuation control signal S1 according to the invention (to generate an amplification gain control vector to drive amplifier 116) can be performed in a non-linear manner. For example, such non-linear scaling can generate an amplification gain control vector (replacing vector S7) that causes no attenuation by amplifier 116 (i.e., amplifier 116 applies unity gain, and thus no attenuation of channel 103) when the current value of signal S1 is below a threshold, and causes the current values of the amplification gain control vector (replacing vector S7) to match the current values of vector C5 (so that signal S1 does not change the current values of C5) when the current value of signal S1 exceeds the threshold.
Alternatively, other linear or non-linear scaling of vector C5 (in response to the inventive attenuation control signal S1) can be performed to generate an amplification gain control vector to drive amplifier 116. For example, such scaling of vector C5 can generate an amplification gain control vector (replacing vector S7) that causes no attenuation by amplifier 116 (i.e., amplifier 116 applies unity gain) when the current value of signal S1 is below a threshold, and makes the current value of the amplification gain control vector (replacing vector S7) equal to the current value of vector C5 multiplied by the current value of signal S1 (or some other value determined from this product) when the current value of signal S1 exceeds the threshold. [00092] Similarly, in variations on the embodiment of figure 3, scaling of the raw amplification gain control vector C6 in response to the attenuation control signal S2 according to the invention (to generate an amplification gain control vector to drive amplifier 117) can be performed in a non-linear manner. For example, such non-linear scaling can generate an amplification gain control vector (replacing vector S8) that causes no attenuation by amplifier 117 (i.e., amplifier 117 applies unity gain, and thus no attenuation of channel 102) when the current value of signal S2 is below a threshold, and causes the current values of the amplification gain control vector (replacing vector S8) to match the current values of vector C6 (so that signal S2 does not change the current values of C6) when the current value of signal S2 exceeds the threshold. Alternatively, other linear or non-linear scaling of vector C6 (in response to the inventive attenuation control signal S2) can be performed to generate an amplification gain control vector to drive amplifier 117.
For example, such scaling of vector C6 can generate an amplification gain control vector (replacing vector S8) that causes no attenuation by amplifier 117 (i.e., amplifier 117 applies unity gain) when the current value of signal S2 is below a threshold, and makes the current value of the amplification gain control vector (replacing vector S8) equal to the current value of vector C6 multiplied by the current value of signal S2 (or some other value determined from this product) when the current value of signal S2 exceeds the threshold. [00093] It will be apparent to those of ordinary skill in the art from this description how the system of figure 1, 1A, 2, 2A, or 3 (and variations on any of them) can be modified to filter a multichannel audio input signal having a speech channel and any number of speechless channels. An amplification amplifier (or equivalent software) would be provided for each speechless channel, and an amplification gain control signal would be generated (for example, by scaling a raw amplification gain control signal) to drive each amplifier (or the equivalent software). [00094] As described, the system of figure 1, 1A, 2, 2A, or 3 (and each of the many variations thereon) is operable to carry out embodiments of the inventive method for filtering a multichannel audio signal having a speech channel and at least one speechless channel to improve speech intelligibility determined by the signal.
In a first class of such embodiments, the method includes the steps of: (a) determining at least one attenuation control value (for example, signal S1 or S2 of figure 1, 2, or 3, or signal V1, V2, or V3 of figure 1A or 2A) indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by at least one speechless channel of the audio signal; and (b) attenuating at least one speechless channel of the audio signal in response to the at least one attenuation control value (for example, in element 114 and amplifier 116, or element 115 and amplifier 117, of figure 1, 1A, 2, 2A, or 3). [00095] Typically, the attenuation step comprises scaling a raw attenuation control signal (for example, amplification gain control signal C1 or C2 of figure 1 or 1A, or signal C3 or C4 of figure 2 or 2A) for the speechless channel in response to the at least one attenuation control value. Preferably, the speechless channel is attenuated so as to improve the intelligibility of speech determined by the speech channel without undesirably attenuating speech-reinforcing content determined by the speechless channel.
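Steps (a) and (b) of the first class can be pictured end to end: derive an attenuation control value per block, scale the raw attenuation control value with it, and apply the resulting gain to the speechless channel's samples. A minimal sketch under assumed names (none of these names come from the patent):

```python
def filter_block(nonspeech_samples, raw_gain, control_value):
    """Step (b): scale the raw attenuation control value by the
    attenuation control value from step (a) (e.g., C1 * S1 -> S3),
    then apply the scaled gain to one block of samples of the
    speechless channel. Gains are linear; 1.0 means unity gain."""
    scaled_gain = raw_gain * control_value
    return [s * scaled_gain for s in nonspeech_samples]
```

In a real system the control value would be updated per time interval, so successive blocks of the speechless channel receive different scaled gains.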
In some embodiments in the first class, step (a) includes generating an attenuation control signal (for example, signal S1 or S2 of figure 1, 2, or 3, or signal V1, V2, or V3 of figure 1A or 2A) indicative of a sequence of attenuation control values, each of the attenuation control values indicative of a measure of similarity between speech-related content determined by the speech channel and speech-related content determined by the at least one speechless channel of the audio signal at a different time (for example, in a different time interval), and step (b) includes the steps of: scaling an amplification gain control signal (for example, signal C1 or C2 of figure 1 or 1A, or signal C3 or C4 of figure 2 or 2A) in response to the attenuation control signal to generate a scaled gain control signal (for example, signal S3 or S4 of figure 1 or 1A, or signal S5 or S6 of figure 2 or 2A), and applying the scaled gain control signal to attenuate the speechless channel (for example, asserting the scaled gain control signal to amplification circuit 116 or 117 of figure 1, 1A, 2, or 2A, to control the attenuation of the at least one speechless channel by the amplification circuit).
For example, in some such embodiments, step (a) includes a step of comparing a first sequence of speech-related characteristic values (for example, signal Q of figure 1 or 2) indicative of the speech-related content determined by the speech channel to a second sequence of speech-related characteristic values (for example, signal P of figure 1 or 2) indicative of the speech-related content determined by the speechless channel, to generate the attenuation control signal, and each of the attenuation control values indicated by the attenuation control signal is indicative of a measure of similarity between the first sequence of speech-related characteristic values and the second sequence of speech-related characteristic values at a different time (for example, in a different time interval). In some embodiments, each attenuation control value is a gain control value. [00096] In some embodiments in the first class, each attenuation control value is monotonically related to the probability that the speechless channel is indicative of speech-reinforcing content that reinforces the intelligibility (or other perceived quality) of speech content determined by the speech channel. In some other embodiments in the first class, each attenuation control value is monotonically related to an expected speech-enhancement value of the speechless channel (for example, a measure of the probability that the speechless channel is indicative of speech-reinforcing content, multiplied by a measure of the perceived quality improvement that the speech-reinforcing content determined by the speechless channel would provide to speech content determined by the multichannel signal).
For example, where step (a) includes a step of comparing (for example, in element 134 or 135 of figure 1 or figure 2) a first sequence of speech-related characteristic values indicative of speech-related content determined by the speech channel to a second sequence of speech-related characteristic values indicative of speech-related content determined by the speechless channel, the first sequence of speech-related characteristic values can be a sequence of speech probability values, each indicating the probability, at a different time (for example, in a different time interval), that the speech channel is indicative of speech (rather than audio content other than speech), and the second sequence of speech-related characteristic values can likewise be a sequence of speech probability values, each indicating the probability, at a different time (for example, in a different time interval), that the speechless channel is indicative of speech. [00097] As described, the system of figure 1, 1A, 2, 2A, or 3 (and each of many variations thereof) is also operable to perform a second class of embodiments of the inventive method for filtering a multichannel audio signal having a speech channel and at least one speechless channel to improve speech intelligibility determined by the signal.
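One simple way to realize the comparison of the two speech-probability sequences just described (the Q and P values) is to take, at each time interval, a bounded combination of the two probabilities as the similarity measure. The exact comparison rule used by elements 134 and 135 is not reproduced here, so the choice below is purely an assumed illustration:

```python
def attenuation_controls(q_values, p_values):
    """Per time interval, compare the speech probability of the speech
    channel (Q) with that of the speechless channel (P). Here the
    similarity measure is min(q, p): the control value is high only
    when BOTH channels are likely to carry speech. This particular
    rule is an assumption for illustration, not the patented rule."""
    return [min(q, p) for q, p in zip(q_values, p_values)]
```

Any other measure that is monotonically related to the joint evidence of speech in both channels would fit the same slot.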
In the second class of embodiments, the method includes the steps of: (a) comparing a characteristic of the speech channel and a characteristic of the speechless channel to generate at least one attenuation value (for example, the values determined by signal C1 or C2 of figure 1, or by signal C3 or C4 of figure 2, or by signal C5 or C6 of figure 3) for controlling attenuation of the speechless channel relative to the speech channel; and (b) adjusting the at least one attenuation value in response to at least one speech enhancement probability value (for example, signal S1 or S2 of figure 1, 2, or 3) to generate at least one adjusted attenuation value (for example, the values of signal S3 or S4 of figure 1, or of signal S5 or S6 of figure 2, or of signal S7 or S8 of figure 3) for controlling attenuation of the speechless channel relative to the speech channel. Typically, the adjusting step is or includes scaling (for example, in element 114 or 115 of figure 1, 2, or 3) each said attenuation value in response to a said speech enhancement probability value to generate a said adjusted attenuation value. Typically, each speech enhancement probability value is indicative of (for example, monotonically related to) a probability that the speechless channel is indicative of speech-reinforcing content (content that reinforces intelligibility or another perceived quality of speech content determined by the speech channel). In some embodiments, the speech enhancement probability value is indicative of an expected speech-enhancement value of the speechless channel (for example, a measure of the probability that the speechless channel is indicative of speech-reinforcing content, multiplied by a measure of the perceived quality improvement that the speech-reinforcing content determined by the speechless channel would provide to speech content determined by the multichannel audio signal).
In some embodiments in the second class, the speech enhancement probability value is one of a sequence of comparison values (for example, difference values) determined by a method including a step of comparing a first sequence of speech-related characteristic values indicative of speech-related content determined by the speech channel to a second sequence of speech-related characteristic values indicative of speech-related content determined by the speechless channel, and each of the comparison values is a measure of similarity between the first sequence of speech-related characteristic values and the second sequence of speech-related characteristic values at a different time (for example, in a different time interval). In typical embodiments in the second class, the method also includes the step of attenuating the speechless channel (for example, in amplifier 116 or 117 of figure 1, 2, or 3) in response to the at least one adjusted attenuation value. Step (b) may comprise scaling at least one attenuation value (for example, each attenuation value determined by signal C1 or C2 of figure 1, or another attenuation value determined by an amplification gain control signal or other raw attenuation control signal) in response to at least one speech enhancement probability value (for example, the corresponding value determined by signal S1 or S2 of figure 1). [00098] In operation of the system of figure 1 to perform an embodiment in the second class, each attenuation value determined by signal C1 or C2 is a first factor, indicative of the amount of speechless-channel attenuation necessary to limit the ratio of the power of the signal in the speechless channel to the power of the signal in the speech channel so as not to exceed a predetermined limit, scaled by a second factor monotonically related to the probability that the speech channel is indicative of speech.
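Paragraph [00098]'s rule, restated: the raw attenuation is whatever dB reduction brings the speechless-to-speech power ratio down to the predetermined limit, and that dB amount is then scaled by probability-like factors. Multiplicative scaling in the dB domain is one plausible "monotonically related" choice, assumed here for illustration; all names are hypothetical:

```python
import math

def ducking_gain_db(p_nonspeech, p_speech, max_ratio_db,
                    speech_prob, enhance_prob):
    """Gain in dB (negative = attenuate) for the speechless channel.

    The first factor is the attenuation needed so that the ratio of
    speechless-channel power to speech-channel power does not exceed
    max_ratio_db; it is then scaled by the probability that the speech
    channel carries speech and by the speech enhancement probability
    value (the step (b) adjustment via S1/S2)."""
    ratio_db = 10.0 * math.log10(p_nonspeech / p_speech)
    excess_db = max(0.0, ratio_db - max_ratio_db)  # dB above the limit
    return -excess_db * speech_prob * enhance_prob
```

When either probability factor approaches zero (the center channel is unlikely to carry speech, or ducking is unlikely to help), the applied attenuation approaches 0 dB, i.e., the speechless channel passes unmodified.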
Typically, the adjusting step in these embodiments is (or includes) scaling each C1 or C2 attenuation value by a speech enhancement probability value (determined by signal S1 or S2) to generate an adjusted attenuation value (determined by signal S3 or S4), where the speech enhancement probability value is a factor monotonically related to one of: a probability that the speechless channel is indicative of speech-reinforcing content (content that reinforces intelligibility or another perceived quality of speech content determined by the multichannel signal), and an expected speech-enhancement value of the speechless channel (for example, a measure of the probability that the speechless channel is indicative of speech-reinforcing content, multiplied by a measure of the perceived quality improvement that the speech-reinforcing content in the speechless channel would provide to speech content determined by the multichannel signal). [00099] In operation of the system of figure 2 to perform an embodiment in the second class, each attenuation value determined by signal C3 or C4 is a first factor, indicative of an amount (for example, the minimum amount) of speechless-channel attenuation sufficient to make the predicted intelligibility of speech determined by the speech channel, in the presence of content determined by the speechless channel, exceed a predetermined threshold value, scaled by a second factor monotonically related to the probability that the speech channel is indicative of speech. Preferably, the predicted intelligibility of speech determined by the speech channel in the presence of content determined by the speechless channel is determined according to a psychoacoustically based intelligibility prediction model.
Typically, the adjusting step in these embodiments is (or includes) scaling each said attenuation value by a said speech enhancement probability value (determined by signal S1 or S2) to generate an adjusted attenuation value (determined by signal S5 or S6), where the speech enhancement probability value is a factor monotonically related to one of: a probability that the speechless channel is indicative of speech-reinforcing content, and an expected speech-enhancement value of the speechless channel. [000100] In operation of the system of figure 3 to perform an embodiment in the second class, each attenuation value determined by signal C5 or C6 is determined in steps including determining (in element 301, 302, or 303) a power spectrum, indicative of power as a function of frequency, for each of speech channel 101 and speechless channels 102 and 103, and performing the determination of the attenuation value in the frequency domain, thereby determining attenuation, as a function of frequency, to be applied to frequency components of the speechless channel. [000101] In a class of embodiments, the invention is a method and system for enhancing speech determined by a multichannel audio input signal. In some such embodiments, the inventive system includes an analysis module or subsystem (for example, elements 130-135, 104-109, 114, and 115 of figure 1, or elements 130-135, 201-204, 114, and 115 of figure 2) configured to analyze the input multichannel signal to generate attenuation control values, and an attenuation subsystem (for example, amplifiers 116 and 117 of figure 1 or figure 2). The attenuation subsystem includes an amplification circuit (driven by at least some of the attenuation control values) coupled and configured to apply attenuation to each speechless channel of the input signal to generate a filtered audio output signal.
The amplification circuit is control-value driven, in the sense that the attenuation it applies to the speechless channels is determined by the current values of the control values. [000102] In some embodiments, a ratio of speech channel (for example, center channel) power to speechless channel (for example, side channel and/or rear channel) power is used to determine how much attenuation should be applied to each speechless channel. For example, in the embodiment of figure 1, the gain applied by each of amplification amplifiers 116 and 117 is reduced in response to a decrease in a gain control value (output from element 114 or element 115) that accompanies a decrease (within limits) in the power of speech channel 101 relative to the power of a speechless channel (left channel 102 or right channel 103), as determined in the analysis module. That is, an amplification amplifier attenuates a speechless channel more heavily relative to the speech channel when the speech channel power decreases (within limits) relative to the power of the speechless channel, assuming no change in the probability (as determined in the analysis module) that the speechless channel includes speech-reinforcing content that reinforces speech content determined by the speech channel. [000103] In some alternative embodiments, a modified version of the analysis module of figure 1 or figure 2 individually processes each of one or more frequency sub-bands of each channel of an input signal. Specifically, the signal in each channel can be passed through a bank of bandpass filters, producing three sets of n sub-bands: {L1, L2, ..., Ln}, {C1, C2, ..., Cn}, and {R1, R2, ..., Rn}.
Corresponding subbands are passed to n instances of the analysis module of figure 1 (or figure 2), and the filtered subbands (the outputs of the ducking amplifiers for the speechless channels, and the subbands of the unfiltered speech channel) are recombined by summation circuits to generate the filtered multichannel audio output signal. In order to perform the operations carried out by element 109 of figure 1 in each subband, a separate threshold value (corresponding to the threshold value of element 109) can be selected for each subband. A good choice is a set in which each threshold is proportional to the average amount of speech signal carried in the corresponding frequency region; that is, bands at the extremes of the frequency spectrum are assigned lower thresholds than the bands in frequency regions where speech is dominant. This implementation of the invention can offer a very good trade-off between computational complexity and performance. [000104] Figure 4 is a block diagram of a system 420 (a configured audio DSP) that has been configured to perform an embodiment of the inventive method. System 420 includes programmable DSP circuit 422 (an active speech enhancement module of system 420) coupled to receive a multichannel audio input signal. For example, the speechless channels Lin and Rin of the signal may correspond to channels 102 and 103 of an input signal described with reference to figures 1, 1A, 2, 2A, and 3; the signal may also include additional speechless channels (for example, left rear and right rear channels); and the speech channel Cin of the signal may correspond to channel 101 of an input signal described with reference to figures 1, 1A, 2, 2A, and 3. Circuit 422 is configured, in response to control data from control interface 421, to carry out an embodiment of the inventive method, to generate a speech-enhanced multichannel output audio signal in response to the audio input signal.
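The subband split-process-recombine structure of paragraph [000103] can be sketched as follows. An FFT bin mask stands in for the bandpass filter bank here, which is a simplification (a practical filter bank would use overlapping bandpass filters); all names are assumptions.

```python
import numpy as np

def split_subbands(x, n_bands):
    """Crude FFT-mask stand-in for the bandpass filter bank: each band keeps
    a contiguous range of bins, and the bands sum back to the input."""
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.zeros_like(spec)
        masked[lo:hi] = spec[lo:hi]
        bands.append(np.fft.irfft(masked, n=len(x)))
    return bands

def duck_per_band(speech, nonspeech, band_gains):
    """Apply one ducking gain per subband of the non-speech channel and
    recombine by summation; the speech channel passes through unfiltered."""
    bands = split_subbands(nonspeech, len(band_gains))
    ducked = sum(g * b for g, b in zip(band_gains, bands))
    return speech, ducked
```

With all band gains set to 1 the recombination reproduces the input, which is the property the summation circuits rely on.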
To program system 420, suitable software is asserted from an external processor to control interface 421, and interface 421 in response asserts appropriate control data to circuit 422 to configure circuit 422 to perform the inventive method. [000105] In operation, an audio DSP that has been configured to perform speech enhancement according to the invention (for example, system 420 of figure 4) is coupled to receive an N-channel audio input signal, and the DSP typically performs a variety of operations on the incoming audio (or a processed version of it) in addition to (as well as) speech enhancement. For example, system 420 of figure 4 can be implemented to perform other operations (on the output of circuit 422) in processing subsystem 423. According to various embodiments of the invention, an audio DSP is operable to perform an embodiment of the inventive method after being configured (for example, programmed) to generate an output audio signal in response to an input audio signal by executing the method on the audio input signal. [000106] In some embodiments, the inventive system is or includes a general purpose processor coupled to receive or generate input data indicative of a multichannel audio signal. The processor is programmed with software (or firmware) and / or otherwise configured (for example, in response to control data) to perform any of a variety of operations on the input data, including an embodiment of the inventive method. The computer system of figure 5 is an example of such a system. The system of figure 5 includes general purpose processor 501, which is programmed to perform any of a variety of operations on the input data, including an embodiment of the inventive method.
Processor 501 is programmed to implement the inventive method in response to instructions and data entered by user manipulation of input device 503. Computer readable storage medium 504 (for example, an optical disc or other tangible object) has computer code stored therein that is suitable for programming processor 501 to carry out an embodiment of the inventive method. In operation, processor 501 executes the computer code to process data indicative of a multichannel audio input signal according to the invention, to generate output data indicative of a multichannel audio output signal. [000108] The system described above with reference to figures 1, 1A, 2, 2A, or 3 could be implemented in general purpose processor 501, with input signal channels 101, 102, and 103 being data indicative of the center (speech) and left and right (speechless) input audio channels (for example, of a surround sound signal), and output signal channels 118 and 119 being output data indicative of speech-enhanced left and right output channels (for example, of a speech-enhanced surround sound signal). A conventional digital-to-analog converter (DAC) could operate on the output data to generate analog versions of the output audio channel signals for reproduction through physical speakers. [000109] Aspects of the invention are a computer system programmed to perform any embodiment of the inventive method, and a computer-readable medium that stores computer-readable code for implementing any embodiment of the inventive method. [000110] While specific embodiments of the present invention and applications of the invention have been described here, it will be apparent to those of ordinary skill in the art that many variations on the embodiments and applications described herein are possible without departing from the scope of the invention described and claimed here.
It should be understood that while certain forms of the invention have been shown and described, the invention is not to be limited to the specific embodiments described and shown or to the specific methods described.
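As a rough end-to-end illustration of the general purpose processor implementation of paragraph [000108], the sketch below ducks the left and right channels frame by frame, driven by the center channel power. The frame size and power ratio are illustrative assumptions, and the similarity-based probability scaling of the earlier paragraphs is omitted for brevity.

```python
import numpy as np

def process_stream(center, left, right, frame=512, max_ratio=2.0):
    """Frame-by-frame ducking of the left/right (non-speech) channels driven
    by the center (speech) channel power, as in the figure 1 topology run on
    a general purpose processor. Returns the ducked left/right channels;
    the center channel would be passed through unchanged."""
    out_l = np.empty_like(left)
    out_r = np.empty_like(right)
    for start in range(0, len(center), frame):
        sl = slice(start, start + frame)
        p_c = float(np.mean(center[sl] ** 2))                    # speech power
        p_lr = float(np.mean(left[sl] ** 2 + right[sl] ** 2)) / 2.0
        # Duck only as far as needed to keep side power within max_ratio
        # of the speech power (amplitude gain is the square root).
        g = min(1.0, (max_ratio * p_c / p_lr) ** 0.5) if p_lr > 0.0 else 1.0
        out_l[sl] = g * left[sl]
        out_r[sl] = g * right[sl]
    return out_l, out_r
```

When the center channel dominates, the side channels pass through unchanged; when it is silent, they are fully ducked.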
Claims (15) [0001] 1. Method to filter a multichannel audio signal having a speech channel (101) and at least one speechless channel (102,103) to improve speech intelligibility determined by the signal, characterized by the fact that it comprises the steps of: (a) determining at least one attenuation control value (S1, S2, V3) indicative of a measure of similarity between related speech content determined by the speech channel (101) and related speech content determined by at least one speechless channel (102,103) of the multichannel audio signal; and (b) attenuating at least one speechless channel (102,103) of the multichannel audio signal in response to at least one attenuation control value. [0002] 2. Method, according to claim 1, characterized by the fact that step (b) comprises the step of scaling a raw attenuation control signal (C1, C2, C3, C4) for the speechless channel (102,103) in response to at least one attenuation control value (S1, S2, V3). [0003] 3. Method, according to claim 1, characterized by the fact that step (a) comprises the step of generating an attenuation control signal (S1, S2, V3) indicative of a sequence of attenuation control values, each of the attenuation control values indicative of a measure of similarity at a different time between related speech content determined by the speech channel and related speech content determined by at least one speechless channel (102,103) of the multichannel audio signal, and step (b) comprises the steps of: scaling a ducking gain control signal (C1, C2, C3, C4) in response to the attenuation control signal to generate a scaled gain control signal (S3, S4, S5, S6); and applying the scaled gain control signal to attenuate at least one speechless channel (102,103) of the multichannel audio signal. [0004] 4.
Method, according to claim 3, characterized by the fact that step (a) comprises the step of comparing a first sequence of related speech characteristics (Q), indicative of the related speech content determined by the speech channel, to a second sequence of related speech characteristics (P or T), indicative of the related speech content determined by at least one speechless channel (102,103) of the multichannel audio signal, to generate the attenuation control signal, and each of the attenuation control values indicated by the attenuation control signal (S1, S2, V3) is indicative of a measure of similarity at a different time between the first sequence of related speech characteristics and the second sequence of related speech characteristics. [0005] 5. Method to filter a multichannel audio signal having a speech channel (101) and at least one speechless channel (102,103) to improve speech intelligibility determined by the signal, characterized by the fact that it comprises the steps of: (a) comparing a characteristic of the speech channel (101) and a characteristic of the speechless channel (102,103) to generate at least one attenuation value (C1, C2, C3, C4, C5, C6) to control the attenuation of the speechless channel (102,103) relative to the speech channel; and (b) adjusting at least one attenuation value in response to at least one speech improvement probability value (S1, S2, V3) to generate at least one adjusted attenuation value (S3, S4, S5, S6, S7 or S8) to control the attenuation of the speechless channel (102,103) relative to the speech channel (101). [0006] 6. Method, according to claim 5, characterized by the fact that step (b) comprises the step of scaling each attenuation value (C3, C4) in response to a speech improvement probability value (V3) to generate at least one adjusted attenuation value (S5, S6). [0007] 7.
Method, according to claim 5, characterized by the fact that at least one speech improvement probability value is a sequence of comparison values, and the method comprises the step of: determining the sequence of comparison values by comparing a first sequence of related speech characteristics (Q), indicative of related speech content determined by the speech channel, to a second sequence of related speech characteristics (P or T), indicative of related speech content determined by the speechless channel (102,103), where each of the comparison values is a measure of similarity at a different time between the first sequence of related speech characteristics and the second sequence of related speech characteristics. [0008] 8. Method, according to claim 5, characterized by the fact that each attenuation value (C1, C2, C3, C4) generated in step (a) is a first factor, indicating an amount of attenuation of the speechless channel (102,103) necessary to limit the ratio of the signal power in the speechless channel to the signal power in the speech channel (101) so as not to exceed a predetermined limit, scaled by a second factor monotonically related to the probability of the speech channel being indicative of speech. [0009] 9. Method, according to claim 5, characterized by the fact that each attenuation value (C1, C2, C3, C4) generated in step (a) is a first factor, indicating an amount of attenuation of the speechless channel (102,103) sufficient to make the predicted intelligibility of speech determined by the speech channel (101), in the presence of content determined by the speechless channel (102,103), exceed a predetermined threshold value, scaled by a second factor monotonically related to the probability of the speech channel being indicative of speech. [0010] 10.
System to improve speech determined by a multichannel audio input signal having a speech channel (101) and at least one speechless channel (102,103), characterized by the fact that it includes: an analysis subsystem (104,105,106,107,108,109,130,131,132,134,135,114,115 or 201,202,203,204,130,131,134,214,114,115) configured to analyze the multichannel audio input signal to generate attenuation control values (S3, S4 or S5, S6), where each of the attenuation control values is indicative of a measure of similarity between related speech content determined by the speech channel and related speech content determined by at least one speechless channel of the input signal; and an attenuation subsystem (117,118) configured to apply ducking attenuation, directed by at least some of the attenuation control values, to at least one speechless channel of the input signal to generate a filtered audio output signal. [0011] 11. System, according to claim 10, characterized by the fact that the analysis subsystem (201,202,203,204,130,131,134,214,114,115) is configured to derive a derived speechless channel (L + R) from at least one speechless channel of the signal, and to generate each of at least some of the attenuation control values (S5, S6) to be indicative of a measure of similarity between the related speech content determined by the speech channel and the related speech content determined by the derived speechless channel of the audio signal. [0012] 12.
Computer readable medium (504) including code for programming a processor (501) to process data indicative of a multichannel audio signal having a speech channel and at least one speechless channel, to improve the speech intelligibility determined by the signal, characterized by the fact that it includes the steps of: (a) determining at least one attenuation control value (S1, S2, V3) indicative of a measure of similarity between related speech content determined by the speech channel and related speech content determined by the speechless channel; and (b) attenuating the speechless channel in response to at least one attenuation control value. [0013] 13. Computer-readable medium, according to claim 12, characterized by the fact that it includes code to program the processor to scale data indicative of a raw attenuation control signal (C1, C2, C3, C4, C5, C6) for the speechless channel in response to at least one attenuation control value (S1, S2, V3). [0014] 14. Computer-readable medium, according to claim 13, characterized by the fact that it includes code to program the processor to compare a first sequence of related speech characteristics (Q), indicative of the related speech content determined by the speech channel, to a second sequence of related speech characteristics (P or T), indicative of the related speech content determined by the speechless channel, to generate the sequence of attenuation control values, so that each of the attenuation control values is indicative of a measure of similarity at a different time between the first sequence of related speech characteristics and the second sequence of related speech characteristics. [0015] 15.
Computer readable medium, according to claim 13, characterized by the fact that the first sequence of related speech characteristics (Q) is a sequence of first speech probability values, each of the first speech probability values indicating the probability, at a different time, that the speech channel is indicative of speech, and the second sequence of related speech characteristics (P or T) is a sequence of second speech probability values, each of the second speech probability values indicating the probability, at a different time, that the speechless channel is indicative of speech.
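The comparison of speech-likelihood sequences recited in claims 4, 7, 14, and 15 can be sketched with a zero-lag normalized correlation. The correlation choice and the clamping to [0, 1] are illustrative assumptions; the claims do not fix a particular similarity measure.

```python
def speech_likelihood_similarity(seq_a, seq_b):
    """Zero-lag normalized correlation between two equal-length sequences
    of per-frame speech-likelihood values, used as the similarity measure."""
    n = len(seq_a)
    mean_a = sum(seq_a) / n
    mean_b = sum(seq_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(seq_a, seq_b))
    den_a = sum((a - mean_a) ** 2 for a in seq_a) ** 0.5
    den_b = sum((b - mean_b) ** 2 for b in seq_b) ** 0.5
    if den_a == 0.0 or den_b == 0.0:
        return 0.0  # a flat sequence carries no timing information
    return num / (den_a * den_b)

def attenuation_control_value(speech_seq, nonspeech_seq):
    """Clamp the similarity to [0, 1]: high similarity suggests the
    non-speech channel reinforces the dialog (so ducking should be backed
    off), low similarity suggests it can be ducked freely."""
    return max(0.0, speech_likelihood_similarity(speech_seq, nonspeech_seq))
```

Identical likelihood sequences yield a control value of 1; uncorrelated or anti-correlated sequences yield a value at or near 0.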
Legal status:
2019-01-08 | B06F | Objections, documents and/or translations needed after an examination request according to art. 34 of the Industrial Property Law
2019-09-10 | B06U | Preliminary requirement: requests with searches performed by other patent offices; suspension of the patent application procedure
2020-06-02 | B09A | Decision: intention to grant
2020-11-17 | B16A | Patent or certificate of addition of invention granted (term of validity: 20 (twenty) years counted from 28/02/2011, subject to the legal conditions)
Priority:
Application number | Filing date | Title
US 61/311,437 (provisional) | 2010-03-08 |
PCT/US2011/026505 (WO2011112382A1) | 2011-02-28 | Method and system for scaling ducking of speech-relevant channels in multi-channel audio
BR122019024041-8 (BR122019024041B1) | 2011-02-28 | Method for filtering a multi-channel audio signal and computer-readable media