Amplifiers

Introduction

The job of any audio amplifier is simple: create a larger copy of the input signal at its output terminals. The ideal amplifier will cover the desired frequency band ruler-flat, with no amplitude deviation, perfect phase response, no distortion, and no added output noise. Further, it will do this in a power-efficient manner, producing no excess heat, audible noise, or radiated EM noise.

Amplifiers come in two broad flavors: power amps intended to drive loudspeakers, and pre-amps, line-amps, distribution-amps, and similar devices used with higher impedance loads. In almost all cases, audio amplifiers are designed as constant voltage devices. In other words, they exhibit high input impedance and low output impedance, so their output voltage is independent of the load impedance. Maximum power transfer is usually not the desired mode of operation.

Pre-amplifiers are used to increase the level of devices with very low output signals, such as microphones and the pickup cartridges used with vinyl albums. As the signal levels are so small, noise performance is usually paramount in these devices. Line amps and distribution amps deal with larger signal levels, usually in the several hundred millivolt range. Their main duty is to supply sufficient drive current to other devices that are located some distance away, or to split a signal in order to drive several devices simultaneously. An example of the former case is the need to drive the stage amplifiers from a mixing desk located in the audience of a concert. An example of the latter is a device that splits a signal to both recording and broadcast feeds for a live concert.

Some specialty devices are not designed to offer flat response, but rather produce a very specific equalized response. Two examples of equalized amplifiers are RIAA phono pre-amps and NAB tape pre-amps.

Nominal output level varies between home and professional users. The usual pro level is +4 dBu, while consumer gear typically operates at -10 dBV.
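As a quick numerical check of these nominal levels, here is a minimal Python sketch (the helper names are hypothetical) that converts dBu and dBV figures into RMS voltages, using the standard references of 0.775 V RMS for 0 dBu and 1 V RMS for 0 dBV:

```python
# Convert nominal operating levels into RMS voltages.
# References: 0 dBu = 0.775 V RMS, 0 dBV = 1.0 V RMS.

def dbu_to_volts(level_dbu: float) -> float:
    """RMS voltage corresponding to a level in dBu."""
    return 0.775 * 10 ** (level_dbu / 20)

def dbv_to_volts(level_dbv: float) -> float:
    """RMS voltage corresponding to a level in dBV."""
    return 1.0 * 10 ** (level_dbv / 20)

print(f"Pro (+4 dBu):       {dbu_to_volts(4):.3f} V RMS")    # ~1.228 V
print(f"Consumer (-10 dBV): {dbv_to_volts(-10):.3f} V RMS")  # ~0.316 V
```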

Measurement

There are at least three items of interest in any amplifier: noise, frequency response, and distortion. Noise levels are normally measured using a weighting filter in order to correlate the value to the human hearing mechanism. Essentially, the input to the device is shorted (or possibly terminated with a specific impedance), and a weighting filter is placed at the output. A highly sensitive RMS-reading AC voltmeter is then placed at the output of the weighting filter. The result is the weighted RMS output noise voltage. This value is divided into the nominal output level to arrive at the signal-to-noise ratio (normally expressed in decibels).

Frequency response is measured by recording relative gain across a range of frequencies; 1 kHz is often used as the reference frequency. A decibel-reading voltmeter is very handy for this. It must be noted, though, that the voltmeters used for both noise and frequency response measurements need to be wideband types, covering the range of human hearing. The endpoints, or corner frequencies, are those where the gain drops by 3 dB (half power). If the rolloff rates are first order (i.e., 6 dB per octave), the corner frequencies may also be determined via the rise and fall times of square waves. The upper break can be found as 0.35/Trise, where Trise is the amount of time it takes the leading edge of the wave to traverse from 10% to 90% of its peak value. The lower break may be determined from 0.35/Tfall, where Tfall is the amount of time it takes the trailing edge of the wave to traverse from 90% to 10% of its peak value. Note that in both cases it is imperative that the frequency be low enough so that the waveform eventually “flattens out” at the end of the cycle.
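To make these calculations concrete, the sketch below (Python, with made-up measurement values) computes the corner frequencies from square-wave edge times using the 0.35/T relationship above, and a signal-to-noise ratio from an assumed weighted noise voltage:

```python
import math

def corner_from_edge_time(edge_time_s: float) -> float:
    """First-order corner frequency (Hz) from a 10%-to-90% edge time,
    per the f = 0.35 / T relationship described above."""
    return 0.35 / edge_time_s

def snr_db(nominal_v_rms: float, noise_v_rms: float) -> float:
    """Signal-to-noise ratio in dB, given the weighted RMS noise voltage."""
    return 20 * math.log10(nominal_v_rms / noise_v_rms)

# Hypothetical measurements: 3.5 us rise time, 17.5 ms fall (decay) time,
# and 50 uV of weighted noise against a 0.775 V nominal output.
print(f"Upper corner: {corner_from_edge_time(3.5e-6) / 1e3:.0f} kHz")  # 100 kHz
print(f"Lower corner: {corner_from_edge_time(17.5e-3):.0f} Hz")        # 20 Hz
print(f"S/N ratio:    {snr_db(0.775, 50e-6):.1f} dB")                  # ~83.8 dB
```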

Distortion measurements usually come in two flavors: the simple Total Harmonic Distortion (THD) and the somewhat more complex Intermodulation Distortion (IMD). THD is measured by feeding the device with a very pure sine wave. The output of the device is fed into a distortion analyzer, which is little more than a notch filter followed by a sensitive voltmeter. The notch filter is tuned to the test frequency; whatever is left over is distortion products plus any residual noise. This signal can then be referenced to the non-filtered output, yielding a percent THD specification. This process is often repeated over a range of frequencies, as there is no guarantee that the THD spec at, say, 1 kHz will be the same as it is at 100 Hz. The IMD spec is a little different in that it utilizes two sine waves, although the measurement concept is similar. The dual tones will result in intermodulation products that are not produced by the simple THD test.

It is worth noting that there is disagreement in the industry as to the audibility of very low levels of distortion on normal program material. There are those who believe that anything below 1% is not worth discussing, and an equally determined group who believe that smaller values may still be audible. A major argument of the first group is that seeking to lower THD further below 1% may cause audible side effects. That is, although the amplifier may “bench test” better, it does not follow that it will sound superior (it may in fact sound worse). A common technique to lower THD is to use very heavy levels of negative feedback, but it has been shown that this “sledgehammer” approach can lead to other, more subtle forms of distortion that would not be picked up by a simple THD test.
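An analog distortion analyzer performs this notch-and-measure operation in hardware, but the same idea can be sketched digitally. The following Python fragment (a rough illustration using synthetic data, not a replacement for a proper analyzer) builds a slightly distorted 1 kHz sine and estimates THD from an FFT by comparing the harmonic amplitudes to the fundamental:

```python
import numpy as np

# One second of a "distorted" 1 kHz sine: fundamental plus small
# second and third harmonics (assumed levels, for illustration only).
fs = 48000
t = np.arange(fs) / fs
signal = (np.sin(2 * np.pi * 1000 * t)
          + 0.002 * np.sin(2 * np.pi * 2000 * t)
          + 0.001 * np.sin(2 * np.pi * 3000 * t))

# With a 1 second record the FFT bin spacing is 1 Hz, so bin k sits at k Hz.
spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
fundamental = spectrum[1000]
harmonics = spectrum[[2000, 3000, 4000, 5000]]

# THD: RMS sum of the harmonics relative to the fundamental.
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD: {thd * 100:.3f}%")  # ~0.224% for the levels above
```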

Other specifications of interest include slew rate (maximum rate of change of output voltage) and the associated power bandwidth, input and output impedance, maximum output level (for both voltage and power), and phase response. For power amplifiers, a key parameter is the maximum output power into a specific load impedance.
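These items are interrelated. For example, the standard relationship f_max = SR / (2 pi Vpeak) ties slew rate to power bandwidth, the highest frequency at which a full-amplitude sine wave can be reproduced without slew limiting. A short sketch with hypothetical amplifier numbers:

```python
import math

def power_bandwidth(slew_rate_v_per_us: float, v_peak: float) -> float:
    """Highest frequency (Hz) a full-amplitude sine can reach without
    slew limiting: f_max = SR / (2 * pi * Vpeak)."""
    return (slew_rate_v_per_us * 1e6) / (2 * math.pi * v_peak)

# Hypothetical power amp: 13 V/us slew rate and a 40 V peak swing
# (a 100 W / 8 ohm amplifier swings about 40 V peak).
print(f"Power bandwidth: {power_bandwidth(13, 40) / 1e3:.1f} kHz")  # ~51.7 kHz
```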

FTC Power Measurement

The average consumer generally believes one thing about power amplifiers: more is better. Understanding this, manufacturers in the 1960s and early 1970s began to post outrageous claims about the output power capabilities of their amplifiers. Very bizarre means were used to obtain the highest possible numbers, including summing both channels, overdriving to high distortion levels, using very short-term (transient) measurements, and so on. The situation became so bad that the Federal Trade Commission stepped in and established a standard test measurement procedure and advertising specification so that consumers could reasonably compare models from different manufacturers.

The idea is fairly simple. First, the amplifier under test must be pre-conditioned. This means that it must be run at one-third of its rated power for one hour before the measurements commence. This is a fairly stringent warm-up and ensures that whatever is measured is a reasonable long-term value, not something that can only be sustained for a very brief interval under ideal conditions. Beyond this, both channels of a stereo amplifier must be driven simultaneously into a stated load resistance, with the per-channel power being reported. The test is conducted across a stated range of frequencies and total harmonic distortion (THD) measurements are taken throughout; the worst-case THD is included. An example specification for a stereo amplifier reads as follows: 100 Watts RMS per channel into 8 Ohms with both channels driven, from 20 Hz to 20 kHz, with no more than 0.1% THD. Any statement with less information than this is not a legal specification (e.g., an ad stating “100 Watt output per channel” or “100 Watts from 20 Hz to 20 kHz”).

In an extension to the original idea, some manufacturers began reporting headroom. Almost all consumer amplifiers are “under-rated,” meaning that they “bench” better than their spec. The difference between the spec and the actual limit (specified in decibels) is the amplifier’s headroom. Note that a manufacturer can play with the numbers a little: if one is willing to accept more distortion or decreased headroom, one can usually squeeze out a few more Watts for the specification. Another extension that was tried was the reporting of output power in dBW (decibels relative to 1 Watt). So, instead of saying that the maximum output power was 100 Watts, the spec would claim 20 dBW. Although this makes perfect sense to the technician or engineer, the idea was quickly abandoned[1].
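The dBW and headroom arithmetic is simple enough to capture in a few lines of Python; the figures below are hypothetical:

```python
import math

def watts_to_dbw(p_watts: float) -> float:
    """Output power expressed in dBW (decibels relative to 1 Watt)."""
    return 10 * math.log10(p_watts / 1.0)

def headroom_db(clipping_watts: float, rated_watts: float) -> float:
    """Headroom: how far the actual clipping point sits above the rating."""
    return 10 * math.log10(clipping_watts / rated_watts)

# A hypothetical amp rated at 100 W that actually clips at 130 W:
print(f"Rated power: {watts_to_dbw(100):.1f} dBW")     # 20.0 dBW
print(f"Headroom:    {headroom_db(130, 100):.2f} dB")  # ~1.14 dB
```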

It is worth noting that in its original wording, the FTC Amplifier Rule applied only to home consumer audio equipment. It did not apply to professional equipment. Also, car audio was in its infancy at the time, and little thought was given to it. Unfortunately, this loophole loomed large as car audio grew, and again the marketers began to make inflated claims. For example, many car systems used 4-channel amplifiers (2 front, 2 rear), and it was not uncommon to see “60 Watts” claimed for a 4x15 Watt system. While manufacturers generally followed this ruling to the letter through the 1970s and 1980s, recent years have seen somewhat of a lapse. Along with the misleading reporting of car audio specs, home equipment specs have become slack. In any case, given the disparity between the norms for car and home audio, it is usually not accurate to compare amplifiers with similar power ratings between the two worlds.

A good technician or engineer will spot a misnomer in the wording of the specification, and that is the use of the term “Watts RMS”. There is, of course, no such thing as an RMS Watt. RMS is a measurement/calculation technique used to derive power from AC voltages (or currents). The term arises from the fact that modern amplifiers are designed as constant-voltage devices. Thus, output power is determined by measuring the output RMS voltage and then computing the power via V²/R rather than through the use of a Wattmeter. The more accurate wording of “so-many Watts as calculated from the RMS voltage” is far too cumbersome for the average consumer (and quite meaningless as well), so the artificial term “Watts RMS” was born.
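A quick sketch of that calculation, using a hypothetical bench measurement:

```python
def power_from_rms(v_rms: float, load_ohms: float) -> float:
    """'Watts RMS' as actually computed: P = Vrms^2 / R."""
    return v_rms ** 2 / load_ohms

# Hypothetical measurement: 28.28 V RMS across an 8 ohm load resistor,
# just below clipping, corresponds to the 100 W example spec above.
print(f"{power_from_rms(28.28, 8):.1f} W")  # ~100.0 W
```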

In late 2000, the FTC revisited this ruling because two items began to complicate the issue: Self-powered loudspeakers, and multi-channel systems such as 5.1.

Example Problems

1.  Q: The output of a distortion analyzer is 2 mV for a nominal output of 0.775 volts. What is the THD spec? A: THD = 2 mV / 0.775 V = 0.00258, or 0.258%.

2.  Q: What is the equivalent output rating of a 200 Watt amplifier when specified in dBW? A: P' = 10 log10(P / 1 W) = 10 log10(200 W / 1 W) ≈ 23 dBW.

3.  Q: Which represents the larger factor increase, 10 Watts to 40 Watts, or 10 dBW to 20 dBW? A: If you convert from Watts, for the first pair you have 10 dBW and 16 dBW, so the second pair exhibits the larger factor (6 dB vs. 10 dB). If you convert from dBW, for the second pair you have 10 Watts and 100 Watts, for a factor of 10 versus a factor of 4 for the first pair. Note that 6 dB is a factor of 4 for power, and that 10 dB is a factor of 10 for power, so this crosschecks.
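For those following along with a computer, a few lines of Python can cross-check the three answers above:

```python
import math

# Problem 1: THD from the analyzer residual vs. the nominal output.
print(f"1: THD = {0.002 / 0.775 * 100:.3f}%")  # 0.258%

# Problem 2: 200 Watts expressed in dBW.
print(f"2: {10 * math.log10(200):.0f} dBW")    # 23 dBW

# Problem 3: the two increases, compared as power factors.
print(f"3: 40 W / 10 W = {40 / 10:.0f}x vs. "
      f"10 dBW -> 20 dBW = {10 ** ((20 - 10) / 10):.0f}x")  # 4x vs. 10x
```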


[1] It is the author’s opinion that it was abandoned largely due to the fact that manufacturers thought it would cut sales. First, the dBW values are smaller than the corresponding Watt values for typical amplifiers, so a consumer comparing a 20 Watt amp vs. a 20 dBW amp might not realize that the former unit has only one-fifth the power of the latter. Also, upgrading was less enticing. Jumping from 50 Watts to 100 Watts sounds like a large increase, but the equivalent jump from 17 dBW to 20 dBW seems far less so, especially when one remembers that it takes an 8 to 10 dB increase for a subjective doubling of loudness.