- Why do we use decibels? The ear is capable of hearing a very large range of sounds: the ratio of the sound pressure that causes permanent damage from short exposure to the faintest sound that (undamaged) ears can hear is more than a million. To deal with such a range, logarithmic units are useful: the log of a million is 6, so this ratio represents a difference of 120 dB. Psychologists also say that our sense of hearing is roughly logarithmic.
1. The decibel (dB) is used to measure sound level, but it is also widely used in electronics, signals and communications.
2. The dB is a logarithmic unit used to describe a ratio. The ratio may be of power, sound pressure, voltage, intensity or several other quantities.
3. For instance, suppose we have two loudspeakers, the first playing a sound with power P1, and another playing a louder version of the same sound with power P2, but everything else (how far away, frequency) kept the same.
The difference in decibels between the two is defined to be
10 log (P2/P1) dB, where the log is to base 10.
If the second had twice the power of the first, the difference would be
10 log (P2/P1) = 10 log 2 ≈ 3 dB.
To continue the example, if the second had 10 times the power of the first, the difference in dB would be
10 log (P2/P1) = 10 log 10 = 10 dB.
If the second had a million times the power of the first, the difference in dB would be
10 log (P2/P1) = 10 log 1,000,000 = 60 dB.
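These figures can be checked numerically; a minimal sketch in Python (the function name `power_ratio_db` is just for illustration, not from the notes):

```python
import math

def power_ratio_db(p2, p1):
    """Difference in decibels between two powers p2 and p1."""
    return 10 * math.log10(p2 / p1)

# Twice the power: 10 log10(2) ≈ 3 dB
print(round(power_ratio_db(2.0, 1.0), 2))    # 3.01
# Ten times the power: exactly 10 dB
print(power_ratio_db(10.0, 1.0))             # 10.0
# A million times the power: 60 dB
print(power_ratio_db(1_000_000.0, 1.0))      # 60.0
```

Note that the often-quoted "doubling the power adds 3 dB" is really 3.01 dB; the rounding is conventional.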
Using this as a base, the next step is to understand the channel capacity formula.
Channel capacity is concerned with the information-handling capacity of a given channel. It is affected by:
– The attenuation of the channel, which varies with frequency as well as channel length.
– The noise induced into the channel, which increases with channel length.
– Non-linear effects such as clipping of the signal.
Some of these effects may change with time, e.g. the frequency response of a copper cable changes with temperature and age. Obviously we need a way to model a channel in order to estimate how much information can be passed through it. Although we can compensate for non-linear effects and attenuation, it is extremely difficult to remove noise.
The highest rate of information that can be transmitted through a channel is called the channel capacity.
Shannon’s Channel Coding Theorem
• Shannon’s Channel Coding Theorem states that if the information rate R (= rH bit/s) is less than or equal to
the channel capacity C (i.e. R ≤ C), then there is, in principle, a coding technique which enables transmission
over the noisy channel with no errors.
• The converse is that if R > C, then the probability of error is close to 1 for every symbol.
• The channel capacity is defined as:
the maximum rate of reliable (error-free) information transmission through the channel.
Shannon’s Channel Capacity Theorem
• Shannon’s Channel Capacity Theorem (or the Shannon-Hartley Theorem) states that:
C = B log2(1 + S/N) bit/s
where C is the channel capacity, B is the channel bandwidth in hertz, S is the signal power and N is the noise
power (N = N0B, with N0/2 being the two-sided noise PSD).
As the bandwidth goes to infinity, the capacity goes to 1.44 S/N0, i.e. it tends to a finite value and is not infinite!
Note: S/N is a ratio in watt/watt, not in dB.
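The infinite-bandwidth limit can be seen numerically. A sketch in Python, with illustrative values for S and N0 (as B grows, S/(N0·B) shrinks, and B·log2(1 + S/(N0·B)) approaches S/(N0·ln 2) ≈ 1.44 S/N0):

```python
import math

S = 1.0       # signal power in watts (illustrative value)
N0 = 1e-3     # one-sided noise PSD in watts/Hz (illustrative value)

def capacity(B):
    """Shannon-Hartley capacity in bit/s for bandwidth B hertz."""
    return B * math.log2(1 + S / (N0 * B))

limit = S / (N0 * math.log(2))    # = 1.44 S/N0, the infinite-bandwidth limit
for B in (1e3, 1e5, 1e7):
    print(B, capacity(B))         # capacity rises with B but saturates
print(limit)                      # ≈ 1442.7 bit/s, never exceeded
```

Widening the bandwidth also admits more noise power N0B, which is why the capacity saturates rather than growing without bound.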
Since figures are often cited in dB, a conversion may be needed. For example, 30 dB is a power ratio of 10^(30/10) = 10^3 = 1000.