Music/EE 65 lecture notes

September 16

Miking—how do you decide what to mic and where?

• What's the instrument?
• What's the performance style?
• Is the room sound good? Is it quiet?
• Are there other instruments playing at the same time?
• How much room sound do you want?
• What mics do you have?
• Do you want stereo or mono? How much stereo?

Good positioning is always better than trying to fix things later. Good positioning means the phase relationships are favorable--and bad phasing is hard to fix with EQ!

Mics need to be closer to the source than our ears would be: listening to a recording, we don't have the visual cues to tell us what to listen for, and mics can't distinguish between direct and reflected sound. We always want more direct sound in the recording. You can add reflections (echo/reverb) later, but it's impossible to remove them.

Listening to the instruments in the space: finding the right spot to record. Get the room balance in your ear, then take two steps forward and put the mic there.

3-to-1 rule: when using multiple microphones, mics need to be at least three times as far away from each other as they are from their individual sources (see the sketch below).
Winds & strings: at least 3 ft away from the source if possible, except when it would violate the 3-to-1 rule!
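Why the rule works, roughly: by the inverse square law, a source three times as far away arrives about 9.5 dB down, low enough that phase interference between the two mics' pickup of the same source stops being a problem. A minimal sketch (my own illustration; the distances are hypothetical):

    import math

    def spill_db(d_own: float, d_other: float) -> float:
        """Level of a source in a distant mic, in dB relative to its own
        close mic, assuming simple inverse-square (free-field) spreading."""
        return 20 * math.log10(d_own / d_other)

    # Mic A is 1 ft from its source; mic B is 3 ft from that same source.
    print(spill_db(1.0, 3.0))    # about -9.5 dB: spill is ~10 dB down
    # Break the rule: mic B only 1.5 ft from mic A's source.
    print(spill_db(1.0, 1.5))    # about -3.5 dB: strong, phasey spill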

String sections: mic in stereo as an ensemble, not meant to be a bunch of soloists.

Horn sections can go either way: mic individually, or, if there is enough isolation from other instruments, as a section.

Guitar (acoustic): exception since we are used to hearing close-miked guitars. But there is no one good spot on the guitar, since sound comes from all over the instrument: soundhole (too boomy by itself), body, top, neck, headstock. Best to use 2 mics, or if room is quiet, from a distance.

Guitar amp: Experiment to see what sounds best, maybe close (a dynamic is usually just fine—not a lot of highs and lows), maybe far, maybe both (condenser for room sound).

Piano: exception since pianists like the sound of the instrument close up--doesn’t really need the room to expand. Different philosophies for pop and classical. 3:1 rule on the soundboard, or even better, 5:1, since reflections are very loud and phase relationships very complex. Can use spaced cardioids, spaced omnis, or coincident cardioids, in which case you want to reposition them for the best balance within the instrument (bass/treble).

Drums: first of all, make them sound good! Tune them, dampen rattles, dampen heads so they don’t ring as much (blanket in kick drum).
Three philosophies--the choice will depend on spill, room sound, and how much power and immediacy you want in the drums.
1) stereo pair overhead (cardioid or omni): good for jazz, if you don’t mind some spill, or if the drums are in a good-sounding isolation room.
2) add kick (dynamic, or a condenser that can handle high levels) and snare mics for extra punch and flexibility.
3) mic everything. Complicates things because of spill; you may have to add noise gates (isolating individual drums) later.

Mic techniques for stereo: none of them are perfect! XY, ORTF, DIN, NOS, MS, spaced omni, spaced cardioid, Decca tree.


September 11

How pickup patterns are designed into mics:
Omni: the diaphragm is open to the air only on the front, so it responds to pressure changes arriving from any direction. Slight shadow effect on sound from the rear, but otherwise truly omni at all frequencies.
Cardioid: sound from the rear reaches the rear of the mic before it reaches the front. So this mic has ports that feed sound from the rear to the back of the diaphragm with a slight delay, created by a labyrinth or a material that slows down the sound, so that rear sound arrives at the front and back of the diaphragm at the same time and cancels itself out. Sounds arriving from the front cause the greatest difference in pressure between front and back; sounds from the sides and rear the least.
Figure 8: one diaphragm open to both sides. Sounds from the side arrive on both sides at the same time and cancel each other out.
Hypercardioid, supercardioid, shotgun: variations using different types of ports.

Frequency response changes off axis--it isn't flat. This causes coloration of off-axis instruments. When using multiple mics in a setup, you have to be careful with this.

Proximity effect: the sound acting on a directional mic's diaphragm has two components: the phase (arrival-time) difference and the amplitude difference. A directional mic blocks sound from the rear and sides. Sound from the front goes to the front of the diaphragm; sound from the rear goes to the front and, through a labyrinth, to the rear. The difference of arrival time/phase is what moves the diaphragm. As frequency rises, the phase difference goes up--the arrival-time difference is fixed, so it becomes a greater proportion of the waveform. The slope is 6 dB/octave. Electronics are built in to compensate: bring down the high end @ 6 dB/octave.

Omnidirectional mics don’t block sound from any angle, so the frequency response is flat—don’t need compensation!

Amplitude is the actual air pressure pushing against the diaphragm. At a distance, because of the inverse square law, the amplitude difference between the front and back of the diaphragm is negligible. But as the distance shrinks, that amplitude difference gets larger, and at very short distances the amplitude component overwhelms the phase component. The amplitude component does not change with frequency—the result is that, after the 6 dB/octave compensation, the highs are attenuated but the lows are not. Hence, bass boost.
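A toy model of both components together (my own sketch, assuming a 2 cm front-to-back path around the capsule and a point source in a free field):

    import cmath, math

    C = 343.0          # speed of sound, m/s
    D = 0.02           # assumed front-to-back path around the capsule: 2 cm

    def gradient_output(f: float, r: float) -> float:
        """Front-minus-back pressure magnitude on a gradient diaphragm
        for a point source at distance r (free field, toy model)."""
        phi = 2 * math.pi * f * D / C            # extra phase of the rear path
        front = 1.0 / r                          # pressure amplitude falls as 1/r
        back = cmath.exp(-1j * phi) / (r + D)    # arrives later, slightly weaker
        return abs(front - back)

    for r in (3.0, 0.05):                        # 3 m away vs. 5 cm away
        ratio = gradient_output(2000, r) / gradient_output(100, r)
        print(f"r = {r} m: 2 kHz is {20 * math.log10(ratio):.1f} dB above 100 Hz")
    # Far away (~26 dB): the phase term dominates, rising 6 dB/octave,
    # which the mic's built-in compensation counteracts.
    # Up close (~7 dB): the flat amplitude term dominates, so after that
    # 6 dB/octave de-emphasis the lows come out boosted -- proximity effect.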

Microphone techniques: respect the historical use of instruments!
For vocals: pop filters, monitors.
For piano: stereo image?
For strings: not on top of the bridge. Too close loses resonance and high frequencies.

Impedance: an electrical characteristic that has to be handled correctly to get efficient energy transfer. In audio, best to have a low-impedance source feeding a high-impedance input. If the impedance relationship is wrong, signal reflects back along the line, causing loss. At high frequencies (above 1 MHz) this also causes frequency anomalies, but not at audio.
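Seen as a simple voltage divider, this is easy to quantify (a sketch; the impedance values are made up but typical):

    def delivered_fraction(z_source: float, z_input: float) -> float:
        """Fraction of the source voltage that appears across the input."""
        return z_input / (z_source + z_input)

    # Low-impedance mic into a high-impedance input: almost nothing lost.
    print(delivered_fraction(150, 10_000))   # ~0.985 of the voltage
    # Equal ("matched") impedances: half the voltage, i.e. -6 dB.
    print(delivered_fraction(150, 150))      # 0.5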

Cables: Balanced vs. Unbalanced:
Balanced = two conductors plus a surrounding shield or ground. The two conductors carry the signal in electrical opposition to each other: when one has positive voltage, the other has negative. At the receiving end, one leg is flipped in polarity—also called phase—and the two are added, so the signal reinforces itself. Any noise introduced along the cable affects both conductors identically, so when one leg is flipped and the two are summed, the noise cancels out: flip any signal and add it to itself and the result is zero (the sketch after the connector list shows this numerically). This means little noise even over long lengths of cable. Best for microphones, which have low signal levels, but also for long runs at line level.
Unbalanced = single conductor and shield. Cheaper and easier to wire, but open to noise as well as signal loss over long length, particularly high frequencies due to capacitance (of interest to EEs only). Okay for line-level signals over short distances (like hi-fi rigs or electronic instruments), or microphones over very short distances (cheap recorders and PA systems).
Connectors: Balanced: XLR (as on microphone cable), 1/4” tip-ring-sleeve.
Unbalanced: RCA (“phono”), 1/4” (“phone”), mini (cassette deck or computer).
Mini comes in stereo version also (tip-ring-sleeve), for computers and Walkman headphones (both channels share a common ground). 1/4” TRS is also used as a stereo cable for headphones = two unbalanced channels with a common ground.
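A toy numeric model of how the balanced trick cancels noise (the sample values are invented):

    # Signal is sent in opposite polarity on the two conductors; the same
    # interference lands on both; the receiver flips one leg and adds.
    signal = [0.5, -0.3, 0.8, 0.1]      # hypothetical audio samples
    noise  = [0.2,  0.2, -0.4, 0.3]     # identical noise on both legs

    hot  = [ s + n for s, n in zip(signal, noise)]
    cold = [-s + n for s, n in zip(signal, noise)]

    received = [h - c for h, c in zip(hot, cold)]  # flip cold leg and add
    print(received)  # [1.0, -0.6, 1.6, 0.2] -- 2x the signal, zero noise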

Guitar pickups = two kinds: piezo (mechanical vibration to electric current) and magnetic (vibration of the metal strings disturbs a fixed magnetic field, inducing a current; the humbucker is a special hum-cancelling dual-coil type).

DI boxes = transformers that match the level and impedance of an instrument to those of a mic input on the console.


September 9

Phase: where in the waveform you are at any moment. Hearing absolute phase is difficult, but hearing relative phase between two signals is easy. Localization in the human hearing system uses amplitude, time (phase), and frequency response. It is especially sensitive to phase: the interaural effect. Phase tells us quickly about the location of a sound as it arrives at our two ears, along with relative amplitude.
The head acts as a baffle for high frequencies, so relative amplitude is used more for localization at high frequencies. Low frequencies bend around the head, so phase is more important. (It’s why you need only one channel of subwoofer in a surround system.)

When waves coincide, energy is increased. When waves are in opposition, they cancel each other. If you take two identical complex signals and delay one—which happens with a reflection in a room or in the pinna (the outer ear)—the various harmonics will cancel or reinforce each other depending on the delay period. This is comb filtering.
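The notch positions are simple arithmetic: with a delay of t seconds, cancellations fall at odd multiples of 1/(2t). A sketch for a hypothetical 1 ms reflection:

    delay_s = 0.001                       # 1 ms reflection
    nulls = [(2 * k + 1) / (2 * delay_s) for k in range(5)]
    print([f"{f:.0f} Hz" for f in nulls])
    # ['500 Hz', '1500 Hz', '2500 Hz', '3500 Hz', '4500 Hz']
    # Change the delay and the whole comb slides along the spectrum.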
Also, tiny reflections within the pinna clue us in on directionality, because spectrum changes.
It means that the frequency spectrum of what you’re hearing changes if you move, even very slightly. So turning your head to localize a sound changes the phase, the timing, and the spectrum. We learn how to use this very well early in life.
NIH study: people who lose their outer ears (pinnae) and are given new ones have a lot of trouble re-learning how to localize sounds.
If you change the delay period, the phase cancellations move, creating the phasing or flanging effect. Sound isn't moving, but seems like it should be.

The role of the room: standing waves or “room modes” caused by phase reinforcements due to the reflections in the room. Based on dimensions of the room. The more reflective the walls, the greater the problem. Lots of techniques for minimizing these including absorbers, diffusors, "traps".
More a problem at low frequencies, since specific frequencies stand out. At higher frequencies, they blend together, not nearly as obvious.
Three types of room modes: axial (full strength), tangential (-3 dB, 1/2 the energy), oblique (-6 dB, 1/4 the energy)
Calculate them with this utility: http://www.mcsquared.com/modecalc.htm
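Or estimate the axial modes (the strongest kind) yourself. A minimal sketch, using the ~1100 ft/s speed-of-sound figure from these notes and a hypothetical 15 x 12 x 8 ft room; the axial-mode formula is f = n * c / (2L):

    C = 1100.0   # speed of sound, ft/s (approximate)

    def axial_modes(length_ft: float, n_max: int = 4):
        """First few axial-mode frequencies for one room dimension."""
        return [n * C / (2 * length_ft) for n in range(1, n_max + 1)]

    for dim in (15.0, 12.0, 8.0):
        freqs = ", ".join(f"{f:.0f}" for f in axial_modes(dim))
        print(f"{dim:>4} ft: {freqs} Hz")
    # 12 ft and 8 ft both produce a mode near 137 Hz: dimensions that
    # share factors stack their modes on the same frequencies.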

Effects of speaker placement on frequency response: in a corner or against a wall, bass is emphasized. Some speakers are designed to go in corners--their low-end response is tailored to compensate.

Transducer = converts one type of energy to another
Microphone = converts sound waves in air to alternating current (AC) voltages. A dynamic microphone has a diaphragm attached to a coil of wire suspended in a magnetic field. The diaphragm vibrates with the sound waves and induces a current in the coil which is an analog (stress the term!) of the sound wave. This travels down a wire as an alternating current: positive voltage with compression, negative voltage with rarefaction.

Dynamic/moving coil (pressure-gradient mic)
Condenser/capacitor = charged back plate + diaphragm act as a capacitor: one plate moves, the capacitance changes.
Charge comes from a battery, or a permanently-charged plate (electret), or a dedicated power supply (old tube mics), or phantom power: 48 V DC provided by the mixer (doesn’t get into the signal, because the input transformer removes it).
Ribbon (velocity mic)= Metal ribbon is suspended between strong magnets, as it vibrates it generates a small current. High sensitivity, good freq response, a little delicate, figure-8 pattern.

Boundary mics (pressure zone)
“PZM” is a Crown trademark. The mic element is very close to a wall or boundary. Hemispherical pickup; reflections off the wall are so short as to be essentially non-existent, which prevents the comb filtering caused by the usual reflections—even frequency response. Not good for singing, but good for grand piano (against the soundboard), conference rooms, theatrical use (put on the stage, padded against foot noises).

Pickup patterns: Omnidirectional, Cardioid, Figure 8 (bi-directional), Hypercardioid, Shotgun.


September 4

Characteristics of a sound:
Frequency in Hz: how many vibrations or changes in pressure per second.
Loudness in dB SPL: how much air is displaced by the pressure wave.
Timbre = complexity of waveform, number and strength of harmonics. We can change timbre with filters or equalizers.

Waveforms = simple and complex

Simple waveform is a sine wave, has just the fundamental frequency. Other forms have harmonics, which are integer multiples of the fundamental. Fourier analysis theory says that any complex waveform can be broken down into a series of sine waves.
Saw: each harmonic at level 1/n. Square: only odd harmonics, at 1/n. Triangle: only odd harmonics, at 1/n².
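Those recipes are easy to verify by building the waveforms from sine waves. A quick additive-synthesis sketch (my own illustration; it ignores the sign alternation a textbook-perfect triangle also needs):

    import math

    def partial_level(waveform: str, n: int) -> float:
        """Relative level of the nth harmonic for the classic waveforms."""
        if waveform == "saw":
            return 1.0 / n                 # every harmonic at 1/n
        if n % 2 == 0:
            return 0.0                     # square & triangle: odd only
        return 1.0 / n if waveform == "square" else 1.0 / n ** 2

    def sample(waveform: str, f0: float, t: float, partials: int = 20) -> float:
        """One sample built as a sum of sine-wave partials (Fourier)."""
        return sum(partial_level(waveform, n) * math.sin(2 * math.pi * n * f0 * t)
                   for n in range(1, partials + 1))

    for w in ("saw", "square", "triangle"):
        print(w, round(sample(w, 100.0, 0.0012), 3))   # 100 Hz wave at t = 1.2 ms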
If there are lots of non-harmonic components, we hear it as noise.
White noise: equal energy per Hz (linear scale)
Pink noise: equal energy per octave (logarithmic scale: better suited to our ears)
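A numeric way to see the difference (my own sketch: integrate each spectrum over octave bands):

    import math

    # White: flat power per Hz, so an octave (double the bandwidth) has
    # double the energy. Pink: power falls as 1/f, so octaves are equal.
    for i in range(4):
        lo, hi = 125 * 2 ** i, 250 * 2 ** i       # 125-250, 250-500 Hz...
        white = hi - lo                           # integral of a flat spectrum
        pink = math.log(hi / lo)                  # integral of 1/f = ln(hi/lo)
        print(f"{lo}-{hi} Hz: white {white}, pink {pink:.2f}")
    # White doubles each row; pink stays at ln 2 = 0.69 every octave.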

Stereo = since we have two ears. Simplest and best high-fidelity system is walking around with two mics clipped to your ears, and then listening over headphones: this is called binaural. Binaural recordings are commercially available: they use a dummy head with microphones in the earholes.
Systems with speakers are an approximation of stereo. The stereo field is the area between the speakers, and the “image” is what appears between the two speakers. If you sit too far from the center, you won’t hear a stereo image.
Multi-channel surround can do more to simulate "real" environments. Quad, 5.1 (.1 = LFE, since low frequencies are heard less directionally), 7.1, 10.1, etc. We'll do a little with it in this course.
Position in the stereo or surround field = L/R, F/B, U/D. Determined by relative amplitude, arrival time, and phase.

Perception: dynamic and frequency range of human hearing
Ear converts sound waves to nerve impulses.
Each hair or cilium responds to a certain frequency. As we get older, hairs stiffen, break off, and high-frequency sensitivity goes down. Also can be broken by prolonged or repeated exposure to loud sound.
How frequency sensitivity changes at different loudness levels: at low levels we hear low frequencies poorly, and high frequencies as well, although the effect there isn’t as dramatic.

Fletcher-Munson curve: ear is more sensitive to midrange frequencies at low levels, less sensitive to lows and extreme highs. In other words, the frequency response of the ear changes depending on the volume or intensity of the sound. When you monitor a recording loud, it sounds different (better?) than when soft.

Using filters/EQ to change frequency response: graphic, parametric, high-pass, low-pass, band-pass, notch.
EQ used to solve problems, and to be creative.
The smallest difference we can hear in a level of sound--the “Just Noticeable Difference (JND)”--is 1 dB. This changes with frequency and loudness level. We can often hear much smaller differences under some conditions, and not hear larger ones under different conditions. Also, JND changes with duration--short sounds (<a few tenths of a second) seem softer than long sounds of the same intensity.

Haas effect: precedence of the first-arriving signal. <35 ms later: the second sound is blended in. 35–50 ms: the second sound is heard as ambience. >50 ms: distinct sounds. The thresholds are lower with transient sounds like drums.

Distortion
• Bandwidth limitations
• Frequency response anomalies=like a filter or eq
• Dynamic range limitations
• Distortion caused by clipping or non-linearity: adds odd harmonics, particularly nasty (show in Reason) = harmonic distortion (see the sketch after this list)
• Crossover distortion= certain types of amplifiers, where different power supplies work on the negative and positive parts of the signal (“push-pull”). If they’re not balanced perfectly, you get a glitch when the signal swings from + to - and vice versa.
• Intermodulation distortion = frequencies interacting with each other, producing sum and difference tones that aren't harmonically related.
• Noise, hum, extraneous signals
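A sketch of the odd-harmonic point above (my own illustration, not the Reason demo): hard-clip a sine wave symmetrically and measure its harmonics with a direct DFT projection.

    import math

    N = 1024
    CYCLES = 8                       # 8 cycles of the sine in the window
    x = [max(-0.5, min(0.5, math.sin(2 * math.pi * CYCLES * n / N)))
         for n in range(N)]          # hard-clipped sine wave

    def harmonic_level(signal, k):
        """Magnitude of harmonic k of the fundamental."""
        re = sum(v * math.cos(2 * math.pi * k * CYCLES * n / N)
                 for n, v in enumerate(signal))
        im = sum(v * math.sin(2 * math.pi * k * CYCLES * n / N)
                 for n, v in enumerate(signal))
        return math.hypot(re, im) / N

    for k in range(1, 8):
        print(f"harmonic {k}: {harmonic_level(x, k):.4f}")
    # The 3rd, 5th, and 7th harmonics appear; the even ones stay near
    # zero. Symmetrical clipping generates exactly the odd series.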


September 2

Basic audio principles:

Nature of Sound waves = pressure waves through a medium = compression (more molecules per cubic inch) and rarefaction (fewer molecules per cubic inch) of air. A vibrating object sets the waves in motion, your ear decodes them. Sound also travels through other media, like water and metal. No sound in a vacuum, because there’s nothing to carry it.
Speed of sound in air: about 1100 feet per second. That’s why you count seconds after a lightning flash to see how far away the lightning is: 5 seconds = one mile. Conversely, 1 millisecond = about 1 foot.
Sound travels a little faster in warmer air (about 0.1% per degree F), and much faster in denser media: in water, 4000–5000+ fps; in metal, 9500–16,000 fps.
When we turn sound into electricity, the electrical waveform represents the pressure wave in the form of alternating current. The electrical waveform is therefore an analog of the sound wave. Electricity travels at close to the speed of light, much faster than sound, so transmission of audio in electrical form is effectively instantaneous.

Characteristics of a sound:

Frequency = pitch, expressed in cycles per second, or Hertz (Hz).
The mathematical basis of the musical scale: going up an octave = 2x the frequency.
Each half-step is the twelfth root of 2 higher than the one below it = approx. 1.0595.
The limits of human hearing = approximately 20 Hz to 20,000 Hz, or 20 k(ilo)Hz.
Fundamentals vs. harmonics = the fundamental is the predominant pitch; harmonics are multiples (sometimes not exactly even) of the fundamental that give the sound its character, or timbre.
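A quick check of those numbers, starting from A = 440 Hz (a sketch):

    SEMITONE = 2 ** (1 / 12)          # ~1.0595

    f = 440.0
    for step in range(13):
        print(step, round(f * SEMITONE ** step, 1))
    # Step 12 lands on exactly 880.0: twelve half-steps = one octave = 2x.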
Period = 1/frequency
Wavelength = velocity of sound in units per second/frequency
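Both formulas in a few lines (a sketch, using the approximate 1100 ft/s figure from these notes):

    SPEED_FT_S = 1100.0               # speed of sound in air, approximate

    def period_s(freq_hz: float) -> float:
        return 1.0 / freq_hz

    def wavelength_ft(freq_hz: float) -> float:
        return SPEED_FT_S / freq_hz

    for f in (20.0, 100.0, 1000.0, 20000.0):
        print(f"{f:>7} Hz: period {period_s(f) * 1000:7.3f} ms, "
              f"wavelength {wavelength_ft(f):8.3f} ft")
    # 20 Hz is a 55 ft wave; 20 kHz is well under an inch. One reason
    # lows and highs behave so differently in rooms and at mics.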

Loudness (volume, amplitude) = measured in decibels (dB) above the threshold of audibility (look at chart). The decibel is actually a ratio, not an absolute, and when you use it to state an absolute value, you need a reference. “dB SPL” (as in the chart in the course pack) is referenced to the perception threshold of human hearing. That threshold is obviously subjective, so it is set at 0.0002 dyne/cm², or 0.00002 Newtons/m². That is called 0 dB SPL. By contrast, atmospheric pressure is about 100,000 Newtons/m².
dB is often used to denote a change in level. The minimum perceptible change in loudness is about 1 dB. Something we hear as twice as loud is about 10 dB louder. So we talk about “3 dB more level on the drums” in a mix, or a “96 dB signal-to-noise ratio” as the difference between the highest volume a system is capable of and the residual noise it generates.
“dBV” is referenced to a specific electrical voltage, so it is an absolute measurement: 0 dBV means a signal level of 1 volt in a wire. “0 dBu” is referenced to 0.775 volts, the voltage that produces 1 milliwatt into an impedance of 600 ohms. We’ll deal with impedance later. Common signal levels in audio are referenced to these: -10 dBV (consumer gear), +4 dBu (pro gear).
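Making those references concrete (a sketch using the 20*log10 voltage convention):

    import math

    def db_to_volts(db: float, v_ref: float) -> float:
        """Voltage for a level in dB relative to v_ref."""
        return v_ref * 10 ** (db / 20)

    consumer = db_to_volts(-10, 1.0)       # -10 dBV, re: 1 V
    pro = db_to_volts(4, 0.775)            # +4 dBu, re: 0.775 V

    print(f"-10 dBV = {consumer:.3f} V")   # ~0.316 V
    print(f" +4 dBu = {pro:.3f} V")        # ~1.228 V
    print(f"gap = {20 * math.log10(pro / consumer):.1f} dB")   # ~11.8 dB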
The threshold of pain is about 130 dB SPL, so the total volume or “dynamic” range of human hearing is about 130 dB.

Waveforms = simple and complex
Simple waveform is a sine wave, has just the fundamental frequency. Other forms have harmonics, which are integer multiples of the fundamental.
Timbre = complexity of waveform, number and strength of harmonics.



©2014 Paul D. Lehrman, all rights reserved