Mastering engineers: the last resort in the production chain. Their job is to make things sound as good as possible: highly trained, great ears, best audio equipment in the business. But they are being pressured by labels to make things loud.
They are also being replaced by home-mastering tools, like Waves Ultramaximizer and DP Masterworks, which can do the same job, but without the training, taste, or subtlety.
The loudness wars
Ben Folds files: two versions, squashed and (by audience demand) unsquashed
Need to limit dynamic range, first for cars (dbx in early 1980s made car players
with switchable compressors, didn't catch on, now back), then for walkmen,
now for iPods. Unlike PCM digital recording, MP3s are noisy, and benefit from
having signal as high as possible above noise.
Desire to limit dynamic range and make songs loud as possible so they stand out on radio or internet. Louder sounds better, at least in short-term.
Engineer Bob Ohlsson:
“It all comes down to a bunch of people listening to five records, and four of them are gonna go into the wastebasket. Well, an artist, a manager, an A&R person or anybody who happens to be hanging out in that circumstance is going to quickly notice that something that is at a lower level is at a pretty big disadvantage. So there's great paranoia that drives the level thing.”
WHAT HAPPENS AFTER SCHOOL? Careers in audio (part 2)
Formatting and conforming audio for Web (Flash, etc.)
Games: translation of music>MIDI and vice versa (Guitar Hero)
Studios, project/home studios
Industrial (PA, background music)
Sequencers and performance programs
Pro and consumer audio hardware
Software companies (sequencers, instruments, plug-ins)
Hardware (instruments, audio components, computers)
Tech writing, documentation
Sales & marketing
Quadraphonic introduced early 1970s. Competing formats: QS, SQ (matrix encoded), CD-4 (subcarrier, like FM stereo, so rear channels are discrete). Only true discrete format: analog tape. Failed. Some bizarre recordings, placing the listener in the middle of the NY Philharmonic, with the orchestra lined up against the walls.
Surround took off in movie theaters, with speakers along the side walls, and
then when home theaters started to get big, moved into domestic market. Delivery
system: Video DVD with multitrack audio.
Most common: 5.1 Three front, two rear ("surround"), subwoofer (.1) for low-frequency effects (LFE). Low frequencies not perceived directionally, so you don't need discrete subwoofers. LFE must be good up to 125 Hz. Not really designed to be used for music, just for special effects in movies like Godzilla and Earthquake. Other possibilities, not standard: 6.1, 7.1, 10.2, etc.
Must mix in a 6-channel environment, although delivery systems often encode into fewer channels. Best is to use five matched full-range speakers, which can be small, + sub.
Two major systems for film: Dolby Digital (also called AC-3), uses lossy compression and reduces bandwidth by 90%. DTS compresses by about 65%. Needs to be encoded at mastering end, decoded at consumer end. Encoding software is expensive ($1K+), since the technology needs to be licensed from Dolby. When mixing for those formats, you need to have an encoder and decoder in the studio so you can check to see how it translates.
Dolby E for broadcasters: collapses 6 channels down to 2 (AES/EBU) since satellites, video recorders, and other broadcast chains are only set up for 2 channels. Lossy, but clean: Can often go through five or six generations before you notice any problems, according to Dolby.
For music, DVD-A (4.7 GB as opposed to 700 MB) uses lossless compression (Meridian Lossless Packing, about 50% data reduction) to allow audio on multiple discrete channels. Can also use higher sampling rates, but if you go too high, you sacrifice channels. Pretty much dead.
Super Audio CD (SACD) also high capacity, uses Sony's Direct Stream Digital: 1-bit, 2.8MHz sampling rate. Often dual-layer discs: SACD on top, normal CD underneath. Many of BMOP's recordings are released in this format.
Also DTS audio compressed onto standard CDs, but never caught on.
Micing: no clear way to mic in surround. Use ambience mics, do multitrack recording.
Exception: Ambisonic: four-capsule "Soundfield" microphone, uses encoding into "B-format", which is four channels: W, X, Y, Z. Can then be decoded into any number of channels. Great for classical music in a good hall.
In the basic version, known as first-order Ambisonics, sound information is encoded into four channels: W, X, Y and Z. This is called Ambisonic B-format. The W channel is the non-directional mono component of the signal, corresponding to the output of an omnidirectional microphone. The X, Y and Z channels are the directional components in three dimensions. They correspond to the outputs of three figure-of-eight microphones, facing forward, to the left, and upward respectively. (Note that the fact that B-format channels are analogous to microphone configurations does not mean that Ambisonic recordings can only be made with coincident microphone arrays.)
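The B-format description above can be sketched numerically. This is a minimal illustration of the standard first-order panning equations; the function name and angle conventions are my own, not from the lecture:

```python
import math

def encode_b_format(sample, azimuth_deg, elevation_deg):
    """Pan a mono sample to first-order Ambisonic B-format (W, X, Y, Z).

    Traditional B-format convention: W carries the mono signal at -3 dB
    (1/sqrt(2)); X, Y, Z are the figure-of-eight components. Azimuth is
    counterclockwise from front; elevation is up from the horizontal.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                  # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)   # front-back figure-8
    y = sample * math.sin(az) * math.cos(el)   # left-right figure-8
    z = sample * math.sin(el)                  # up-down figure-8
    return w, x, y, z

# A source dead ahead on the horizontal plane lands entirely in W and X:
print(encode_b_format(1.0, 0, 0))  # W ≈ 0.707, X = 1.0, Y = Z = 0.0
```

A decoder then forms speaker feeds as weighted sums of these four channels, which is why the same recording can be rendered to any speaker layout.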
Mixing: Where do you put the listener? Where do you put the sounds? In film,
dialog is in the center, music and effects on the sides, ambience and special
effects in the rear. Sometimes put dialog somewhere else where you need a character
to be moving or off-screen.
In music, not as clear: Is the rear just for reflected hall sound, or do you want the audience to feel in the middle of the orchestra/band? For electronic music, there's no objective reality. Can literally make the room spin.
If you mix a vocalist into the center, how much of that should also be in the L+R? "Focus" control in some mixers determines this. Some mixers don't use the center at all, but let the decoder create a reduced-level L+R there. Using center means sweet spot is bigger.
Music mixing usually doesn't have an LFE track. Instead, most decoders have a "bass management" feature which filters out below 80 Hz and sends it to the subwoofer. Only LFE if you are doing the 1812 Overture or need really throbbing synth bass.
Have to be careful with vocal plosives and room rumble: they may not show up on your nearfield monitors, but will end up in the subwoofer.
Most reverbs are only stereo, so you need at least two to do surround, one for front and one for rear. True surround reverbs are very expensive: Waves has a surround reverb plug-in that costs $1200—just reduced to $750.
Mixing in ProTools: use multiple sends and/or outputs. MBoxes have 6 outputs. Can use software sends to move things around (individual send controls on each channel), but much easier with a dedicated plug-in, like Neyrinck Mix 51 (on instructor machine only).
Check stereo compatibility: lots of people will be listening in stereo!
THE PRIORITIES, according to Prof. Lehrman (arguments welcome!)
When it comes to the quality or success of a recording, each item on this list trumps the ones below it.
Performer — Without a good performance, it's barely worth recording
Instrument — A great performer can make a lousy instrument sound pretty good, but it's much easier if the instrument is great too
Mic placement — Too far, too close, off-axis can ruin a recording
Room — If the instrument sounds good in the room, it's easier to make it sound good on the recording
Microphone — The right type is more important than the brand or model, but small differences in the frequency response of mics can affect how good a source sounds
Monitors — If you can't hear what you're recording, everything else is guesswork
Control room — If the control room is screwing up the monitors...see above
A/D converter / Mic preamp — It's hard, but not impossible, to find bad A/D converters and preamps. Expensive preamps have a marginal effect on the sound.
Analog mixer/channel strip — Lack of noise, lack of distortion, flat frequency response, and adequate headroom are what's needed here. Hard to find modern equipment that doesn't provide all that.
Master clock — Important if you're working with an external video source; otherwise not necessary as long as your A/D converter is good, and all of your equipment is sync'ed properly to it.
Plug-ins / Outboard — For most purposes—eq, compression, delay, reverb—the right type is more important than the right brand. Unique plug-ins like Melodyne or iZotope RX are for special purposes.
DAW — They all have all the features you need. Which one you use is a function of which one you like, and fits your workflow best.
Computer — Mac or PC? The recording doesn't know the difference. Other factors—support, ease of use, comfort level—can be important.
Cables — Good cables are important, in terms of long life, reliability, and resistance to interference. Really expensive cables are a marketer's dream and add nothing.
WHAT HAPPENS AFTER SCHOOL? Careers in audio (part 1)
Games: loops, layers, transitions
Assisting composers, esp. film/TV
Assisting studios, producers, artists
Music direction and playing for artists, theater
Books on tape/Podcasts
Audio for visuals
Why sync a computer sequencer to something else?
sync audio with video; sync multitrack tape with MIDI and hard-disk audio.
From the beginning of synthesis, musicians wanted to sync sequencers to tape recorders.
All devices must know: 1) what time it is, i.e., where we are in the program 2) when to start 3) how fast to go 4) what direction to go.
SMPTE time code: what is it? Originally for video. Analog signal accompanies video signal on a separate track, with digitally encoded information about video timing.
Follows video frame rate, 29.97003 (not 30!). Describes the beginning of each frame with a number, consisting of an 80-bit word. Lines the first bit up with the beginning of the frame. Hours, minutes, seconds, frames. When a program refers to "subframe" it's talking about bit count, or 1/80 frame. One machine is master, others are slaves. Synchronizer compares numbers, controls speed of slave machines to conform. Change in speed is called “slew”. Needs to be inaudible.
Signal is more or less a 2400-Hz square wave, with “sub-harmonics” at 1200, 600, etc. Can easily be recorded on audio tape, or audio track of videotape. Also called Linear Time Code.
Sounds awful and can blow your speakers.
Used in analog tape-to-tape sync, like multiple 24-tracks. Accuracy is close enough for analog (1/2400 sec=0.416 msec), but not for digital.
Two flavors, drop-frame and non-drop, just different ways of counting. Drop-frame is (almost) real-world accurate; non-drop uses continuous numbers.
Timecode is off by 3.6 seconds/hour (0.1 %). Drop-frame skips the first two
frame numbers (0,1) at the start of each minute, except at minutes 0, 10, 20,
30, 40, and 50:
01:03:59:29 is followed by 01:04:00:02
2 frames x (60-6)=108 frames = 3.6036 seconds (@ 29.97). Error is reduced to .0036 seconds/hour=2.59 frames/day.
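The counting rule above can be checked with a short sketch. The function name is hypothetical; the formula is the standard one implied by the drop-frame rule (skip frame numbers 0 and 1 at each minute, except every tenth minute):

```python
def dropframe_to_frame_count(hh, mm, ss, ff):
    """Convert a 29.97 fps drop-frame timecode to an absolute frame count.

    Drop-frame skips frame numbers 0 and 1 at the start of every minute,
    except minutes 0, 10, 20, 30, 40, 50, so we count at a nominal 30 fps
    and subtract the dropped numbers.
    """
    total_minutes = 60 * hh + mm
    dropped = 2 * (total_minutes - total_minutes // 10)
    return (3600 * hh + 60 * mm + ss) * 30 + ff - dropped

# The two timecodes around the minute boundary above are consecutive frames:
print(dropframe_to_frame_count(1, 3, 59, 29))  # 115085
print(dropframe_to_frame_count(1, 4, 0, 2))    # 115086
```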
Studios and broadcast stations reset timecode clocks daily.
EQ, 2 compressor/expander/gates, exciter, de-esser, transient processor, limiter. Can re-order modules on the “Graph” page.
EQ: Eight bands of bell, highpass, sharp highpass, lowpass, sharp lowpass, high shelf or low shelf filters. Adjust frequency and gain with balls, and adjust Q with handles around each ball.
A spectrum analyzer operates in the background.
Hold down the option key to create a notch filter that sweeps the spectrum, Moulton-style
Dynamics: Digital or "vintage" simulation, hard or soft knee. Multiband option gives you separate compression/gating on three different bands with adjustable crossover frequencies.
Transients: emphasize or de-emphasize transients. Individual gain for attack and sustain portions of signals. Adjustable timing of attack and sustain windows.
Exciter: adds odd and/or even harmonics in different balances. Makes sounds stick out more. Also useful when you have a sound with troublesome high frequencies--you can equalize them out, and more or less rebuild them with the exciter.
Limiter: soft or brickwall. Phase reverse one channel or both.
MIDI: What is it?
Communications protocol, developed 1983, response to growing racks of synths on stage and incompatibilities between manufacturers.
Who owns it
Public domain. A specification, not a standard, no legal or official standing. The industry agreed to support it, market forces keep it in line.
A living language: many holes in the spec for future development. Participation from all corners of the industry: hardware mfrs, software mfrs, systems designers
Seems slow by today's standards, but is still effective for its purpose.
MIDI is not music, not audio, but it is a representation of a musical performance, like a score, or a player piano roll. Every performance nuance is communicated, without the actual music. Notes, sliders, knobs, pedals, patch changes, other parameters.
Must be stored digitally — a sequencer: a list of instructions (commands) with their timings. Sequencer can be a computer, and the sequence is a computer file. Stored on disk, you can move it around between studios, or over a network. Sequence typically very small, <50k.
On stage, one keyboard could act as master and play all the others. In studio,
a central controlling sequencer could control an entire orchestra of synths.
Principle of distributed processing: central controller has performance data, while the actual sound is produced by the remote devices = Many remote devices controlled from central source.
Since performance data is broken down into gesture parameters, can isolate individual performance parameters: change key velocity, or note number, or pitchbend setting, or instrument, or rhythm without changing other performance parameters
Prepare an entire performance, change any parameter, singly or globally, at any time.
Goes in one direction: from Out to In. Can't split electrically, because voltage will drop, so Thru is a reflection of In; allows daisy-chaining.
Digital: serial, 8 bits of data. Idling means voltage is on. Start (off) and stop (on) bits. On bit is zero. (Parallel would have been more expensive. Oberheim tried it, failed to catch on.)
31.25 kbaud = 31,250 bits/sec = 3,125 bytes/sec (8 data bits + start and stop bits = 10 bits/byte)
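The throughput math, spelled out (variable names are illustrative):

```python
# MIDI serial rate: 31,250 bits/sec; each byte costs 10 bits on the wire
# (8 data bits plus a start and a stop bit).
BAUD = 31250
BITS_PER_BYTE = 10

bytes_per_sec = BAUD / BITS_PER_BYTE       # 3125.0 bytes/sec
note_on_ms = 3 / bytes_per_sec * 1000      # a 3-byte Note On takes ~0.96 ms

print(bytes_per_sec)          # 3125.0
print(round(note_on_ms, 2))   # 0.96
```

Slow by today's standards, as the notes say, but ~1 ms per note message is still fast enough for most musical purposes.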
Now often sent virtually, in system software as InterApplication communication, or on USB, Firewire, or Ethernet. Different standards for these, but not too many, so it still works.
Status or Command byte (>127, first bit is 1) is instruction, data byte(s) (≤127, first bit is zero) is value.
example: Note-on (command) + note number (value) + key velocity (value)
Command set - some commands are defined as having 2 data bytes, some have 1 data byte.
Receiving device knows what to expect. Incomplete command is usually ignored.
Command byte is in two parts. First part is command, second part is channel #:
1xxx nnnn. nnnn goes from 0 to 15, but we call them 1 to 16
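A quick sketch of splitting a status byte into its two nibbles (the function name is made up for illustration):

```python
def parse_status(status):
    """Split a MIDI status byte into its command and (1-based) channel."""
    assert status >= 0x80, "status bytes have the high bit set"
    command = status & 0xF0        # upper nibble: message type
    channel = (status & 0x0F) + 1  # lower nibble: 0-15, spoken of as 1-16
    return command, channel

print(parse_status(0x91))  # (144, 2): Note On (0x90) on channel 2
```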
Early MIDI devices only read one channel at a time, ignoring data on other channels (some, like drum synths, still do). Means you can use different devices on the same MIDI cable.
Note on: [144+n] (9n) + note number + velocity
Note off: [128+n] (8n) + note number + velocity
Controllers: [176+n] (Bn) + controller number (mod wheel, volume, pan, sustain) + value
128 in all (numbered 0-127) per channel.
Many controllers defined, some as transmitters (mod wheel), some as receivers (volume), some as both (sustain pedal).
Program change: [192+n] (Cn) + value, 0-127, often but not always called 1-128.
Calls up a register in the synth's memory. If you have more than 128 programs (today, most do!), you use a Bank Select message (Controller #0) before the Program Change.
Difference between velocity (affects onset of note only) and volume (can affect sound continuously). Velocity=how loud the instrument is played. Volume=how high the fader is.
Pitchbend: [224+n] (En) + LSB + MSB (the LSB data byte is sent first; together they form a 14-bit value, 0-16383). "Zero pitch bend" is actually an MSB value of 64 (center = 8192). Many sequencing programs describe pitchbend as +/-64, but the raw MSB values are 0-127.
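Assembling the three bytes of a Pitch Bend message, as a sketch (function name and the channel default are my own):

```python
def pitch_bend_bytes(value, channel=1):
    """Build a 3-byte Pitch Bend message from a 14-bit value (0-16383).

    8192 is center (no bend). The LSB data byte is sent before the MSB;
    each data byte carries 7 bits, so the high bit stays clear.
    """
    assert 0 <= value <= 16383
    status = 0xE0 + (channel - 1)  # En: pitch bend on channel n
    lsb = value & 0x7F             # low 7 bits
    msb = (value >> 7) & 0x7F      # high 7 bits
    return bytes([status, lsb, msb])

print(pitch_bend_bytes(8192).hex())  # 'e00040': center bend, MSB 64, LSB 0
```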
Channel pressure/aftertouch: [208+n] (Dn) + value
Key pressure/polyphonic aftertouch: [160+n] (An) + note number + value
rare: expensive to implement, uses a lot of bandwidth
The role of mastering: Like finish carpenters: another pair of ears to listen to your music. Use gain, eq, compression to make tracks sound their best, and sound consistent or at least compatible from one track to the next. Best ears in the business are mastering engineers.
CD mastering in Peak
Open all files, normalize them. Use Playlist to assemble tracks on CD, adjust pauses between tracks (or overlap), burn to CD.
Melodyne, handcrafted pitch correction without artifacts. Can move notes by hand (use option-drag for microtones) or "quantize" them to a chromatic or diatonic scale. Can specify how much correction to apply, and how far to let notes drift (e.g., vibrato).
Digital reverbs: two types:
Simulation: uses discrete delays for early reflections, closely spaced decaying delays for the tail.
Convolution: record an impulse in a room--pistol shot, balloon breaking, sweep. Measure all the reflections with stereo mics. Build a model, using delay lines, of all of the reflections. Send your signal through that.
Gated reverb: Phil Collins effect: when reverb signal goes below a certain level, it cuts off abruptly.
Plug-ins, formats: application native, VST, AU, TDM, RTAS, MAS, DirectX, etc.
Automatic volume control, keep instrument dynamic ranges limited. Also creative: increase sustain on some sounds like guitar, cymbals, drums, bass. Brings loud sounds down AND, with make-up gain, brings soft sounds up.
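The static gain law of a downward compressor, as a sketch (function name, threshold, and ratio defaults are illustrative, not from the notes):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain curve of a downward compressor, levels in dBFS.

    Below threshold the signal passes unchanged; above it, each dB of
    input yields only 1/ratio dB of output. Make-up gain would then be
    added on top to bring the whole (now quieter) signal back up.
    """
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-8.0))   # -17.0: 12 dB over threshold comes out 3 dB over
print(compress_db(-30.0))  # -30.0: below threshold, untouched
```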
When used with a side-chain: ducker. Amplifier listens to an input signal that’s not the main input. Voice announcement system, radio station.
“Brick-wall”, compressor with infinite ratio. Prevents signal from going too high. Used in broadcasting, mastering, wherever signals absolutely can’t exceed a threshold. Applied too thickly makes recordings sound louder without being so, but removes dynamic range. Different from normalizing, in which level is maximized but dynamic range is maintained.
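The limiting-vs-normalizing distinction can be shown with a toy sketch. Crude hard clipping stands in for a real brickwall limiter (which would use lookahead and gain smoothing); names are illustrative:

```python
def normalize(samples, peak=1.0):
    """Scale so the loudest sample hits the target peak; dynamics preserved."""
    top = max(abs(s) for s in samples)
    return [s * peak / top for s in samples]

def brickwall(samples, ceiling=0.5):
    """Infinite-ratio limiting, crudely: nothing passes the ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

sig = [0.1, -0.4, 0.8]
print(normalize(sig))  # peak scaled to 1.0, relative balance unchanged
print(brickwall(sig))  # [0.1, -0.4, 0.5]: only the peak is flattened
```

Note the asymmetry: normalizing multiplies everything by one constant, so dynamic range survives; limiting changes gain only at the top, so the loud-to-soft distance shrinks.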
Compressor with negative ratio. Used to increase dynamic range by forcing lower sounds lower. Not used much in recording except as:
Compressor with infinite negative ratio, used to remove low-level sounds completely. E.g. Guitar amp noise. Used on stage for vocal mics, and in studio to isolate drums.
Side-chain: Keyer. Used to make one sound follow another, like noise generator or ambience track following a drum beat.
Equalization: Resonant filter is a sharp cutoff, usually low-pass, that has a peak right before the cutoff point. Used in synthesis a lot. Movable resonant filter: wah-wah pedal.
Delay: single slaps, loops (w/feedback), static comb filter: some frequencies reinforce, some cancel. Feedback deepens notch.
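Where the comb's notches fall for a given delay, as a small sketch (assumes an equal mix of dry and delayed signal; names are illustrative):

```python
def comb_notches(delay_ms, max_hz=20000):
    """Notch frequencies of a static comb filter.

    A delayed copy mixed with the dry signal cancels wherever the delay
    equals an odd number of half-periods: f = (2k + 1) / (2 * delay).
    Shorter delays space the notches further apart ("fewer teeth").
    """
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) * 1000.0 / (2 * delay_ms)  # delay given in ms
        if f > max_hz:
            return notches
        notches.append(f)
        k += 1

print(comb_notches(1.0, 5000))  # [500.0, 1500.0, 2500.0, 3500.0, 4500.0]
```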
Phasing & flanging: Early phaser used all-pass filter, which introduced phase shift at different frequencies. Changing the corner frequency of the filter changed the frequencies at which phase shift occurred. So you could sweep it with an LFO. Effect was subtle: usually needed to chain several of them--at least four--to create enough effect. Notches in the frequency spectrum were not related to each other.
Flangers. Got their name from early technique: needed 3 tape recorders: source deck, processing deck, record deck. As signal went through proc deck, put a finger on the flange of one reel to slow it down slightly, would create a delay.
Now use digital delays that change over time, with an LFO that sweeps between two delay times. Short delays make filters with more teeth; feedback adds resonance, to the point where it can oscillate.
Mixing: Use pan, eq, delays to give each instrument its place in the frequency and spatial spectrum.
Monitoring while recording:
Use the headphone output on the 01V96. The fader positions will affect what you hear, but not what goes into ProTools.
If you need more than one headphone, or more than one mix, press the Aux 1 button on the 01V96, and plug the headphone splitter box into Omni Out 1. Now the faders will set up an Aux mix, and you can monitor it from the splitter box (again, it doesn't affect what goes into ProTools).
To use a click track in ProTools: create an instrument track, and insert MetroTL plug-in. This lets you set up a metronome. To set the tempo in the ProTools session, set the ruler in the edit window to Bars and Beats, and insert tempo changes by control-clicking in the tempo line of the ruler. Connect a TRS-TRS cable from the 003's headphone output to an unused input (9 or higher) on the O1V96 so you can monitor the click.
To overdub in ProTools: Connect a TRS-TRS cable from the 003's headphone output to an unused input (9 or higher) on the O1V96, and you can hear the existing tracks while you create new tracks from inputs 1-8.
Reverb: dry recordings and synthetic sounds have no sense of space, so we add it artificially.
In ProTools: create Aux track with Bus 1-2 as input, insert Reverb plug-in, add Send (to Bus 1-2) on all tracks you want to apply reverb to. Sends allow different amounts of reverb to be applied to different tracks. Aux channel level is “return”: amount of overall reverb.
Standard model of reverb: Direct sound, pre-delay (first reflection: distance of source from nearest wall), early reflections, tail (RT60). To make vocals clear, use longer (>50ms) pre-delay so that reverb doesn’t overwhelm the track.
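A rough way to relate pre-delay to the source's distance from the nearest wall (a deliberate simplification that considers only the extra path length of a single reflection; the constant and function name are mine):

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def pre_delay_ms(extra_path_m):
    """Pre-delay implied by a first reflection's extra path length.

    E.g. a source 4 m from the nearest wall: the reflected path is
    roughly 8 m longer than the direct path, so the first reflection
    arrives about 23 ms after the direct sound.
    """
    return extra_path_m / SPEED_OF_SOUND * 1000.0

print(round(pre_delay_ms(8.0), 1))  # 23.3 (ms)
```

This is why a >50 ms pre-delay on a vocal reads as a large room: the brain hears the dry voice clearly before the reverb arrives.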
Original types: Chambers, Mechanical (plate, spring)
Sometimes use more than one reverb: special reverb on some instruments (e.g., drums), then overall reverb on everything to make it sound as if it’s all in the same space.
Multitrack Session planning
The ideal: band is well-rehearsed, comfortable with headphones. You have a studio big enough and with enough isolated areas to accommodate them, and enough great mics and inputs to record them all at once, and some room mics too, and yet have good isolation between tracks.
Set a good balance, and record them. Afterwards, record fixes and overdubs as necessary.
Get the final balance in the mix.
The real: You can only do a few tracks at a time.
Recording to multiple tracks
For pop music, start with drums and bass and a guide track, since recording the rhythm section to an existing melody or even guitar part is very difficult (the guitarist has to have great timing).
Guide track can be vocal to get good feel in rhythm section. Record the vocal well—you may keep it.
Click track? Make an instrument track with "TL Metro" inserted, turn on Metronome in ProTools MIDI controls transport window.
If you use a click track and know the tempo, you can use the tempo scale to edit with—it's much easier to move things around in bars and beats than in minutes and seconds. But if the tempo fluctuates at all, the tempo scale will get in the way, and you'll need to construct a conductor track.
Set up a cue mix on the 01V96 for live recording and for overdubbing. Hints and suggestions:
Mark places in the room with tape on the floor and/or take pictures, so if you have to come back you can duplicate your setup.
Try to monitor as far away from the sound sources as possible. The headphone splitter boxes can be extended using standard mic cables.
Be careful handling the mics. Someone left an EV mic in a precarious position in the closet, someone else knocked it over, and now it's damaged, although still usable.
Get set up well before your talent shows up. Nothing is more frustrating to a musician than sitting around while the tech staff tries to solve some esoteric problem. And if they want to practice or noodle in the room, you can't concentrate on what you're doing.
Give the musicians time to warm up before you do an official take, but record the warmups if you can.
Take breaks. Give yourself and the talent time to breathe, so you don’t get so caught up you don’t realize when something is not right.
Mix Automation. Early systems wrote the data onto a tape track. Later locked to SMPTE timecode, stored in internal computer. Analog consoles use voltage-controlled amplifiers: VCAs. Extra amplification stage, have to be very quiet and change gain smoothly and noiselessly. Dbx made excellent VCAs. Write, read, update. Moving faders. Digital mixers that read MIDI can now use external editor. Virtual mixers do it all in software. Hardware devices like Mackie HUI simulate console surface, actually don’t pass audio: just send and receive MIDI to virtual mixer. Also do transport control.
Two ways to do EQ:
• Start with a very low Q tuned to the approximate frequency range you think you want to change, then very gently boost or cut until you think you've got approximately the right tonal color, and finish by trimming the bandwidth down to just the width you want.
• Start with a high-Q band boosted grossly (even if you ultimately want to cut it) and sweep across the frequency range until you find the exact frequency that your ear says to change. Once you've located it, start trimming the amount of boost (or start cutting, if that's what you want to do), and at the same time keep increasing the Q (narrowing the bandwidth) until you've got just the right timbral quality.
I prefer the latter technique for working on individual tracks and the former for more generalized eq on submixes or stereo recordings.
Nady ribbon mics: bi-directional, equal pickup front and back. Do NOT use phantom power. Good for drum overheads if the ceiling isn't too low.
Loop recording for multiple takes. Each take is stored as a separate file in bin.
Analog recording theory/history
Cylinder, wax/lacquer/vinyl discs
78s, 4-5 minutes on a side.
33 (Columbia) and 45 (RCA) came out at the same time, 45 ended up being used for singles.
Stereo discs: cutter head with two coils at 45° angles to vertical. Had to restrict dynamic range or the playback stylus might jump out of the groove. Had to put bass in the center of the stereo image. Four-step process: master lacquer or acetate (positive), metal master or father or matrix (negative), mother (positive), stamper (negative), disc.
To maximize playing time, the distance between grooves (“pitch”) could change, so softer passages could be closer together. When tape became the mastering medium, a separate playback head could look ahead and determine pitch.
Wire, tape, multitrack tape
Tape recorder basics: tape formulas, heads, transport, bias, noise reduction
Tape records waveform voltages by aligning magnetic particles or “domains” on the tape in step with the changing voltage. The "head" is a transducer between AC voltage and a fluctuating magnetic field. Tape is iron oxide or similar on plastic. The break in the head's magnet, where the field reaches the tape, is called the "gap". As the voltage changes, the orientation of the particles changes.
High-frequency response limited by size of gap and size of domains: smaller gap means more particles per inch of tape. Finer particles mean more particles per square inch of tape. Also, speed of tape big factor = number of particles per unit of time. Professional tape speeds: 15 and 30 inches per second on 1/4" tape. Consumer (old): 3-3/4 and 7-1/2 ips. Analog cassette: 1-7/8 ips on 1/8" tape.
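The speed/wavelength relationship can be made concrete with a small sketch (unit choice and function name are mine):

```python
def recorded_wavelength_mils(tape_ips, freq_hz):
    """Wavelength of a tone as recorded on tape, in mils (1/1000 inch).

    Higher tape speed spreads each cycle over more tape, so the head gap
    and particle size matter less for high frequencies.
    """
    return tape_ips / freq_hz * 1000.0

# A 20 kHz tone at professional vs. cassette speed:
print(round(recorded_wavelength_mils(15, 20000), 5))     # 0.75 mil at 15 ips
print(round(recorded_wavelength_mils(1.875, 20000), 5))  # 0.09375 mil at 1-7/8 ips
```

At cassette speed the recorded wavelength approaches the size of the head gap and the particles themselves, which is why slow, narrow formats struggle at the top end.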
Dynamic response is limited: inherent noise caused by random orientation of particles ("hiss") means sounds cannot go much lower than noise level. Top end limited by "saturation": if magnetic particles are pushed too much, they resist, and the waveform will be distorted. Under controlled conditions, this can actually add to the "warmth" or immediacy of the sound, but often it just makes it sound nasty.
Dolby noise reduction is a scheme for boosting certain frequencies on record and reducing them on playback to lower noise and increase dynamic range. Best we can ask for in analog tape is about 70 dB of dynamic range. As tape widths and speeds went down, needed it more. Four types: A, B (consumer), C (better consumer), SR. New Dolby formats refer to film and transmission codecs.
Competing system: dbx. Still used in stereo broadcast TV.
Multitrack needed wider tape: 1/2", 1", 2". Fostex and Tascam bucked the trend.
Equalization
Unless you are trying to eliminate a specific frequency, or boost it, eq is generally used to manage formants, not individual notes. Formants of an instrument remain much the same regardless of what pitch you're playing, and include all noise components and harmonics. Often 200-800 Hz range is where "mud" is. Backing this off on some instruments can make tracks clearer.
To emphasize a bass drum, don't boost at 125 Hz; boost at 2 kHz to bring out the transient "ictus".
Boosting any part of the spectrum by a large amount will boost the overall level of the signal. Compensate with an attenuator just before the EQ stage to keep any part of the system from overloading. Cutting a part of the signal doesn't affect the level as much, and cutting a small part of the signal is often inaudible, unless you are dealing with a single rogue frequency.
Use judiciously. A) Correct problems--except try to do that at session with mic placement and room treatment. B) Keep instruments from interfering with each other.
Original digital audio formats used video tape: the restriction was that you could only edit on video frame boundaries, i.e. every 33 msec. Hard to do musical edits! Modern systems let you edit with 1-sample resolution.
DAT: 2-track tape, very robust and inexpensive. It was killed in the consumer market by RIAA lobbying for a law requiring a Serial Copy Management System (SCMS) chip in consumer units, preventing copies of copies. So no manufacturer took it into the consumer market, and the medium never caught on. Eventually superseded by CD burners.
Digital disc vs. digital tape
Tape is sequential, disc is random access.
With tape there is a direct correlation between the number of physical inputs, the number of tracks, and the number of physical outputs.
With disc, there is no correlation: inputs and outputs are determined by the audio interfaces, and track counts can be much higher, determined by the speed of the CPU and the throughput of the disc.
Second assignment
For this assignment, the audio goes from the mixer through an S/PDIF (RCA) cable to the 003 interface. "Scene 02" on the Yamaha mixer sends all input channels to the S/PDIF 2-channel digital output.
New template file in ProTools: 2-track S/PDIF input.
Make sure I/O is set up so the S/PDIF input in ProTools is assigned to inputs 1-2. Make sure the clock is set to S/PDIF (RCA). To test the signal path, use the UTILITY page on the mixer and turn on the oscillator.
Mic setups
Use the pop filter on vocals--always!
Single mics on most instruments. First, place the instruments in the room so that they sound good. Then place the mics where they sound good.
Observe the 3-to-1 rule: Mic must be at least three times further away from something it’s not picking up than what it is picking up. Sometimes phase switch on mixer (press Ø/insert/delay button to bring up phase switches on screen) can help with leakage problems.
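The arithmetic behind the 3-to-1 rule, assuming simple inverse-square spreading from a point source (an idealization; real rooms add reflections):

```python
import math

def leakage_drop_db(distance_ratio):
    """Level drop from inverse-square spreading for a point source.

    Each doubling of distance loses about 6 dB; at 3x the distance the
    leakage into the "wrong" mic is down roughly 9.5 dB relative to that
    mic's intended source, enough to keep comb filtering tolerable.
    """
    return 20 * math.log10(distance_ratio)

print(round(leakage_drop_db(3), 1))  # 9.5 (dB)
print(round(leakage_drop_db(2), 1))  # 6.0 (dB)
```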
Use stereo micing (pattern of your choice) for piano, drums, marimba, large instruments. Also consider using stereo room mics as well.
Musicians should balance themselves before you set balances. They must be able to hear each other. You will mix in the mixer, recording the mixer's output to two tracks. All channels except kick drum or bass: set the Yamaha low-band Q all the way to the left to "HPF", and roll off below 71 Hz. Eliminates room noise.
Live mixing: Pan individual instruments where you want them. They should be more or less in the same place as they are in the room. If that's not possible (instruments are against two walls or in a circle around the mics) then adjust intelligently.
Soloing: in place (meaning in their pan position). Use to analyze individual mics. Only affects monitor and headphone outputs, doesn't affect stereo bus, so you can do it during a take.
Aliasing: caused by trying to sample frequencies above the Nyquist frequency (sampling rate/2). Produces non-harmonic distortion.
Dither: noise injected so that the lowest bit is never dropped. Originally used broadband white noise; then they found you could shift the noise into inaudible high frequencies to the same effect: noise shaping.
D-A converter: creates signal voltages from samples; uses sharp reconstruction filters to smooth the stairstep edges of the waveform.
Digital recording formats
Stereo: Sony PCM-F1, PCM-1610: used video tape, either 1/2" or 3/4" -- could only edit on frame boundaries, 33.3 msec resolution.
DAT: was killed in the consumer market by RIAA lobbying for law requiring SCMS chip in consumer units, so no one made any.
CD-ROM, CD recorders, hard-disk.
Multitrack: Mitsubishi, Studer, Tascam, Sony PCM-3324 and -3348. Most of them long gone. Replaced by ADAT, and to some extent by Tascam DA-88 (didn't do as well: price point was higher and introduction was a few months later).
ADATs could easily be combined and controlled by a single controller, which acted as if it were a 32-track deck. But ADATs broke down a lot. High-end formats: HDCD, SACD, DVD-A.
Transmitting digital audio
Stereo: AES/EBU = AES Type I Balanced – 3-conductor, 110-ohm twisted-pair cabling with an XLR connector, 5-volt signal level
S/PDIF = AES Type II Unbalanced – 2-conductor, 75-ohm coaxial cable with an RCA connector, used in consumer audio, 0.5v
The two data sets are almost identical. You can easily convert from one to the other with a simple voltage gain or drop.
TOSLINK = AES Type II Optical – optical fiber, usually plastic but occasionally glass, with an F05 connector.
Multitrack: ADAT Lightpipe, 8 channels on optical fiber. Same cables as TOSLINK, but not compatible
Tascam TDIF, 8 channels on DB25 connector, same as original SCSI spec.
MADI: coaxial (BNC connector) or optical (wider than TOSLINK), 48 or more channels, used in older multitrack decks and high-end installations. Making a bit of a comeback with multichannel digital consoles.
Clocking
When using multiple digital sources, they must share a common clock, or else there will be clicks where the clocks are out of sync and samples are dropped. So there must always be one master.
Word clock signal can be generated by one device, and fed through the others, or fanned out to the others.
Or, if all devices are capable of syncing to incoming digital audio stream, you can daisy-chain them.
Tom says: Master clock when you're recording should be the device that is doing the analog-to-digital conversion, so that jitter is minimized.
Take a sample of the signal voltage and write it down as a number.
Issues: how often (sample rate), how accurate is the number (word length), how accurate is the sample clock (jitter).
Analog-to-Digital (A-to-D) converter does this.
Nyquist theorem: highest frequency sampleable is 1/2 the sampling rate. If you go too high, you get aliasing.
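A quick way to see where an out-of-band frequency lands (a sketch, not from the notes; the helper name is mine):

```python
# Sketch: where a frequency above Nyquist "folds" back into band.
# The sampled alias is the distance to the nearest multiple of fs.

def alias_frequency(f, fs):
    """Frequency actually captured when sampling a tone of f Hz at fs Hz."""
    f = f % fs                 # the alias pattern repeats every fs
    return min(f, fs - f)      # fold around the Nyquist frequency (fs/2)

# A 30 kHz tone sampled at 44.1 kHz shows up at 14.1 kHz --
# a non-harmonic "ghost" unrelated to the original pitch:
print(alias_frequency(30_000, 44_100))   # 14100
```

Anything at or below Nyquist passes through unchanged; anything above it reflects downward, which is why the distortion is non-harmonic.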
Word length: the number is binary, so the number of bits determines the range. With 10 bits, you get 0-1023. With 16, 0-65535. The difference between the analog input and the digitized signal is called quantization noise.
Dynamic range of a digital audio system in dB = highest level possible/quantization noise level
= (6.02 * number of bits) + 1.76. Usually approximated to (# of bits * 6). So 16-bit system has theoretical dynamic range of 96-98 dB.
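That arithmetic is easy to check in a few lines of Python (a sketch; the function name is mine):

```python
# Sketch: theoretical dynamic range of linear PCM.
def dynamic_range_db(bits, exact=True):
    """(6.02 * N) + 1.76 dB, or the common 6-dB-per-bit shortcut."""
    return 6.02 * bits + 1.76 if exact else 6.0 * bits

print(round(dynamic_range_db(16), 1))      # 98.1  (the "96-98 dB" figure)
print(dynamic_range_db(16, exact=False))   # 96.0
print(round(dynamic_range_db(24), 1))      # 146.2 (why 24-bit has so much headroom)
```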
Pro Tools:
Grouping tracks. Regions. Editing modes (slip/shuffle). Adjusting levels.
Analog, digitally-controlled analog, digital, virtual
Studio layout: control room, live room, iso booth
Studio wiring: input panels, monitor outputs, cue systems
• Input selector—mic and tape inputs usually hard-wired through patchbay (normalling)
Mic and line inputs, balanced
Mic level: ~2 mv
Line levels: -10 dBV = 0.316 V or +4 dBu = 1.228 V
0 dBV=1 V RMS without impedance reference (usually high)
0 dBu=0.775 V RMS (corresponds to dBm, which is across 600Ω load)
Pro consoles usually +4, semi-pro -10 or switchable.
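The dBV/dBu reference voltages above reduce to one formula, V = Vref × 10^(dB/20). A Python sketch (the function names are mine):

```python
import math

# Sketch: converting between dB references and RMS voltages.
# 0 dBV = 1 V RMS; 0 dBu = 0.775 V RMS.
def db_to_volts(db, ref):
    return ref * 10 ** (db / 20)

def volts_to_db(volts, ref):
    return 20 * math.log10(volts / ref)

print(round(db_to_volts(-10, ref=1.0), 3))    # 0.316 V -- consumer -10 dBV
print(round(db_to_volts(4, ref=0.775), 3))    # 1.228 V -- pro +4 dBu
print(round(volts_to_db(1.0, ref=0.775), 1))  # 2.2 -- 1 volt on the dBu scale
```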
• Input trim for adapting to input level, pad (-20dB)
Important to have all amplifier stages operating in optimum range! Avoid noise pickup and distortion: “Proper gain staging”
• Mic preamps (virtual consoles use outboard interface including mic preamps and A-to-D converters)
Some prefer outboard mic preamps to built-in preamps; convert to line level or to digital.
• Hi and low cut filters, for room noise, hiss, sibilance, mic proximity effect
• EQ, simple or parametric, in/out switch
• Compressor/Gate. Smoothing out levels on vocals, basses, drums; gate for isolating drums, other track leakage.
• Output assignments: to tape or hard-disk tracks. Select an output bus or “direct”. Output buses can also be used for sub-mixes: e.g., group all the drums to two faders.
• Aux sends for processing. Aux buses allow multiple tracks to go through a single processor, so all tracks get same reverb for example, but you can adjust how much reverb is added to each track. Pre/Post-fader switch=usually set to post-fader so effects go away when track is faded out.
• Aux sends for Cue: musicians on headphones. Pre/Post switch=set to pre-fader so cue mix is independent of main mix
• Monitor mix: control room, studio playback. Solo button isolates source in monitors, but doesn’t change assignments or mix. “Solo in place” keeps pan position, otherwise comes up in mono.
Your mic pattern: use the results of last week's experiment. How much "center" do you need? Is tonal balance or spatial placement more important?
If you use M-S, when you are playing back, duplicate the Side channel onto
another track. Phase-reverse it using the Short Delay plug-in set to 0% delay and 0% mix, and then group both Side faders. The level of the Side faders determines the width of the stereo image.
Bouncing to AIFF: Bounce to Disk. 16-bit, 44.1 kHz, interleaved. The mix happens in real time, and it will be stored in the Audio Folder of the session.
Speakers:
Of all the components in an audio system, these have by far the worst frequency response and distortion. Physics of moving air is difficult. The perfect speaker would weigh nothing and have infinite rigidity. The spider which holds the cone against the magnet would weigh nothing and have infinite flexibility. The space inside the cabinet would be infinite so that nothing impedes the movement of the cone.
Break up the spectrum into components that work best over a limited range: woofers, tweeters, midrange, sub-woofers.
Directivity: low frequencies spread out more, high frequencies are localized, “beamed”.
Time-aligned: tweeter is delayed or set back to compensate for depth of woofer cone. Theory says this preserves transients, prevents phase interference between drivers at overlapping frequencies.
Concentric drivers sometimes used for time/space alignment.
Passive vs. active speakers: where does crossover go? Bi-amplification.
Sensitivity: output SPL per watt input.
Other specs: freq response, THD, maximum power; often misleading.
Damage: causing woofer cone to go too far can tear it or pull it off its mount. Sending high-frequency distortion products to tweeter can damage it.
Near-field: small speakers up close to minimize room effects.
How to use speakers in practical situations? Get used to them! Listen to music that you know on them, so your ear can make comparisons.
In a studio, use multiple speakers to monitor the recording and especially the mix: high-end and low-end. Auratones and Yamaha NS-10s are popular for simulating home hi-fi, television, and car systems. Tissue paper in front of the NS-10 tweeter?
Power Amplifiers: matching to speakers. Impedance (= resistance at audio frequencies, in ohms). Damping factor: ratio of speaker impedance to source impedance; determines how well the amp controls mechanical resonances. A high damping factor acts as a "brake" on the cone; a low damping factor means it can ring. So you want output impedance low (typically 0-1Ω) and speaker impedance high (8Ω down to 2Ω).
Many amplifier manufacturers state power levels going into a low-impedance load, which makes them look more powerful.
Headphones: open (foam), closed (Koss), semi-closed (lighter plastic), noise-cancelling (Bose).
Can be more accurate, move much less air so elements are lighter, no room effects.
Problem: interaural bleed is gone, so stereo image is very different from speakers. Processors beginning to appear that simulate speakers in headphones. Ear buds: no isolation, low dynamic range, less low-freq response. Getting better! Watch out for exaggerated LF response. Watch SPL!!
In-ear monitors: Isolated, advantage is less sound on stage getting into FOH system. For bass players and drummers, often combined with speakers or throne drivers, e.g. “Buttkicker”
ProTools on the recording cart
Simple Yamaha 01V mixer controls: mic trims, panning, setting levels, phantom power. Get a "green" signal without a red light. Level should top out between -12 and -18.
Fader positions don’t matter!
Opening and logging into MacBook Pro.
Recording into ProTools. 44.1 kHz, 16-bit. Create a folder with your names on it on the laptop EXTERNAL drive, and put the session in there. Move the entire folder onto a lab computer with a flash drive.
Mics: Two Audio-Technica multipattern and two Electro-Voice N/D267a dynamic cardioids with stands and cables are in the closet.
You can use Fisher (but not when there's a musical event going on in Distler), room 24, 27 (only when 24 is booked by someone else), 155, 251, or 271. Reserve the room and the recording cart with the music office at least 24 hours ahead of time. When you are ready to go into the room, find the practice room monitor to open the closet and the room for you.
Go through mixer channels 1 & 2--comes up in ProTools as inputs 1 and 2.
Do not use any eq or processing. Try different instrument and mic positions. Edit if you need to. The goal is to make something that sounds realistic, and good.
Anthony / Kip / Michael Ferdico
Neil / Shayne / DJ
Michael Nuzzolo / Case / Jasper
Sarah / Henning / Nick
Micing
Good positioning is always better than trying to fix things later. Good positioning means the phasing is favorable: phase problems are hard to fix with eq!
Mics need to be closer than our ears would be, since we don't have the visual cues to tell us what to listen for, and mics can't distinguish between direct and reflected sound. We always want more direct sound in the recording. We can add reflections (echo/reverb) later, but it's impossible to remove them.
Listening to the instruments in the space: finding the right spot to record. Get the room balance in your ear, then take two steps forward and put the mic there.
3-to-1 rule: when using multiple microphones, the mics need to be at least three times as far away from each other as they are from their individual sources.
Winds & Strings: at least 3 ft away from the source if possible, except when that would violate the 3-to-1 rule.
String sections: mic in stereo as an ensemble, not meant to be a bunch of soloists.
Horn sections can go either way: mic individually, or, if there is enough isolation from other instruments, as a section.
Guitar: an exception, since we are used to hearing close-miked guitars. But there is no one good spot on the guitar, since sound comes from all over the instrument: soundhole (too boomy by itself), body, top, neck, headstock. Best to use 2 mics, or, if the room is quiet, mic from a distance.
Piano: an exception, since pianists like the sound of the instrument close up--it doesn't really need the room to expand. Different philosophies for pop and classical. Follow the 3:1 rule on the soundboard, or even better 5:1, since reflections are very loud and the phase relationships very complex. Can use spaced cardioids, spaced omnis, or coincident cardioids, in which case you want to reposition them for the best balance within the instrument (bass/treble).
Drums: first of all, make them sound good! Tune them, dampen rattles, dampen the heads so they don't ring as much (blanket in the kick drum).
Three philosophies--Choice will depend on spill, room sound, and how much power and immediacy you want in the drums.
1) stereo pair overhead (cardioid or omni); good for jazz, if you don’t mind some spill, or if they’re in a good-sounding isolation room.
2) add kick (dynamic or high-level condensor) and snare mics for extra punch and flexibility
3) add mics to everything. Complicates things because of spill, may have to add noise gates (isolating individual drums) later.
Mic techniques for stereo: none of them are perfect! XY, ORTF, DIN, NOS, MS, spaced omni, spaced cardioid, Decca tree
Boundary mics (pressure zone mics): the "PZM" name is owned by Crown. The mic element is very close to the wall. Hemispherical pickup; reflections off the wall are very short, essentially non-existent, which prevents the comb filtering caused by the usual reflections: even frequency response. Not good for singing, but good for grand piano (against the soundboard), conference rooms, theatrical use (put on the stage, with a pad against foot noises).
How pickup patterns are designed into mics:
Omni: the diaphragm is open to the air only on the front, but it responds to pressure arriving from any direction. Slight shadow effect on sound from the rear, but otherwise truly omni at all frequencies.
Cardioid: sound from the rear reaches the rear of the mic before it reaches the front. So this mic has ports that feed sound from the rear to the back of the diaphragm with a slight delay--created by a labyrinth or a material that slows down the sound--so that rear sound arrives at the front and back of the diaphragm at the same time and cancels itself out. Sound arriving from the front causes the greatest difference in pressure between front and back; sound from the sides and rear, the least.
Figure 8: one diaphragm open to both sides. Sounds from the side arrive at both sides at the same time and cancel each other out.
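Everything from omni through figure-8 is a blend of those two mechanisms, and can be written as one polar equation, a + (1 − a)·cos(θ). A Python sketch (the pattern coefficients are standard textbook values, not from the notes):

```python
import math

# Sketch: first-order pickup patterns as one formula.
# a = 1 is pure pressure (omni); a = 0 is pure pressure-gradient (figure-8).
PATTERNS = {"omni": 1.0, "cardioid": 0.5, "supercardioid": 0.37,
            "hypercardioid": 0.25, "figure8": 0.0}

def response(pattern, degrees):
    """Relative sensitivity at a given angle off axis (1.0 = on-axis)."""
    a = PATTERNS[pattern]
    return a + (1 - a) * math.cos(math.radians(degrees))

print(round(response("cardioid", 180), 6))   # 0.0 -- full rejection at the rear
print(round(response("figure8", 90), 6))     # 0.0 -- null at the sides
print(round(response("omni", 270), 6))       # 1.0 -- no rejection anywhere
```

Negative values (e.g. the hypercardioid's rear lobe) mean the polarity of the pickup flips behind the mic.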
Hypercardioid, supercardioid, shotgun: variations using different types of ports.
Off-axis frequency response is not linear: it changes as you move off axis, causing coloration of off-axis instruments. When using multiple mics in a setup, you have to be careful with this. Experiment: powered speaker hooked up to another computer (1/8" cable) generating a complex sound; microphone on a stand hooked up to an Mbox 2 (XLR cable), feeding a spectrum analyzer. Move the mic around.
Proximity effect: sound entering a microphone has two components: phase difference and amplitude. A directional mic blocks sound from the rear and sides. Sound from the front goes to the front of the diaphragm; sound from the rear goes to the front and, through a labyrinth, to the rear. The difference in arrival time/phase moves the diaphragm. As frequency rises, the phase difference goes up--the fixed arrival time becomes a greater proportion of the waveform. The slope is 6 dB/octave, so electronics are built in to compensate: bring the high end down at 6 dB/octave.
Omnidirectional mics don't block sound from any angle, so the frequency response is flat--no compensation needed!
Amplitude is actual air pressure pushing against the diaphragm. At normal distances, because of the inverse square law, the amplitude difference between the front and back of the diaphragm is negligible. But as the source gets closer, the amplitude difference grows, and at very short distances the amplitude component overwhelms the phase component. The amplitude component does not change with frequency, so the 6 dB/octave compensation attenuates the highs but not the lows. Hence: bass boost.
Microphone techniques: respect the historical use of instruments!
for Vocals: pop filters, monitors
for Piano: stereo image?
for strings: not on top of the bridge--too close loses resonance and high frequencies.
Impedance: an electrical characteristic that has to be managed for efficient energy transfer. In audio, the best arrangement is a low-impedance source feeding a high-impedance input. If impedances are badly mismatched, signal reflects back along the line, causing loss. At high frequencies (above 1 MHz) this also causes frequency anomalies, but not at audio frequencies.
Cables: Balanced vs. Unbalanced:
Balanced = two conductors and a surrounding shield or ground. The two conductors carry the signal in electrical opposition to each other: when one has positive voltage, the other has negative. At the receiving end, one leg is flipped in polarity (also called phase) and the two are added. Any noise introduced along the way affects both conductors identically, so when one leg is flipped and summed, the noise cancels out--flip any signal, add it to itself, and the result is zero--while the wanted signal reinforces. This means very little noise over long lengths of cable. Best for microphones, which have low signal levels, but also for long runs at line level.
Unbalanced = single conductor and shield. Cheaper and easier to wire, but open to noise, as well as signal loss over long lengths, particularly in the high frequencies due to capacitance (of interest to EEs only). Okay for line-level signals over short distances (like hi-fi rigs or electronic instruments), or microphones over very short distances (cheap recorders and PA systems).
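The flip-and-add trick behind balanced wiring is easy to demonstrate numerically. A Python sketch (the sample values are made up):

```python
# Sketch: why a balanced line rejects induced noise.
# The two conductors carry the signal in opposite polarity; hum couples
# into both legs identically. Flip one leg and sum: signal doubles, noise cancels.
signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
noise  = [0.2] * len(signal)                   # induced hum, same on both legs

hot  = [s + n for s, n in zip(signal, noise)]  # +signal + noise
cold = [-s + n for s, n in zip(signal, noise)] # -signal + noise

received = [h - c for h, c in zip(hot, cold)]  # flip the cold leg and add
print(received)   # [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0] -- hum is gone
```

The received signal is the original at twice the level, with the hum completely cancelled; an unbalanced cable has no second leg, so the hum stays in.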
Connectors: Balanced: XLR (as on microphone cable), 1/4” tip-ring-sleeve.
Unbalanced: RCA (“phono”), 1/4” (“phone”), mini (cassette deck or computer).
Mini comes in a stereo version also (tip-ring-sleeve), for computers and Walkman headphones (both channels share a common ground). 1/4” TRS is also used as a stereo cable for headphones = two unbalanced channels with a common ground.
Guitar pickups = two kinds: piezo (mechanical vibration to electric current) and magnetic (string vibration to electric current using a fixed magnetic field; the humbucker is a special type)
DI boxes = transformers to match level and impedance of instrument to that of mic input on console.
Phase: where in the waveform you are at any moment. Hearing absolute phase is difficult, but hearing relative phase between two signals is easy.
Localization in the human hearing system uses amplitude, time (phase), and frequency response. It is especially sensitive to phase: the interaural effect. Phase tells us quickly about the location of a sound as it arrives at our two ears, along with relative amplitude. The head acts as a baffle for high frequencies, so relative amplitude is used more for localization at high frequencies. Low frequencies bend around the head, so phase is more important there. (It's why you need only one channel of subwoofer in a surround system.)
When waves coincide, energy is increased. When waves are in opposition, they cancel each other. If you take two identical complex signals and delay one--which happens in a reflection in a room or in the pinna (the outer ear)--the various harmonics will cancel or reinforce each other based on the delay period. This is comb filtering. Also, tiny reflections within the pinna clue us in on directionality, because the spectrum changes.
This means that the frequency spectrum of what you're hearing changes if you move, even very slightly. So turning your head to localize a sound changes the phase, the timing, and the spectrum. We learn how to use this very well early in life.
NIH study: people who lose their outer ears and are given new ones have a lot of trouble re-learning how to localize sounds.
If you change the delay period, the phase cancellations move, creating the phasing or flanging effect. The sound isn't moving, but it seems like it should be.
The role of the room: standing waves or "room modes" are caused by phase reinforcements due to the reflections in the room, and depend on the dimensions of the room. The more reflective the walls, the greater the problem. There are lots of techniques for minimizing them, including absorbers, diffusors, and "traps". More of a problem at low frequencies, where specific frequencies stand out; at higher frequencies the modes blend together and are not nearly as obvious.
Three types of room modes: axial, tangential (-3 dB, 1/2 energy), oblique (-6 dB, 1/4 energy). Calculate them with this utility: http://www.mcsquared.com/modecalc.htm
Effects of speaker placement
on frequency response: in a corner or against a wall, bass is emphasized.
Some speakers are designed to go in corners--their low-end response is tailored
to compensate.
Transducer = converts
one type of energy to another
Microphone = converts sound waves in air to alternating current (AC) voltages. A dynamic microphone has a diaphragm attached to a coil of wire suspended in a magnetic field. The diaphragm vibrates with the sound waves, inducing a current in the coil which is an analog (stress the term!) of the sound wave. This travels down a wire as an alternating current: positive voltage with compression, negative voltage with rarefaction.
Dynamic/moving coil (pressure-gradient mic)
Condensor/capacitor=charged back plate + diaphragm acts as capacitor, one plate moves, capacitance changes.
Charge comes from battery, or permanently-charged plate (electret), or dedicated power supply (old tube mics), or phantom power: 48v DC provided by mixer (doesn’t get into signal, because input transformer removes it).
Ribbon (velocity mic)= Metal ribbon is suspended between strong magnets, as it vibrates it generates a small current. High sensitivity, good freq response, a little delicate, figure-8 pattern.
Pickup patterns: Omnidirectional, Cardioid, Figure 8 (bi-directional), Hypercardioid, Shotgun.
Characteristics of a sound:
Frequency in Hz: how many vibrations or changes in pressure per second.
Loudness in dB SPL: how much air is displaced by the pressure wave.
Timbre = complexity of waveform, number and strength of harmonics. We can change timbre with filters or equalizers.
Waveforms = simple and complex
Simple waveform is a sine wave, has just the fundamental frequency. Other forms have harmonics, which are integer multiples of the fundamental. Fourier analysis theory says that any complex waveform can be broken down into a series of sine waves.
Saw: each harmonic at level 1/n. Square: only odd harmonics, at 1/n. Triangle: only odd harmonics, at 1/n².
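Those recipes are easy to tabulate. A Python sketch (the function name is mine):

```python
# Sketch: relative harmonic levels for the classic waveforms.
def harmonic_levels(wave, count):
    """Levels of harmonics 1..count for 'saw', 'square', or 'triangle'."""
    levels = {}
    for n in range(1, count + 1):
        if wave == "saw":
            levels[n] = 1 / n              # every harmonic, at 1/n
        elif n % 2 == 1:                   # square and triangle: odd only
            levels[n] = 1 / n if wave == "square" else 1 / n**2
    return levels

print(harmonic_levels("square", 7))   # {1: 1.0, 3: 0.333..., 5: 0.2, 7: 0.142...}
```

Summing sine waves at these levels (Fourier synthesis) approximates each waveform; the triangle's 1/n² rolloff is why it sounds so much mellower than the square.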
If there are lots of non-harmonic components, we hear it as noise.
White noise: equal energy per cycle (arithmetic scale)
Pink noise: equal energy per octave (logarithmic scale--more suited to our ears)
Stereo = since we have two ears. The simplest and best high-fidelity system is walking around with two mics clipped to your ears and then listening over headphones: this is called binaural. Binaural recordings are commercially available: they use a dummy head with microphones in the earholes.
Systems with speakers are an approximation of stereo. The stereo field is the area between the speakers, and the “image” is what appears between the two speakers. If you sit too far from the center, you won’t hear a stereo image.
Multi-channel surround can do more to simulate "real" environments. Quad, 5.1 (.1=LFE since low frequencies are heard less directionally), 7.1, 10.1, etc. Will do a little with it in this course.
Position in the stereo or surround field = L/R, F/B, U/D. Determined by relative amplitude, arrival time, and phase.Perception: dynamic and frequency range of human hearing
Ear converts sound waves to nerve impulses.
Each hair, or cilium, responds to a certain frequency. As we get older, the hairs stiffen and break off, and high-frequency sensitivity goes down. Hairs can also be broken by prolonged or repeated exposure to loud sound. (MIT researchers have found a new mechanism involving two-dimensional sensitivity in the ear that may explain some things we don't yet understand.)
How frequency sensitivity changes at different loudness levels: at low levels, we hear low frequencies poorly, and high frequencies too, although the effect isn't as dramatic.
Fletcher-Munson curves: the ear is more sensitive to midrange frequencies at low levels, less sensitive to lows and extreme highs. In other words, the frequency response of the ear changes depending on the volume or intensity of the sound. When you monitor a recording loud, it sounds different (better?) than when soft.
Using filters/eq to change frequency response: graphic, parametric, high-pass, low-pass, bandpass, notch.
EQ is used to solve problems, and to be creative.
The smallest difference we can hear in the level of a sound--the "Just Noticeable Difference" (JND)--is about 1 dB. This changes with frequency and loudness level: we can often hear much smaller differences under some conditions, and not hear larger ones under others. JND also changes with duration--short sounds (less than a few tenths of a second) seem softer than long sounds of the same intensity.
Haas effect: precedence of the first-arriving signal. Less than 35 ms later, the second sound is blended; 35-50 ms, it is heard as ambience; more than 50 ms, as a distinct sound. The thresholds are lower with transient sounds like drums.
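The Haas thresholds reduce to a simple lookup. A Python sketch (the numbers are the rules of thumb above; the function name is mine):

```python
# Sketch: how a delayed copy of a sound is perceived (Haas effect).
def haas_perception(delay_ms):
    if delay_ms < 35:
        return "blended"        # fused with the first arrival
    if delay_ms <= 50:
        return "ambience"       # heard as room sound
    return "distinct echo"      # heard as a separate event

print(haas_perception(20))   # blended
print(haas_perception(40))   # ambience
print(haas_perception(80))   # distinct echo
```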
• Bandwidth limitations
• Frequency response anomalies=like a filter or eq
• Dynamic range limitations
• Distortion caused by clipping or non-linearity: adds odd harmonics, particularly nasty (show in Reason)=harmonic distortion
• Crossover distortion= certain types of amplifiers, where different power supplies work on the negative and positive parts of the signal (“push-pull”). If they’re not balanced perfectly, you get a glitch when the signal swings from + to - and vice versa.
• Intermodulation distortion=frequencies interacting with each other.
• Noise, hum, extraneous signals
Basic audio principles:
Nature of Sound waves = pressure waves through a medium = compression (more molecules per cubic inch) and rarefaction (fewer molecules per cubic inch) of air. A vibrating object sets the waves in motion, your ear decodes them. Sound also travels through other media, like water and metal. No sound in a vacuum, because there’s nothing to carry it.
Speed of sound in air: about 1100 feet per second. That's why you count seconds after a lightning strike to see how far away the lightning is: 5 seconds = one mile. Conversely, 1 millisecond = about 1 foot.
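Both rules of thumb fall out of the 1100 ft/s figure. A Python sketch (the function names are mine):

```python
# Sketch: distance arithmetic from the speed of sound in air.
SPEED_FT_PER_S = 1100.0

def thunder_miles(seconds):
    """Distance to a lightning strike, from the flash-to-thunder delay."""
    return seconds * SPEED_FT_PER_S / 5280    # 5280 feet per mile

def delay_ms_to_feet(ms):
    """Acoustic delay to distance: about 1 foot per millisecond."""
    return ms / 1000 * SPEED_FT_PER_S

print(round(thunder_miles(5), 2))     # 1.04 -- "5 seconds = one mile"
print(round(delay_ms_to_feet(1), 1))  # 1.1 -- feet per millisecond
```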
Sound travels a little faster in warmer air, about 0.1% per degree F, and in a more solid medium: in water, 4000-5000+ fps, in metal, 9500-16000 fps.
When we turn sound into electricity, the electrical waveform represents the pressure wave in the form of alternating current. The electrical waveform is therefore an analog of the sound wave. Electricity travels at close to the speed of light, much faster than sound, so transmission of audio in electrical form is effectively instantaneous.
Characteristics of a sound:
Frequency = pitch, expressed in cycles per second, or Hertz (Hz).
The mathematical basis of the musical scale: go up an octave = 2x the frequency.
Each half-step is the twelfth root of 2 higher than the one below it = approx. 1.0595.
The limits of human hearing = approximately 20 Hz to 20,000 Hz, or 20 k(ilo)Hz.
Fundamentals vs. harmonics = the fundamental is the predominant pitch; harmonics are multiples (sometimes not exactly even) of the fundamental that give the sound its character, or timbre.
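The half-step math is worth verifying. A Python sketch (the function name is mine):

```python
# Sketch: equal-tempered pitch. Each half-step multiplies frequency
# by 2**(1/12); twelve of them make exactly one octave.
SEMITONE = 2 ** (1 / 12)

def note_frequency(semitones_from_a440, a4=440.0):
    return a4 * SEMITONE ** semitones_from_a440

print(round(SEMITONE, 4))             # 1.0595
print(round(note_frequency(12), 1))   # 880.0 -- one octave up = 2x frequency
print(round(note_frequency(-9), 2))   # 261.63 -- middle C
```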
Period = 1/frequency
Wavelength = velocity of sound in units per second/frequency
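Those two formulas, in Python (a sketch; the speed-of-sound constant is the approximate figure used in these notes):

```python
# Sketch: period and wavelength from the definitions above.
SPEED_OF_SOUND = 1100.0    # ft/s in air, approximate

def period_s(freq_hz):
    return 1 / freq_hz

def wavelength_ft(freq_hz):
    return SPEED_OF_SOUND / freq_hz

print(period_s(1000))          # 0.001 -- a 1 kHz cycle lasts 1 ms
print(wavelength_ft(20))       # 55.0 -- why deep bass needs big rooms
print(wavelength_ft(20_000))   # 0.055 (about 2/3 of an inch)
```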
Loudness (volume, amplitude) = measured in decibels (dB) above the threshold of audibility (look at chart). The decibel is actually a ratio, not an absolute, so when you use it to state an absolute value, you need a reference. "dB SPL" (as in the chart in the course pack) is referenced to the perception threshold of human hearing. That is obviously subjective, so it is set at 0.0002 dyne/cm², or 0.00002 Newtons/m². That is called 0 dB SPL. By contrast, atmospheric pressure is about 100,000 Newtons/m².
dB is often used to denote a change in level. A minimum perceptible change in loudness is about 1 dB. Something we hear as being twice as loud is about 10 dB louder. So we talk about "3 dB higher level on the drums" in a mix, or a "96 dB signal-to-noise ratio" as the difference between the highest volume a system is capable of and the residual noise it generates.
"dBV" is referenced to something, so it is an absolute measurement. "0 dBV" means a signal referenced to a specific electrical voltage in a wire: 1 volt. "0 dBu" is referenced to 0.775 volts, a figure derived from dBm, which specifies a 600-ohm load. (We'll deal with impedance later.) Common signal levels in audio are referenced to these: -10 dBV (consumer gear), +4 dBu (pro gear).
The threshold of pain is about 130 dB SPL, so the total volume or “dynamic” range of human hearing is about 130 dB.
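Since dB SPL is just a ratio against the 0.00002 N/m² reference, the chart values can be reproduced with one formula (a Python sketch; the function name is mine):

```python
import math

# Sketch: dB SPL from a pressure measurement.
P_REF = 0.00002   # N/m^2 -- threshold of hearing, defined as 0 dB SPL

def db_spl(pressure_n_per_m2):
    return 20 * math.log10(pressure_n_per_m2 / P_REF)

print(db_spl(P_REF))                   # 0.0 -- threshold of hearing
print(round(db_spl(0.0002), 1))        # 20.0 -- 10x the pressure = +20 dB
print(round(db_spl(63.2), 1))          # 130.0 -- near the threshold of pain
```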
Waveforms = simple and complex
Simple waveform is a sine wave, has just the fundamental frequency. Other forms have harmonics, which are integer multiples of the fundamental.
Timbre = complexity of waveform, number and strength of harmonics.
©2013 Paul D. Lehrman, all rights reserved