Digital Audio Workstations, or DAWs: Part 4

This is the fourth in a series of blog posts outlining some of what we know about optimal music recording.  Reading through these posts, one should be struck by how much we already know when it comes to recording.  In some cases, what we learned in our first-year audiology class provided “obvious” recording cues (such as ensuring a wide bandwidth), while in other cases the information and concepts were learned but applied to fields other than music recording.  Today’s post is a great example of this.

The fourth question concerns the mechanism used to control, edit, and play back whatever it is that we want to record or modify.  These systems are called digital audio workstations, or DAWs.  Previous blog entries focused on where to place the microphone, the various types of microphones, and even the sexy topic of microphone/cable connectors.  This entry concerns the device (and some of its colorful history) that can record, modify, and output speech or music.  Since my past was in the realm of phonetics and speech sciences, I was more interested in techniques for making the perfect recording of speech sounds, and even today I can’t really leave that field.  One of my car license plates is “PHONETIC” (and the other is “PHONEMIC”).  I don’t really know which is more nerdy: having these as license plates, or being so surprised that nobody had taken them before me!

A digital audio workstation (DAW) is an electronic system designed for recording, editing, and playing back digital audio. DAWs have actually been in existence since I was in grad school (for the first time, doing a master’s in linguistics) in the late 1970s.  I recall using a PDP computer system with a very rudimentary tape drive.  We had to boot it up with a series of binary toggle switches, and the manual boot-up cycle took about 30 minutes.  If I booted it up correctly, I could record a speech sample and measure some of its characteristics, such as pitch changes, FFT spectral response, and some temporal characteristics.  A thesis that took over a year to write could be done today in 20 minutes with a modern DAW.

DAWs work in conjunction with other hardware such as microphones and external sound card/input systems.  They can take live sound input from a microphone, stored files from a computer, or even MIDI input from a keyboard.  The output can be a digital file (e.g., .wav or .mp3) or sound played through loudspeakers.
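To make the file side of that signal path concrete, here is a minimal Python sketch using only the standard-library wave module. The filename and the test tone are purely illustrative; it writes a one-second tone to a .wav file and then reads back the basic format information a DAW would display:

```python
import wave
import math
import struct

# Write a one-second 440 Hz sine tone to a mono, 16-bit .wav file.
# (The filename "tone.wav" is just an illustration.)
rate = 44100
with wave.open("tone.wav", "wb") as out:
    out.setnchannels(1)     # mono
    out.setsampwidth(2)     # 16-bit samples
    out.setframerate(rate)  # 44.1 kHz sampling rate
    frames = bytearray()
    for n in range(rate):
        sample = int(32767 * 0.5 * math.sin(2 * math.pi * 440 * n / rate))
        frames += struct.pack("<h", sample)  # little-endian 16-bit
    out.writeframes(frames)

# Read it back and report the format, much as a DAW's file info panel would.
with wave.open("tone.wav", "rb") as inp:
    print(inp.getnchannels(), inp.getframerate(), inp.getnframes())
```

A real DAW does exactly this kind of bookkeeping (channel count, sample width, sampling rate) every time it imports or exports audio, just at much larger scale and with many more formats.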

In the “olden days,” DAWs were actually physical workstations with a large console and slider switches that a sound engineer would manipulate.  If any visual data was supplied, it was usually in the form of a graphic equalizer.  As computers came into their own, with sufficient computing power, speed, and storage, they gradually replaced the large console-like devices.  The computer screens became replicas of the older external devices, and today one might see a screen where movable old-style sliders are drawn with great graphics.  In many cases, these on-screen sliders can be controlled by other subroutines and programs running in the background. Starting to sound familiar?  Read on…

A typical screen may show the time waveform, an FFT or LPC spectral analysis, equalizers that can change the frequency response, compressors and expanders that can alter the output depending on its intensity and frequency, and even settings for attack time and release time.
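For readers curious what “attack time” and “release time” actually do, here is a rough Python sketch of a simple feed-forward peak compressor. The threshold, ratio, and time constants are illustrative values I have chosen, not settings taken from any particular DAW or hearing aid:

```python
import math

def compress(samples, rate, threshold=0.5, ratio=4.0,
             attack_ms=5.0, release_ms=50.0):
    """Very simplified peak compressor: levels above `threshold`
    are reduced by `ratio`; gain changes are smoothed by separate
    attack and release time constants."""
    atk = math.exp(-1.0 / (rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Envelope follower: rises at the attack rate, falls at the release rate.
        coeff = atk if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        # Static gain curve: above the threshold, excess level is divided by the ratio.
        if env > threshold:
            target = threshold + (env - threshold) / ratio
            gain = target / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

Quiet samples pass through unchanged; once the smoothed level crosses the threshold, gain reduction ramps in at the attack rate and ramps back out at the release rate, which is exactly the behavior the compression controls on a DAW (or a hearing aid) expose.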

Wait a minute!  I have just described a typical NOAH screen with the ability to control hearing aid software.

Yes, indeed, audiologists are users of DAWs.  One way I look at a hearing aid fitting is that I am a producer and sound engineer for a hard-of-hearing person, whereas a typical sound engineer or music composer is a producer for normal-hearing people.  The research that underlies how a sound engineer adjusts the settings of their DAW is similar to the audiological research found in our field.

I suspect that we can learn significantly from the work of our recording engineering colleagues (and conversely, they can learn much from us as well).

About Marshall Chasin

Marshall Chasin, AuD, is a clinical and research audiologist who has a special interest in the prevention of hearing loss for musicians, as well as the treatment of those who have hearing loss. He has other special interests, such as clarinet and karate, but those may come out in the blog over time.