Synthesizers 4

Going Digital

It’s not an accident that most of the synthesizers on the market today are digital. It’s also not because digital technology necessarily sounds superior to analog, Rhea says (though he acknowledges that digital synthesizers can produce a more stable pitch). Instead, it was simple economics: Digital instruments could be manufactured and sold more cheaply. “The average musician playing at a Ramada Inn is not going to buy something that costs $15,000,” Rhea says.

The technology on the circuit board of a digital synthesizer is a significant departure from its analog predecessors. Digital synthesizers use processors and algorithms that have been programmed into the devices to interpret strings of binary numbers, which are then translated into sound waves. Research and experimentation in digital music had been going on since 1957, when Max Mathews of AT&T’s Bell Laboratories wrote Music I, the first computer program to play a piece of music [source: Schofield]. The first wave of commercial digital synthesizers arrived in the 1980s, with the Yamaha DX7, released in 1983, becoming an early bestseller. Subsequent generations of digital synthesizers became indispensable for producers and musicians creating hip-hop, pop, rock and electronic music. Composers regularly use synthesizers to score films, whether it’s an aural sketch intended to be filled out with live instruments later on or a soundscape rendered completely by synthesizer.
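To make the idea of "strings of binary numbers translated into sound waves" concrete, here is a minimal sketch of what a digital oscillator does at bottom: it computes a stream of numeric samples that a digital-to-analog converter would turn into an audible voltage. The function name, sample rate constant and parameters are illustrative, not taken from any particular instrument.

```python
import math

SAMPLE_RATE = 44_100  # samples per second, the CD-audio standard


def digital_oscillator(freq_hz, duration_s):
    """Generate a sine wave as a list of numeric samples.

    A digital synthesizer ultimately reduces every voice to a stream
    of numbers like this one.
    """
    n_samples = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n_samples)]


samples = digital_oscillator(440.0, 0.01)  # 10 ms of concert A
```

At 44,100 samples per second, even this hundredth of a second of tone is 441 numbers; everything a digital synthesizer adds (filters, envelopes, effects) is arithmetic performed on streams like this.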

Over time, digital synthesizers have also branched into different forms — some are add-on devices that link physically to desktop computers, for example, while others are software programs that rely on a computer’s hardware to perform all of their functions. (A so-called virtual analog synthesizer incorporates the knobs, dials and keyboard of an analog synthesizer in its interface while using digital technology to drive all of its operations.) Soon enough, digital synthesizers had other devices to keep them company: MIDI (musical instrument digital interface), a protocol introduced in 1983, links synthesizers with sequencers, samplers, digital audio workstations like Avid’s Pro Tools and Apple’s Logic, drum machines, and all manner of electronic music devices and software.
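MIDI itself is remarkably simple: rather than carrying audio, it carries short messages such as "this note was pressed, this hard." A sketch of building one standard MIDI 1.0 Note On message (the helper function is hypothetical; the byte layout is from the MIDI specification):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI 1.0 "Note On" message.

    Status byte 0x90 marks Note On; its low nibble carries the channel
    (0-15). Note 60 is middle C; velocity (0-127) encodes how hard the
    key was struck.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])


msg = note_on(0, 60, 100)  # middle C on channel 1, medium-loud
```

Because every compliant device reads these same three bytes the same way, a 1983-era keyboard can still drive a modern software synthesizer.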

Synthesizers have helped put the ability to make music into the hands of anyone with the inclination. That means there are fewer barriers to making music than ever before, but it also means that your neighbor with a voice that sounds like broken glass can create songs as easily as a Juilliard-trained vocalist. And that’s the upshot of synthesizers, Rhea says: “the democratization of music, with the concomitant horrors and wonders.”

Synthesizers 3

Synthesizer Components

Even though many synthesizers possess the ebony and ivory keyboard of a piano, the rest of the machine — a chassis lined with knobs, dials and switches — looks more like it belongs in a garage than a concert hall. Nonetheless, the synthesizer contains the same two components as almost any other instrument: a generator and a resonator. Think of a violin, for example: the strings and the bow are the generator, and the body of the violin is the resonator. On a synthesizer, the generator is the oscillator, and the resonator is the filter.

For starters, let’s look at the basic parts of a classic analog synthesizer. (We’ll talk about digital synthesizers later.) Analog synthesizers generate their sounds by manipulating electric voltages. The oscillator shapes the voltage to produce a steady pitch at a given frequency, which determines the basic waveform that will be processed elsewhere in the synthesizer. The oscillator can be controlled by keys like those on a piano keyboard, by a revolving pitch wheel or by another tool on the synthesizer’s interface. The oscillator feeds the signal to the filter, and the musician turns knobs and dials to set parameters for the frequencies of a sound — for instance, eliminating and emphasizing specific frequencies like we talked about earlier. The sound passes from the filter to the amplifier, which controls the volume of the sound. The amplifier generally includes a series of envelope controls, which help determine the nuances in volume level over the lifespan of a note.
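The oscillator-filter-amplifier chain can be sketched in a few lines of code. This is a toy digital model of the analog signal path, not any real instrument's design: a harmonically rich square-wave oscillator, a simple one-pole low-pass filter standing in for the filter module, and a gain stage standing in for the amplifier. All the function names are illustrative.

```python
RATE = 44_100  # samples per second


def oscillator(freq, n):
    # Naive square wave: +1 for the first half of each cycle, -1 for
    # the second. Square waves are rich in harmonics, which gives the
    # filter something to carve away.
    return [1.0 if (i * freq / RATE) % 1.0 < 0.5 else -1.0
            for i in range(n)]


def low_pass(samples, alpha=0.1):
    # One-pole low-pass filter: each output leans only partway toward
    # the new input, smoothing the signal and damping high frequencies.
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out


def amplifier(samples, gain):
    # The amplifier stage simply scales the signal's volume.
    return [gain * s for s in samples]


# The classic subtractive chain: oscillator -> filter -> amplifier.
signal = amplifier(low_pass(oscillator(220.0, 1000)), gain=0.5)
```

In a modular analog synthesizer, each of these three functions would be a physical module, and the nested calls would be patch cables.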

In an analog synthesizer, each of these pitch, tone color and loudness functions is organized into a module, or a unit intended for a specialized purpose. The earliest modules were encased in their own individual housings. Each module creates a particular signal, or processes it in a particular way, and by connecting these modules together, the musician can layer, process and change the sounds into something different.

Now that we know about how synthesizers work, let’s look back at their history.

Early History of Synthesizers

When was the first synthesizer invented? That depends on who you ask.

Many point to the Telharmonium or the theremin — instruments invented in the late 1890s and in 1919, respectively — but Rhea disputes their inclusion: Nothing on these instruments grants the operator complete control over the constituent elements of sound. The first instrument to fit the criteria was a hybrid of piano and electronic technologies invented in France in 1929 by Armand Givelet and Édouard Coupleux, which used a paper tape reader and devices that manipulated elements of electronic circuitry in order to create an orchestra of four voices. The first time the word “synthesizer” was used to describe an instrument came with the 1956 release of the RCA Electronic Music Synthesizer Mark I, which used tuning forks and information punched onto a roll of paper tape to play music through a set of loudspeakers.

Robert Moog is generally considered the father of the modern synthesizer. Moog, an American, was an electrical engineer who dabbled in building electronic instruments like theremins. In the early 1960s, after befriending the musician Herbert Deutsch, Moog set about inventing the first commercially available synthesizer. Released in 1964, Moog’s 900 Series Modular Systems resembled towering mainframe computers with a spiderweb of cables that “patched” the various modules together to create a complete sound. The sounds could be sequenced as well as played in real time.

Initially marketed toward academics and experimental musicians, these synthesizers were polarizing instruments early on. “As a salesperson, I went into music stores where I was practically thrown out, and I was told that [the synthesizer] wouldn’t be a musical instrument,” says Rhea, who spent many years working alongside Moog in many different capacities. But in 1968, the Grammy-winning album “Switched-On Bach” by Wendy Carlos exposed the musical possibilities of synthesizers to a broader audience. In subsequent years, groups like Parliament-Funkadelic, the Mahavishnu Orchestra and Emerson, Lake & Palmer began incorporating synthesizers into their music. The advent of the Minimoog, which consolidated elements of the large devices into a single, portable instrument that was less expensive, put 13,000 synthesizers into the hands of performing musicians during its production life. Even after the advent of digital synthesizers, musicians continue to honor Moog and his creations at the annual Moogfest in Asheville, N.C.

Synthesizers 2

Synthesizing the Elements of Sound

When we say that synthesizers manipulate the fundamental elements of a sound, what do we really mean?

First, here are a few basics. A sound is the result of changes in air pressure as energy travels from a sound’s source to our ears. The human ear can process sounds in a frequency range from 20 to 20,000 hertz, and we perceive every sound to have a different pitch, timbre (or tonal quality) and loudness. Even if two instruments play the same musical note, the measurable characteristics of each sound — like frequency (number of repetitions of the wave in one second), amplitude (volume, or the change in air pressure), wavelength (the distance between cycles of a waveform) and period (the time it takes for a waveform to repeat a full cycle) — can vary dramatically. Sounds also contain harmonics, or layers of frequencies that combine to make a full, complex voice. Finally, there are the changes in volume that take place over the lifespan of a sound. This process, which encompasses the peak volume once the note is struck all the way through its inevitable dissolution, is described as attack, decay, sustain and release (ADSR).
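The ADSR envelope described above is easy to express as a piecewise-linear function of time. This is a generic textbook-style sketch rather than any particular synthesizer's implementation; the parameter values are arbitrary examples.

```python
def adsr(t, attack=0.05, decay=0.1, sustain=0.7, release=0.2,
         note_off=0.5):
    """Volume multiplier (0 to 1) at time t seconds after a note is struck.

    Attack: climb from silence to peak volume.
    Decay: fall from the peak to the sustain level.
    Sustain: hold steady while the key stays down (until `note_off`).
    Release: fade back to silence after the key is let go.
    """
    if t < attack:                        # attack: 0 -> 1
        return t / attack
    if t < attack + decay:                # decay: 1 -> sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                      # sustain: hold
        return sustain
    if t < note_off + release:            # release: sustain -> 0
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0                            # the note has died away
```

Multiplying an oscillator's raw samples by this envelope is what turns an endless electronic drone into something with the shape of a plucked string or a struck drum.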

We mentioned that the word synthesizer derives from “synthesis.” There are many types of synthesis, but let’s talk about the process of subtractive synthesis, one of the most commonly used forms when creating sounds with a synthesizer. In short, a musician begins with a waveform — a sound which contains all of the characteristics mentioned above — and subtracts components until the desired tone is achieved. The musician can adjust the settings of the synthesizer to strip away and silence certain frequencies, or emphasize and heighten others. In the end, subtractive synthesis can change that initial waveform to become a much different sound. Once it exits the synthesizer, the sound can have similar qualities to a trumpet, a snare drum, an atmospheric whoosh or virtually anything else. (However, unless you use a sampler — an electronic instrument that records and processes pieces of acoustically generated sound — no synthesized version of a real-world instrument will ever be an exact copy.)
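One way to see subtraction at work is to build a harmonically rich wave out of stacked harmonics and then keep only the harmonics below a cutoff frequency, the way a low-pass filter would. This is an idealized illustration of the principle, not how an analog filter physically works; the function names and parameters are invented for the example.

```python
import math


def rich_wave(t, freq, n_harmonics=20):
    # Additive approximation of a sawtooth wave: the fundamental plus
    # progressively quieter harmonics (the k-th at 1/k amplitude).
    return sum(math.sin(2 * math.pi * freq * k * t) / k
               for k in range(1, n_harmonics + 1))


def subtractive(t, freq, cutoff_hz, n_harmonics=20):
    # "Subtract" by discarding every harmonic above the cutoff,
    # mimicking an ideal low-pass filter applied to the rich wave.
    return sum(math.sin(2 * math.pi * freq * k * t) / k
               for k in range(1, n_harmonics + 1)
               if freq * k <= cutoff_hz)
```

Sweep `cutoff_hz` downward and the bright, buzzy sawtooth dulls toward a pure tone; that sweep is exactly what turning a filter knob on a subtractive synthesizer does.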

Now that we know about how a synthesizer manipulates sounds, let’s go under the hood and take a look at the components of a synthesizer.

Synthesizers 1


When we think of synthesizers, it’s tough to set aside our preconceptions. We might think of fuzzy, 8-bit video game soundtracks and impossibly catchy pop songs from the 1980s. We might think of a lush, digital orchestra emanating from a keyboard. We envision knobs and dials and cables strewn all about. We might even think of a software program we control with our computer’s keyboard.
Whatever we imagine, in the nearly 50 years since the debut of the first commercial synthesizer, its impact has probably gone deeper than any of us even realize. The sounds generated by synthesizers are part and parcel of the music that bounces around in our eardrums: pop music, hip hop, film scores, even rock and roll. But Dr. Tom Rhea, a professor in the Electronic Production and Design Department at the Berklee College of Music, says that synthesizers have offered a very significant contribution to the way music is played beyond the sounds they create: Form no longer has to follow function. Acoustic instruments like guitars, cymbals, and clarinets have to be built a certain way to get a very particular sound. “With electronic instruments — namely the synthesizer — all that is out the window,” Rhea says.
In other words, a synthesizer can make more than one type of sound. It can create tones that are both familiar and otherworldly — a flute, an ocean swell or a Martian’s ray gun — as well as voices that have never been conceived of. Synthesizers achieve these ends by manipulating and combining the fundamental qualities of a sound to create something new. Contrary to popular misconception, the word “synthesizer” is not meant to imply that the sounds produced by the device are synthetic. Rather, it refers to synthesis, the process of combining the various constituent elements — in this case, the fundamental properties of sound — in a way that forms a new whole.
While we’re at it, let’s set aside another misconception: Synthesizers are not voodoo boxes that spit out music without any input. At the end of the day, a synthesizer is just another instrument that requires someone at the controls to make music.
