The History of Electronic Music
Throughout history, music has been linked to technology and new inventions. The oldest bone flutes, found at Stone Age sites, demonstrate a high level of both craftsmanship and acoustical knowledge. The beautiful bronze lurs, dating from some 3,000 years ago, likewise represent exquisite artistry in casting and metalwork.
Organs from the Late Middle Ages (14th century) onwards, particularly those of the Baroque period, show that technology and precision mechanics had reached a high level of development, as had creative solutions in organ design. Some researchers claim that the precision mechanics found in organs, music boxes, street organs and watches was a precondition for the industrial revolution in the 1700s.
Music has contributed to new technology as well as taken advantage of technological developments. Creative musicians and composers are always searching for new expressions and media, and music has figured alongside every new electric and electronic trend of the twentieth century.
One new development concerns instruments, in the broad sense of the term: a search for new timbres and tones, something completely new and never heard before, a unique expression. This has often involved electric or electronic components that generate sound. Another trend is the use of microphones and the amplification of sounds from more or less familiar traditional instruments; the electric guitar, electric pianos and electric organs all come to mind.
A curious instrument, the "Telharmonium," was built by Thaddeus Cahill in Massachusetts, USA. It was based on a collection of dynamos, each of which produced its own tone. The musician could vary a tone after it was sounded, and different levels of intensity were possible. Cahill needed many heavy generators: when one version of the instrument was finished and ready to be shipped to an exhibition in New York in 1906, it weighed 200 tons and was as large as a locomotive. He also attempted to send music over the telephone lines, and this proved so popular that the relatively simple telephone system of the time broke down.
Until about 1940 there was also great interest in the movie theatre organ, a technically advanced form of a traditional instrument. As in a harmonica or an accordion, the sound was created by air forced past small metal reeds, causing them to vibrate. In addition, the organ incorporated percussion instruments. These organs could imitate many types of instruments and offered a large range of timbral possibilities. An experienced organist, operating three to four manuals (rows of keys, like a piano keyboard) and foot pedals, and having mastered the art of registration (combining timbres), could quickly create strong and effective variations. It was less expensive for the theatres to hire one organist than an entire orchestra to accompany the silent films. Organists also became popular by giving short concerts before and after the films, and occasionally these concerts were considered more important than the films themselves. Even when "talkies" arrived on the scene, organists continued to perform concerts in the movie theatres and often played music in addition to the movie soundtrack.
Around the time of WWI, many countries had become quite industrialized. Cars had begun to replace horse-drawn carriages, and steam ships and motor ships were gradually replacing the sailing ships. Blimps and airplanes were familiar sights in several countries, and many understood the potential of these machines as the means of transportation for the future.
Several artists had aspirations for a new music for a new world - for the population of the future. They wanted to include elements of everyday sounds - sirens, cars, trains and machines. The Italians Luigi and Antonio Russolo wrote modernistic manifestos and used noise as an untraditional artistic means in their music. The French composer Edgard Varèse also utilized noise in ordinary ensembles as a way of creating new musical expressions. Click here to listen to an excerpt of Varèse's "Ionisation."
A truly significant innovation occurred when the radio tube was invented and developed by Lee de Forest in the USA in 1907. This invention, and the later development of transistors and now microchips, has brought change not only to electronics but to music as well. The radio tube made it possible to amplify signals from radio, TV, gramophone and tape recordings. It also made it possible to build oscillators (combinations of tubes, resistors, capacitors and other electronic components) that served as the basis for the new instruments that were to come - electrophones, as they are called in systematic musicology. There were many attempts at developing new instruments, most of which never got beyond the experimental stage. Some wished to create everyday sounds using electronic tools, while others turned away from noise and sought new, ethereal or strange timbres that had never been heard before. Harmonics that were only theoretically possible also interested pioneers in the field.
There were no new instruments of much importance to music, however, until the "Aetherophone" was invented by the Russian Leon Theremin in St. Petersburg in 1918. The instrument was completed in 1924, then patented and produced in the United States in 1928 by RCA (Radio Corporation of America, an electronics company that became an important radio network and record producer). The instrument had a special oscillator that was affected by the position of the player's hands in relation to certain "antennas." Later versions had keyboards or a fingerboard like that of a cello. The instrument could play glissando, but also had settings that allowed playing in chromatic half-tone increments.
Quite a lot of music has been written especially for this instrument. Click here to listen to an excerpt from "Vocalise" by Sergei Rachmaninoff.
At approximately the same time, the German Jörg Mager made his "Sphärophon," which builds on a principle similar to that of Theremin's instrument. Here, two rotating condensers were used in place of antennas. "Sphärophon II" was the name of an advanced version of the instrument, which had two manuals and pedals. Mager's instrument received a good deal of attention when it was used to play the electronic Parsifal bells at a performance in Bayreuth.
The instrument that drew the most attention from composers at the time, however, was the "Trautonium," developed by the German Friedrich Trautwein in 1930. The instrument at first had only one voice; a multi-voice version followed. The pitch could be regulated continuously, without steps, by means of a kind of keyboard (like a cylinder), with switches and buttons that allowed changes in the sound quality. The instrument was patented and marketed by Telefunken in Germany, and became one of the first mass-produced electronic musical instruments. Although it seems quite simple today, it represented the most advanced electronic technology developed specifically for musical purposes at the time. Click here to listen to an excerpt from "Langsames Stück und Rondo für Trautonium" by Paul Hindemith.
The electronic instrument from the interwar period that has survived the longest is the "Ondes Martenot" (Martenot Waves), named after its French inventor Maurice Martenot, who developed it around 1928. The instrument is built around a main oscillator with auxiliary oscillators surrounding it, and it generates only one tone at a time. It is played using an ordinary keyboard together with other controls, and glissando and other special effects are simple to achieve. The instrument also has a broad dynamic range.
The instrument has remained in use largely because it figures in important works by Olivier Messiaen (the Turangalîla Symphony from 1948), Arthur Honegger and André Jolivet. Click here to listen to an excerpt from "Chant d'Amour" from the "Turangalîla Symphony."
Another successful type of instrument was developed in the USA to imitate the sounds of other instruments: the Hammond Organ (named after its inventor, Laurens Hammond), in production since 1935. It is an electronic instrument, not an actual pipe organ. It has been used in churches and movie theatres, but has been particularly popular as an instrument for the home because of its relatively small size and its many sound possibilities. The instrument later found its way into dance orchestras, big bands, restaurants and even sports arenas such as ice hockey rinks. The large concert models might have three manuals and a full pedalboard like a church organ.
The sound of a Hammond Organ is generated electromagnetically by specially shaped "tone wheels" that rotate in electromagnetic fields. Some of the tone wheels provide a sound's fundamental, while others supply a spectrum of overtones. Some models have only built-in registers (timbres), but most combine predetermined registers with the freedom to vary the sound using a set of levers known as drawbars.
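To illustrate the additive principle behind the tone wheels - each wheel contributing one sine-like partial that the drawbars mix together - here is a minimal sketch in Python (assuming NumPy is available). The harmonic ratios and drawbar values are illustrative only, not Hammond's actual specifications:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def tonewheel_style_tone(fundamental_hz, drawbar_levels, duration=1.0):
    """Sum sine partials, roughly as a Hammond combines tone-wheel outputs.

    drawbar_levels: relative amplitudes (0-8) for the fundamental and a few
    harmonically related partials -- purely illustrative values.
    """
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    # Harmonic ratios loosely modelled on the classic drawbar footages.
    ratios = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0][: len(drawbar_levels)]
    signal = sum(level / 8.0 * np.sin(2 * np.pi * fundamental_hz * r * t)
                 for level, r in zip(drawbar_levels, ratios))
    return signal / max(1, len(drawbar_levels))  # keep the amplitude in range

# A bright "organ" registration: strong fundamental plus a few upper partials.
tone = tonewheel_style_tone(220.0, [8, 8, 6, 4, 0, 0, 0, 0, 2])
```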
The sound is amplified and sent through the speakers which may be located in the instrument case, in the organ bench or mounted separately. The speakers may also have a so-called "Leslie effect": some of them (most often treble speakers) rotate, which creates a special effect.
Recently, the Hammond Organ has been subject to renewed interest within certain genres of rock and jazz music.
A completely new form of music was developed at the end of the 1940s in the radio milieu in Paris. Sound could easily be recorded on gramophone records, both musical sounds and natural, everyday "concrete" sounds. These sounds could be played back quickly or slowly, forwards or backwards, a technique that was used as early as the 1920s and which we recognize from music played by some disc jockeys and in rap music today.
Pierre Schaeffer drew attention with his work "Train Étude," in which a series of recordings of trains in various stages of locomotion - at full speed, starting and braking - is manipulated using a range of techniques to create a rhythmically playful piece. Together with Pierre Henry, Schaeffer also made "Symphony for a Single Person," a moving and expressive work. Click here to listen to an excerpt from "Train Étude."
Shortly before WWII a technique was developed for recording sound by transforming the microphone's impulses into magnetic patterns that could be stored on a long steel wire. These magnetic registrations could be played back through a special playback head and amplified as sound through speakers.
During the war a new storage medium was developed: a long, narrow and extremely thin plastic strip coated with a fine-grained layer of metal particles that could be magnetized and de-magnetized. This was later developed into our reel-to-reel, cassette and DAT tapes.
This new medium made it easier to produce "musique concrète." One could cut the tape into pieces, copy recorded sounds, regroup the parts, and play them at different speeds, forwards or backwards. In addition, sounds could be treated with filters, reverb units and various modulators. In this way one could change a sound's character and, for example, "conceal" a familiar instrument.
If we play a piano tone, we hear the characteristic strike of the hammer and a tone that gradually dies away. If we play the same music backwards, each tone swells gradually and ends abruptly, sounding indistinct, almost like an accordion. If the beginning (attack) of a tone is cut off, it is often difficult to hear which instrument is playing.
Such effects were constantly used within "musique concrète." The composers made different "sound objects," which they assembled in larger wholes. They called their works "sound dramas" rather than "compositions."
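As a rough illustration of what these tape manipulations amount to in terms of the recorded signal, here is a small Python sketch (assuming NumPy). The "piano-like" tone is a synthetic stand-in with a sharp attack and exponential decay, not an actual recording:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of time values

# A rough stand-in for a piano-like tone: sharp attack, exponential decay.
tone = np.sin(2 * np.pi * 440 * t) * np.exp(-4 * t)

reversed_tone = tone[::-1]          # "play the tape backwards": slow swell, abrupt end
no_attack = tone[int(0.1 * SR):]    # cut off the attack: the source is harder to identify
```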
In 1955-56, Stockhausen wrote "Gesang der Jünglinge," which remains one of the classics of this pioneering period. The composition is based on the Biblical tale of three young men who were thrown into a burning furnace in Babylon without a hair on their heads being singed. Stockhausen's work consisted of five tapes (later remixed to four) that were made at the Cologne studio. He combined the voice of a boy soprano, who sings the hymn of praise from the furnace, with electronic effects. Many of the electronic sounds were generated from this soprano voice, which was split into its component sounds and served as the basis for further treatment or transformation into electronic sound: the consonants P, K and T become sharp pulses, the vowels A, O, U and Æ approach sine tones, S, SCH, CH and F become bands of noise, and V is a voiced tone before the explosion. Click here to listen to an excerpt from "Gesang der Jünglinge."
Excerpt from score for "Kontakte"
Technicians really became part of the music scene with the advent of "musique concrète." Shortly afterwards a new group entered the scene: researchers in acoustics, electronics and information technology who developed technological tools for the composers to use. In the beginning of the 1950s, composers chose electronic media because they wanted to make something different from the traditional music played in concert halls. They wanted to express something not found in ordinary music, and to control the sounds as well as the entire performance. Nor did they want to depend on musicians, who would always be subject to their own technical limitations and who would bring their own interpretations of the music to the performance. The composers wanted to experiment and play with new and unheard timbres.
New technology brought with it large and complicated studios where engineers assisted the composers. These studios had a large collection of various types of electronic equipment that together formed a kind of "gigantic instrument," which produced a tape that could be played in any place where acoustics and equipment were adequate.
These "instruments" worked with a form of sound synthesis, sound produced from the "bottom up" so to speak, in contrast to musique concrète's recording of different sounds. Here one could make sounds with any fundamental frequency and harmonics as one wished. One could make different kinds of noise, such as white noise (rushing sounds like the sound of a radio on FM that is not tuned in). Using filters, one could alter the noise, creating sound surfaces, static and moving sounds.
Some of the sounds and tones were relatively simple to produce, while others demanded many hours of work for merely a few seconds of sound.
One of the most important studios in the early 1950s was in Cologne, Germany, and it was here that the composer Karlheinz Stockhausen worked. He was a traditionally trained composer who wrote modern instrumental music, but he welcomed the chance to compose electronic music and to experiment with diverse sounds, movements and forms.
In the beginning he composed purely electronic works, but later he combined electronic sounds on tape and simple technical effects (such as ring modulation) with traditional instruments and voices.
In 1957, the composer Gunnar Sønstevold made the first electroacoustic music in Norway using simple technology. A mix of "musique concrète" and electronic music, it was made at the Norwegian Broadcasting Corporation for use in a production by the Radio Theatre. In addition to tape, Sønstevold used a singer and a violinist, and the music was presented the following year as a work in its own right: "Intermezzo."
Although Sønstevold continued to use electronics in his music, Arne Nordheim and Kåre Kolberg are generally regarded as Norway's pioneers of electronic music in the late 1960s. Nordheim travelled to a studio in Warsaw in order to realize his ideas. His purely electronic work "Solitaire" concentrates on the tone itself, its shimmering and sharp qualities. (The word solitaire means a single diamond mounted in a simple setting.) In other works, Nordheim mixes electronic timbres with the "live" performance of instrumentalists and vocalists.
Click here to listen to an excerpt from "The Emperor's New Tie" by Kåre Kolberg.
Click here to listen to an excerpt from "Solitaire" by Arne Nordheim.
Click here to listen to an excerpt from "The Storm" by Arne Nordheim.
The growth of microelectronics - the development of transistors and, later, microchips in the 1960s and throughout the 1970s - created new possibilities for music. Music machines appeared on the scene: relatively small instruments that could generate unique sounds. The most familiar of these are perhaps the "Moog" (named after its inventor, Robert Moog) and the Synclavier. These had keyboards like an organ or a piano, and were among the first commercial "synthesizers."
Perhaps the most famous of these commercial synthesizers was Yamaha's "DX7." It was based on a special form of sound synthesis, FM (frequency modulation) synthesis, developed by John Chowning at Stanford University in California. The synth had a number of presets, all named after familiar instruments or sounds, but with a little effort one could also program one's own sounds. The instrument could even be programmed so that the keys played not only the tempered scale but also various microtonal tunings.
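The principle of FM synthesis can be shown in a few lines: a modulator oscillator varies the phase of a carrier oscillator, and the modulation index controls how rich the resulting spectrum becomes. The sketch below (Python with NumPy) is a simple two-oscillator illustration; the DX7 itself used six "operators" with programmable envelopes, which is not modelled here:

```python
import numpy as np

SR = 44100

def fm_tone(carrier_hz, ratio, index, duration=1.0):
    """Basic two-oscillator FM in the spirit of Chowning's technique:
    a modulator oscillator varies the phase of a carrier oscillator.

    ratio : modulator frequency divided by carrier frequency
    index : modulation index -- higher values give a brighter, more complex spectrum
    """
    t = np.arange(int(SR * duration)) / SR
    modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
    return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

bell_like = fm_tone(440.0, ratio=1.4, index=6.0)   # inharmonic ratio -> bell-like timbre
brass_like = fm_tone(220.0, ratio=1.0, index=3.0)  # harmonic ratio -> brassier tone
```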
The DX7 and other instruments like it created a new situation for electroacoustic music within "classical" modern music as well as jazz, rock and pop.
It was not long before the "sampler" arrived on the scene. It resembles the synthesizer in appearance, but its sounds are recorded through a microphone, then stored and modified for playback. This technique makes it possible to "sample" the breaking of a glass, for example, and then play the sound "chromatically" from the keys.
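At its core, playing a sample "chromatically" means reading the stored recording back at different speeds, where each semitone corresponds to a rate of 2^(1/12). Below is a small illustrative sketch in Python with NumPy, using synthetic decaying noise as a stand-in for a recorded glass crash:

```python
import numpy as np

SR = 44100

def play_at_semitones(sample, semitones):
    """Resample a recorded sound so it plays back shifted by a number of semitones.

    Faster playback raises the pitch (and shortens the sound), much as early
    samplers did when a key above the original pitch was pressed.
    """
    rate = 2.0 ** (semitones / 12.0)                 # equal-tempered pitch ratio
    indices = np.arange(0, len(sample), rate)        # read the sample faster or slower
    return np.interp(indices, np.arange(len(sample)), sample)

# A stand-in for a recorded "breaking glass" sample: one second of decaying noise.
glass = np.random.default_rng(1).uniform(-1, 1, SR) * np.exp(-6 * np.arange(SR) / SR)
up_a_fifth = play_at_semitones(glass, 7)     # seven semitones higher, and shorter
down_an_octave = play_at_semitones(glass, -12)
```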
Synthesizers and samplers were like small computers, and they became even more popular when it was discovered that several could be used together or controlled by a single computer. In order to exchange information between them, a communication system was developed that became a standard the world over: MIDI (Musical Instrument Digital Interface).
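MIDI itself is a simple byte protocol: a key pressed on one instrument is sent as a three-byte "Note On" message (a status byte, a key number and a velocity). The sketch below builds such messages by hand, just to show what travels over the cable; actually sending them to an instrument would require a MIDI interface or library, which is not shown here:

```python
def note_on(channel, note, velocity):
    """Build a raw three-byte MIDI Note On message.

    channel: 0-15, note: 0-127 (60 = middle C), velocity: 0-127 (key strike strength)
    """
    status = 0x90 | (channel & 0x0F)     # 0x9n = Note On for channel n
    return bytes([status, note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note Off (status 0x8n); the release velocity is commonly sent as 64."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 64])

msg = note_on(0, 60, 100)        # press middle C on channel 1
print(msg.hex(" "))              # "90 3c 64" -- the bytes sent down the MIDI cable
```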
Samplers became increasingly simple to use, and many programs were written to control the instruments, to edit and store sounds, for notation purposes and for sound recording. It did not take very long to discover how to use the different kinds of "personal" computers on the market as real instruments, combining functions such as recording, editing, synthesis and playback of both one's own material and other recordings. CD production was also possible. This led to a kind of "democratization" of music, as tools and production equipment became accessible on a broader scale.
Technicians and composers with an interest in technology are regarded as having led the way in this revolution, which at first resulted in music recognized more for its technological prowess than for its aesthetic and artistic qualities.
But "home studios" can not replace the larger studios where experimentation and research takes place. The best equipment is still too expensive for most composers, and thus large studios are run by institutions and organizations. Some of the most well-known are IRCAM in Paris, Banff Art Center in Canada, CCRMA at Stanford University in California, MIT Media Lab at Massachusetts Institute of Technology in Boston, and the studios at the University of California in San Diego, Columbia University and Princeton University, all in the United States.
NOTAM is the main studio for electroacoustic music in Norway: a physical center with advanced equipment as well as a network of composers, academics and engineers. This collaboration is meant to ensure that we follow developments abroad, as well as to create new tools and software and to compose and perform new electroacoustic music. NOTAM is involved in a number of new cross-disciplinary art forms involving music, image, video, dance, performance, drama and the Internet.
Composers still have many unheard stories to tell, and new tools will continue to be developed in order to make this possible.