Join me for this week’s #musochat: Online Presence

I have the pleasure of hosting this week’s #musochat discussion! I’m looking forward to a lively and informative discussion, so without further ado…

This week’s topic: Online Presence

Description: The Internet has irrevocably changed the landscape of communication and the ability to share ideas (#musochat is a perfect example). The Internet and social media have provided new tools that allow us to connect with people we might never have reached otherwise, furthering opportunities for artistic growth, communication, ideas, development, and our ability to “get ourselves out there.” What role do the internet, social media, and electronic communication in general play in accomplishing your musical ambitions?

Come join the discussion on Twitter this Sunday at 9pm EST/6pm PST. You can find us by searching using the hashtag #musochat (or by simply clicking on this link).

Some ground rules (courtesy of J.M. Gerraughty):

  1. If you’re responding to a specific question (e.g., Q1), indicate it in your answer (e.g., A1).
    • I’ve pre-scheduled all the whole number questions (Q1, Q2, etc.), but if I find an interesting subject to follow up on, I’ll insert decimal questions as necessary (Q1.1, Q1.2, etc.)
    • As host, I won’t be answering questions unless asked directly — I’ll be concentrating on hosting and keeping conversation flowing.
  2. You are not obliged to answer any questions.  Hell, you can just lurk the entire time and not say anything!
  3. If you do want to participate, make sure that you put #musochat into your tweet, so we can all see it.
  4. Keep your language civil. Disagreement is okay, flaming is not!
  5. Q n’ A will last for about an hour, but conversation can go as long as people are willing to talk.

The Measured And Unmeasured

Prosody is a term used by linguists to talk about the function of intonation, tone, stress, and rhythm in speech. It’s something we’re all very familiar with; a particular inflection can make something sound sarcastic, declarative, timid, or sensual. It’s the properties of speech that lie outside the rules of grammar that can influence a phrase’s delivery, interpretation, and meaning. It also has fun quirks like how the careful placement of an accent can alter a sentence’s implications. For example:

I didn’t steal the money…

I DIDN’T steal the money…

I didn’t STEAL the money…

I didn’t steal the MONEY

(etc.)

The intonation of the spoken word is a natural underscore to communication. Writing music alongside (or underneath) something so familiar yet so immeasurable is a strange compositional challenge. It’s been addressed in many guises: monody, recitative, Sprechstimme, as well as peripherally through the use of film scores (underscores), cues, incidental music, etc. Usually these uses of music are meant to support the action of the play or film and aren’t necessarily meant to be the main focus. Yet, even out of sight, one can’t really deny the effect they have on the audience’s perception of the action. A rich, gentle, swelling string section might lend an extra flowery aroma to a romantic scene in Paris, or thumping, rhythmically driven, punctuated brass and percussion might imbue a chase scene with tension. The inverse is true as well; putting “Yakety Sax” over a death scene might be a distracting choice rather than a supportive one (though potentially hilarious).

I must admit that writing for voice is a compositional blind spot for me. I’m always amazed at how some composers can write so effortlessly for what’s arguably the most intimate instrument there is. To remove the scripted music entirely and give the foreground to the immeasurable inflection of speech is somehow even more intimidating. The music has to be at once out of focus yet completely supportive, present but not present, working in the background to give accent and influence to the narration while not drawing attention to itself. How does one stay in the shadows and still pull all the musical heartstrings?

There’s a long lineage of music that sought to incorporate the spoken word. It can be found in the works of Liszt, Sibelius, and Prokofiev, as well as in the works of Cage and Partch (and many, many others). And while I’m hesitant to even mention it, I too have been tackling this compositional challenge and have been looking to these great masters for guidance as I polish my modest contribution to this lineage. What follows is a very general look at the different uses of a speaker and live ensemble, as well as some comments on the approaches undertaken by each composer.

SOME PRECEDENTS

Franz Liszt, virtuoso pianist, harmonic adventurer, ladies’ man, and eventual monk, was nothing if not inventive. His prolific nature allowed for a wide range of experimentation in harmony and form during his 75 years, and while Der blinde Sänger (S.350, 1875) for piano and speaker is not necessarily one of his most popular works, it’s still an interesting case study.

Scored for solo piano and spoken voice, the text (by Alexei Tolstoy) tells the story of a prince traveling around his countryside, soaking in the beauty of nature and the music of the peasants (at least that’s what Google Translate told me; it was difficult to find an English translation).

The music ebbs and flows underneath the text (which is written directly above the music; there’s no indication of a narrator in the score, just the words), always supporting it and not too often the focus. What’s noteworthy here is that simply writing the text above the staff seems to be a common technique. Even the most intricate and detailed music notation can’t capture the natural rhythms of speech. The music is careful to tread along in the background while injecting moments of focus and breath, contrasted by typically Lisztian moments of flair and drama. There are also moments where the piano drops out entirely to give the speaker time to recite large chunks of the text. It’s difficult to tell why Liszt chose to remove the music in these moments, but if I had to guess, I’d say it’s probably for the same reasons a director might not want music during a particular scene.

Jean Sibelius’s The Lonely Ski Trail, like Liszt’s Sänger, is not one of his best-known works. It takes a similar approach to narration as Liszt’s: the spoken text lies right above the measured staff, though the music is much more conscious of the voice in this case. The music reaches a stillness during the spoken text, being careful not to draw attention to itself.

 

Peter and the Wolf is arguably Sergei Prokofiev’s most famous piece. It’s a children’s story written by Prokofiev himself, scored for speaker and orchestra, and was commissioned by Natalya Sats and the Central Children’s Theater in Moscow in 1936. Unlike the previous works, this piece was written specifically with children in mind, so the musical support is intended to be much more obvious. The bird is represented by the flute (see the flute line in the example below), the duck by the oboe, the cat by the clarinet, and so on. By ostensibly defining each role during different parts of the story, the music becomes an active participant. Indeed, the music begins to take on literal meaning.

Aaron Copland took a less direct musical-narrative relationship with Lincoln Portrait. Rather than use extra-musical elements and represent them in the music, Copland opted to emphasize a general sense of mood through an extensive musical exposition before introducing the narrator. Notationally, it’s difficult to say how Copland treated the text since I don’t have a copy of the score myself; however, the New York Philharmonic has an extensive online archive, and I was able to find an interesting snippet:

 

[Score excerpt from the New York Philharmonic archives (source)]

It seems whoever constructed this score opted to cut up the text and simply tape it in. A much less elegant solution, but probably the best illustration of how different spoken word and music notation really are.

SOME EXPERIMENTATION

So far we’ve looked at some pretty straightforward uses of speaker and ensemble. Each of the pieces had a more-or-less similar treatment of the speaker, but what if the music were meant as a kind of counterpoint to the speaker, rather than playing a supporting role?

The Third Angle Ensemble recently brought out New York music critic Alex Ross to read some excerpts from his book The Rest is Noise and underscore his anecdotes with the pieces he was discussing. Among those works were pieces by Henry Cowell (Quartet Euphometric), John Cage (Imaginary Landscape No. 1), La Monte Young (whose Composition 1960 #7 was performed while audience members were getting their seats), Steve Reich (Violin Phase), and Harry Partch (Bitter Music XX).

Ross was quite at home discussing these works and elucidated upon them with lofty themes of nature, spaciousness, and the West Coast. Interestingly, most of these composers are better known for the work they did while living elsewhere, but they nevertheless all had ties to this lovely side of the country, and of particular interest to this essay is Partch’s work, Bitter Music XX: November 15th – Leaving Santa Barbara.

Of all the works featured on Third Angle’s season-closing concert, this was the only one with a designated part for narrator (all others were meant to supplement Ross’ narration). Partch was really interested in bringing the corporeal element back into music; that is, re-introducing the bodily, the worldly, the earthly. He approached this in a number of ways, most famously through his invention of several new instruments that were meant to accommodate his theoretical ideas.

In Bitter Music XX, Partch doubles the narrator with piano in an attempt to capture the natural inflections of the spoken voice. He counters this with a sung hymn (performed by Third Angle’s Ron Blessinger) underneath, as if to counterpoint the bodily with the celestial. It’s a fascinating and evocative effect; it’s as though the two worlds were working in counterpoint, and for me it’s a wonderful use of the speaker and ensemble. The speaker is at once outside and inside the piece; it’s there to provide a story, as well as be a part of it.

ONWARD

This brings us to today (skipping a few decades between Partch and the present), and as I mentioned earlier, writing for voice is a major blind spot for me. While I wish to avoid comparisons, I’ve recently completed my own piece for speaker and ensemble. It’s incredibly intimidating to be standing at the end of a long line of experimentation while dealing with something so natural, so familiar, and yet so difficult to notate. I’ve been using all the pieces I’ve been discussing as a map of the territory, and I’ve found that there’s still a lot of territory to explore. Whether my modest contribution will mean anything remains to be seen, but I am using the lessons I’ve learned from this tradition of experimentation to better understand the spoken word and the music that lies within.

On My Fifth String Quartet

Many composers write their pieces as solutions to problems or questions they pose for themselves, and I’m no exception. Nearly every piece I’ve written has been an attempt at approaching some sort of dilemma I’ve conjured up from who knows where. The problem (or question) could be a kind of formal shape, a rhythmic pattern, a harmonic idea… you name it. My fifth string quartet, Outside the Eyes, written for the Third Angle New Music Ensemble, will encompass a number of ideas, questions, and challenges I’ve put forward for myself, some of which I’ve been carrying around for many years and a few I’ve developed especially for this occasion.

Every composer’s method for starting a piece is different. I sometimes begin by picking a series of numbers, and in this case that’s exactly how I came up with the opening chord. This chord uses four pitches that can be represented by the numbers 1, 2, 3, and 5 (more on this below). This chord, its pitches, and the intervals between them formed the basic elements I used to construct and develop the material.

The Notation

Tempo is one of the fundamental governing forces of music. It gives us the pulse that allows us to perceive,  interpret, and measure musical time. When a tempo is given it’s usually assigned to a specific rhythmic value, such as a quarter note or a half note. Thus, when we notate rhythm we are able to group and subdivide against a designated duration. The eighth note is half the notated duration of the quarter note and is therefore twice as fast, the half note is twice as long as the quarter, and so on.

[Figure: metric levels of rhythmic values (image source)]

Fortunately, we’re not limited to any single notated duration to represent the pulse; we can represent it with any rhythmic value. Indeed, we’re just as free to assign the tempo to the quarter note as we are to the eighth note. Creative license, I suppose. Because of this, the notated metrical structure can lead to apparent contradictions between what we see on the page and what we hear in the air. This can be illustrated by the following example.

Two musicians (A and B respectively) have two different rhythmic parts. Both musicians have 2 bars of 4/4, both with the same tempo assigned to the same rhythmic value. Musician A’s part consists of four quarter notes per bar with an accent on beats 1 and 3. Musician B’s part consists of two half-notes per bar with an accent on beat 1 of each measure.

[Example 1]

If we were to slowly accelerate the tempo to twice the original speed, B’s part would sound exactly the same as A’s did prior to the acceleration.

As you can hear, Musician B’s part is now identical to Musician A’s (prior to the acceleration). While A’s part is now twice as fast as it originally was, the effect, given the placement of accents and the 2:1 ratio of rhythmic values, is that the original part sounds like a subdivided rhythm against a “new” pulse. This leads us to perceive the rhythms as if we were back to where we originally started.

If we wanted to notate both parts to represent this effect, it would look like this:

[Example 2]

Both parts are notated completely differently, yet sound exactly the same. This is a phenomenon I’ve sought to utilize in this piece.
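If it helps to see the arithmetic spelled out, here is a minimal Python sketch of the idea. It is my own illustration, not part of the piece: the starting tempo of quarter = 60 and the attack_times helper are assumptions chosen just for the demonstration.

```python
def attack_times(note_value, num_notes, quarter_bpm):
    """Attack times (in seconds) for a string of evenly spaced notes.

    note_value: duration in quarter notes (1.0 = quarter, 2.0 = half).
    quarter_bpm: the tempo assigned to the quarter note.
    """
    seconds_per_quarter = 60.0 / quarter_bpm
    return [i * note_value * seconds_per_quarter for i in range(num_notes)]

original_bpm = 60  # hypothetical starting tempo: quarter note = 60

# Musician A before the acceleration: one bar of quarter notes
# (accents fall on the 1st and 3rd attacks).
a_before = attack_times(note_value=1.0, num_notes=4, quarter_bpm=original_bpm)

# Musician B after the tempo doubles: two bars of half notes
# (accent on the 1st attack of each bar, i.e. the 1st and 3rd attacks overall).
b_after = attack_times(note_value=2.0, num_notes=4, quarter_bpm=original_bpm * 2)

print(a_before)  # [0.0, 1.0, 2.0, 3.0]
print(b_after)   # [0.0, 1.0, 2.0, 3.0] -- identical attack spacing and accent placement
```

The notated values differ (quarters versus halves), but the sounding attack points coincide, which is exactly the contradiction between page and ear described above.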

Pitch, Meter, and Form

In 1960 Howard Hanson published Harmonic Materials of Modern Music. This text would lay the foundation for a modern musical analytic method known as Pitch Class Set Theory. One of the cornerstones of this method is its use of integer notation. Integer notation replaces each note of the chromatic scale with a number, beginning on ‘C’: ‘C’ is 0, ‘C#’ is 1, ‘D’ is 2, and so on, circling all the way back around to ‘C.’ This gives us the integers 0–11 to represent the twelve pitch classes, regardless of octave.

[Table: pitch classes and their integer equivalents]

One of the perks of this method is its ability to categorize both atonal and tonal frameworks. The intervallic principles of Pitch Class (Pc) Analysis are similar to how we approach learning scales. When we first learn about the major scale, we learn that it’s made up of a series of half steps and whole steps. The ordering of steps for the major scale (ascending) is whole, whole, half, whole, whole, whole, half (2,2,1,2,2,2,1). This is true for all major scales in all keys. In Pc Analysis, the major scale is represented by a single prime form. Prime form, simply put, is the common framework shared by all pitch class sets with the same number of elements (pitches) and the same intervallic content, reduced to its simplest representation. It’s basically another level of abstraction, similar to how we think of all major scales as a pattern of whole and half steps, only represented with numbers.
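As a small illustration of the integer-notation idea (again, my own sketch rather than anything from an analysis text), the snippet below maps note names to pitch-class integers and shows that two different major scales reduce to the same step pattern; the sharps-only spelling in PITCH_CLASSES is a simplification I’ve assumed for brevity.

```python
# Map note names to pitch-class integers (sharp spellings only, for simplicity).
PITCH_CLASSES = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                 'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def step_pattern(scale):
    """Successive intervals (in semitones, mod 12) between adjacent scale degrees."""
    pcs = [PITCH_CLASSES[name] for name in scale]
    return [(b - a) % 12 for a, b in zip(pcs, pcs[1:] + pcs[:1])]

c_major = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
e_major = ['E', 'F#', 'G#', 'A', 'B', 'C#', 'D#']

print(step_pattern(c_major))  # [2, 2, 1, 2, 2, 2, 1]
print(step_pattern(e_major))  # [2, 2, 1, 2, 2, 2, 1] -- same pattern, different key
```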

One aspect I wanted to connect isomorphically with pitch was meter. Meter is characterized by the number of beats it contains. A meter of 3/4, for example, indicates 3 beats per measure assigned to the quarter note. To bring together meter and pitch, I took the numbers from the opening chord ([1,2,3,5]) and treated them as a ratio of beat groupings to build a variety of different meters. This ratio determines, for each performer, the number of beats per measure (or within a grouping of measures) in their part relative to each other part. Before I explain how I came up with the meters, though, I should briefly explain the form.

I’ve put a lot of thought into how I could potentially create relationships within the material. So much so that, as I was writing, I slowly became aware of how much control I was trying to exert over the piece. Some of the techniques I’ve used have their roots in a compositional method called serialism. This method gives the composer an incredible amount of control over how the material is created. In “total” serialism every parameter (pitch, meter, timbre, rhythm, etc.) is determined by a collection (or collections) of elements (usually numerical values) arranged in rows or “aggregates.” These rows are constantly varied throughout the piece; so much so that when listening it’s often difficult to pick out any central idea as the “theme.” Constant variation is the name of the game. I’m not a serialist though, and there are quite a few moments in the piece where the material deviates away from the otherwise rigid factors that would have tried to control it. I wasn’t comfortable feeling like a control freak, so I decided to counter this by giving the performers the freedom to construct the final version of the piece. To do this, I designed a kind of aleatoric, or chance-driven, form to give them this freedom.

Each performer is given a set of four parts to choose from (I, II, III, or IV), and each part utilizes a different meter and notation. So, in effect, the piece can be realized in a number of different ways, and depending on which parts the performers choose, the major points of arrival can take on different meanings. Each part is very different in how it’s notated and how many beats per measure are present. A consequence of this is that there is no single score to represent the piece.

As I mentioned above, each meter is related to the others by a ratio of beat groupings. In part I the meter has 4 beats, in part II the meter has 8 beats, part III has a grouping of three four-beat measures that combine for a total of 12 beats, and in part IV there are two ten-beat measures that combine for a total of 20 beats. These beat groupings form a ratio of 4:8:12:20, which when divided by 4 reduces to 1:2:3:5 (the opening chord). Thus, meter in my piece acts as a kind of structural counterpoint to the material it’s derived from, rather than as an independent organizing force.
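For the curious, that reduction is just a common-divisor calculation. Here is a tiny sketch of it (the variable names are mine, chosen only for illustration), showing the beat totals collapsing back to the opening chord’s numbers.

```python
from functools import reduce
from math import gcd

beats_per_part = [4, 8, 12, 20]          # parts I, II, III, IV
divisor = reduce(gcd, beats_per_part)    # greatest common divisor: 4
ratio = [b // divisor for b in beats_per_part]

print(ratio)  # [1, 2, 3, 5] -- the numbers behind the opening chord
```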

——–

All of this is to say that none of it means a thing when it comes to the actual experience of the sound.

The ideas I’ve been discussing were implemented under the premise that the process of the composer is secondary to the listener’s experience of the work. The title itself refers to the idea that musical sound is completely independent of how it’s represented in notation. If, in the right context, the eighth note can sound exactly as long as the quarter note, then which notation is the “correct” representation of that duration?

Composers are inventing new notations all the time to represent the sounds they hear in their heads. But they are just that – representations, like how a STOP sign is representative of the action of stopping. We can create all sorts of signs and symbols to represent the act of stopping and communally agree upon their use, but the thing that’s common to all is the action itself, not its representation that’s encased in language or signs.

The notations used in my piece are meant to show the limits of representation in our musical language. The processes that create the logic of the language cannot themselves be represented in the aural experience; the aural experience is the product of the process, not the process itself. The musical experience happens over time and in our ears, while the symbolic representation is fixed on the page in front of our eyes.