Performance Blueprints: has the recording become the blueprint for the performance? Can a recording be in itself a performance?

Andy Arthurs

Queensland University of Technology




Until the start of the Nineteenth Century paper was still a luxury; it was scarce and mass production was not possible. Similarly, one hundred years later, the production of recorded music was limited to those with expensive resources. Although producing a recording became more readily available in the 1950s with access to tape recorders, and later cassette recorders, the means of manufacture remained in the hands of a few. However, by the 1990s, thanks to digital production and later digital distribution, recorded product moved very quickly from an economy of scarcity to one of abundance. The ramifications of this revolution are still being understood.


Today the storage of sound (and accompanying images) is so cheap and easy that the parallels to paper are evident. While we can’t wrap our fish and chips in yesterday’s sound recording, the value of the medium in itself is in many ways equivalent to the value of a paper brochure – often glossy and appealing but in the end disposable and easily obtained again if needed.


Every innovation spawns new uses beyond what we imagine at the time of invention. Edison intended recording to be of primary use as a dictating machine for business; music was lower down his list of suggestions. Similarly, magnetic recording was developed primarily as an aid for defence in the Second World War. Hitler used its ability to shift time and space to have recordings of his speeches played at a different time and place from where he recorded them, partly to fool the Allied forces as to his whereabouts. He would have been somewhat surprised to find, twenty years later, its primary use as a creative tool for pop music.


There is no more misnamed product in the Twenty First Century than the “recording”. Far from being a record of events, it has become in itself a mode of creation. Since the 1960s recording has been a music-writing tool. Mitch Murray was a writer of big hits in the early 1960s such as “How Do You Do It?”. Back in 1964 he advised songwriters in his book How to Write a Hit Song [1]: “a tape recorder is one of the first investments you will have to make if you want to be a serious writer. Use your tape-recorder as often as you like.”


By 1967 the use of recording as a creative tool was signed, sealed and delivered. With the release of Revolver and Sgt. Pepper’s Lonely Hearts Club Band by the Beatles and Pet Sounds by the Beach Boys (incidentally the top three in Rolling Stone’s 500 Greatest Albums of All Time), the creative recording had come of age – a new artform was born.


But just as Beethoven required the services of an orchestra and a rich patron to pay for it all, so the hit acts of the 1960s required the resources of cashed-up record companies and publishers to pay for the expensive process of recording – itself a barrier to entry for most.


In 1971 I started working at AIR London studios. AIR was among a handful of recording studios that were independent of the major record companies. It was started by four producers: George Martin (the Beatles, the Liverpool Sound, Live and Let Die), John Burgess (Adam Faith, Manfred Mann, John Barry), Ron Richards (the early Beatles, the Hollies, the Spencer Davis Group) and Peter Sullivan (Tom Jones, Engelbert Humperdinck). It cost at least £1m back in 1970 to set up, so access for new studios was limited largely by affordability.


Then things started to change. The price of gear began falling, and within thirteen years home recordists could buy the Fostex B16 or, even cheaper, a four-track cassette recorder for demoing ideas.


The genie was out of the bottle, and with the complete digitisation of the recording process came a revolution not only in how we could access the equipment but in the plasticity and malleability of the medium itself. This led to even more creative uses. Before long we could not only chop up sound and reorder it, but instantly turn it backwards, speed up the tempo, change the pitch, filter it or manipulate it digitally in all sorts of ways.
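At bottom, these manipulations are arithmetic on arrays of samples. As a minimal illustrative sketch (using NumPy, with a generated sine tone standing in for a recording): reversing a sound is just flipping the sample order, and a naive speed change is just resampling.

```python
import numpy as np

def reverse(samples):
    """Play the recording backwards: simply flip the sample order."""
    return samples[::-1]

def change_speed(samples, factor):
    """Speed up (factor > 1) or slow down (factor < 1) by naive resampling.
    Note: this shifts pitch along with tempo, exactly as varispeed tape did;
    changing pitch and tempo independently needs more advanced DSP."""
    n = int(len(samples) / factor)
    positions = np.linspace(0, len(samples) - 1, n)
    return np.interp(positions, np.arange(len(samples)), samples)

# One second of a 440 Hz tone at 8 kHz, standing in for a recorded sound.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

backwards = reverse(tone)              # the same second, played in reverse
double_speed = change_speed(tone, 2.0) # half as many samples: twice as fast
```

The sample rate, tone and function names here are invented for the example; the point is only that, once digitised, sound becomes data that a few lines of code can reorder at will.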




Let us now turn to the musical score and its function. The traditional paper score was the first effective mechanism to enable a composer to place his creative work in a form that would inform another musician:


1) What the piece should sound like.

2) How to play the music.


It gave classical music the edge over all other musics, as it could be written down, reproduced and disseminated anywhere. It was also there to be studied, analysed and, if lucky, to become part of the great canon of western music.


The introduction of the recording of music meant that it was possible to make permanent what had previously been live and fleeting. Musics other than classical could now be documented too, and new canons were created. What recordings could not do, however, was show a musician how to play the music by breaking it down into a series of stand-alone events to be learnt. What they could do was enable music to be constructed bit by bit, track by track, in ways similar to the writing of parts on a paper score.


Since the 1950s bands have been able to emulate recorded hits by listening to them and dissecting them. But once music was digitised, the way was opened for music to be depicted in many new ways depending on the context of use. It could be seen as a traditional score, or viewed through a series of different lenses – the various edit pages in Logic beyond the score edit page, for example. Depending on the needs of the reader, the music can be symbolically represented in the most appropriate form. These are new ways of reading music, leading to a broadening of the definition of music literacy. Music can be viewed as a list of documented events, or as a block graph of notes with very accurate lengths and volume levels. If needed, many more parameters can be depicted than in a traditional score. And the music can actually be heard through speakers as well as seen on the screen.
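The “list of documented events” view is, in effect, how sequencers store music internally. A minimal sketch of such an event list (the field names are invented for illustration, loosely following MIDI conventions of pitch number and velocity):

```python
from collections import namedtuple

# A note as a documented event: what sounds, when, for how long, how loud.
Note = namedtuple("Note", "pitch onset_beats length_beats velocity")

phrase = [
    Note(60, 0.0, 1.0, 96),  # middle C, from beat 0, one beat long, fairly loud
    Note(64, 1.0, 0.5, 80),  # E above it, half a beat
    Note(67, 1.5, 1.5, 72),  # G, held longer and quieter
]

def as_event_list(notes):
    """Render the phrase in the 'event list' lens described above."""
    return [f"beat {n.onset_beats}: pitch {n.pitch}, "
            f"len {n.length_beats}, vel {n.velocity}" for n in notes]
```

The same three tuples could equally be drawn as a block graph of notes, or engraved as traditional notation – the representation is chosen to suit the reader, which is precisely the broadening of music literacy described above.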


Digitisation loosened the barriers between “live” and “recorded” even further. Recordings could be manipulated and in turn used as source material for a creative new work. I first used this approach in La Bouche, a mixed-media group that I founded with Philip Chambon in 1983, which used originally recorded samples as the sound source for all the music. Of course it also opened up the possibility of the creative but ethically dubious practice of miming to pre-recorded samples and recordings – exemplified by Milli Vanilli (1990) [2] and, more recently and allegedly, by Britney Spears’s Australian concerts (Nov 2009) [3].


The DJ phenomenon is built on juxtaposing short digital and analogue samples in sequence or concurrently.




Deep Blue is a recently formed orchestra that challenges the assumptions of the traditional orchestra. It is made up of amplified strings, a full palette of electronic sounds, lighting and moving images. [4]


I am the co-producer of Deep Blue. It is small by orchestral standards but large compared to a band, and big on impact. Inevitably, with 16 players, some leave and need to be replaced, so it is essential that the music can be passed on efficiently without the whole orchestra needing to be there all the time to rehearse with the newcomers.


Such a hybrid group needs all the strategies it can get to pass musical information from one member to another. Added to this, members of Deep Blue do not read from music on stage (there are no music stands), and its conductor is an in-ear click track with additional spoken information (much as a newsreader receives information in their ear while reading the news). The repertoire is eclectic – from electronica to rock to improvised pieces to re-envisaged classical music. Listening to Deep Blue is like listening to a very big, colourful iPod shuffle. Over half the material is especially composed for the group – which again separates it from any traditional orchestra. Add to this a staged show and the intricacies are high. The aim is a presentation that, whilst complex underneath, remains overtly casual and personal.


Deep Blue uses the program Ableton Live as the starting point for the creation of its music. This flexible program, which combines sequencing, audio recording and live digital processing in an interactive live context, is the perfect driver of Deep Blue’s music. It allows for change and variation but is also able to synchronise any video images, clicks or audio information.


The method of composition is typically thus:


1)   A piece is composed in Ableton Live, with string parts sketched out in MIDI using string samples.


2)   It is then demoed on ProTools using real strings and an electronic backing.


3)   This recording is then post-produced, and the end result is sculpted into an audio file that has the right shape and feel.


4)   From this the strings are finally scored using Sibelius and the whole thing is properly recorded on ProTools.


5)   This serves as the audio for rehearsal.


6)   Slowly the rehearsal file is replaced by a live file and pre-programmed elements to create a performance file that has some sequencing, some samples, some live and click tracks, video and audio instructions.


7)   The choreography is then undertaken and the lights are plotted. With enough time and money, Deep Blue would like the lighting plots to be synced in time to Ableton too.


8)   The full rehearsals are videoed and this is used for a visual choreographic/staged score.


9)   All this material is then put into the multimedia score on the computer. The main page consists of all these elements running in sync on the screen, but particular elements can be focused on if necessary.
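One way to picture the multimedia score this process produces is as a set of synchronised cue layers keyed to a common timeline. The sketch below is purely illustrative – the layer names, times and cues are invented, and the real score lives inside Ableton Live – but it captures the “main page” idea of every element running in sync, with any one layer available for focus:

```python
# Hypothetical multimedia score: several synchronised layers of cues,
# each cue a (start_seconds, description) pair on one shared timeline.
score = {
    "click":  [(0.0, "120 bpm count-in"), (8.0, "tempo to 132 bpm")],
    "audio":  [(0.0, "string pad sample"), (16.0, "live strings enter")],
    "video":  [(0.0, "opening titles"), (16.0, "stage-wide imagery")],
    "spoken": [(14.0, "cue: players move downstage")],
}

def cues_at(score, t):
    """Return the current cue in every layer at time t (the 'main page'
    view), skipping layers that have not yet started."""
    view = {}
    for layer, cues in score.items():
        started = [cue for cue in cues if cue[0] <= t]
        if started:
            view[layer] = started[-1][1]  # most recent cue in this layer
    return view
```

Focusing on a single element is then just reading one layer (`score["video"]`, say) instead of the combined view – the same data, seen through a different lens, much like the edit pages discussed earlier.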




Deep Blue is part of an Australian Research Council Linkage grant that has helped to research and develop what is in fact a multimedia score.




Now that recordings are ubiquitous in the creation and realising of a musical performance, it is possible to utilise the recording process:


·      As a creative composing tool.


·      As a score for the performers from which to work.


·      As an embedded artefact within a performance.


·      As a record of the live event.


Not all innovation is new from the ground up; innovation is repurposing as well as inventing. New models will keep emerging, and Deep Blue is one such model, utilising a hybrid live/recorded model at all stages of its 360° approach.


In Deep Blue the recording of the event is a form of recycling – an ecosystem where a sound is captured digitally, incorporated into a creative recording which then serves as a sound “score” for a live performance which is in turn the sound source for further digital capturing.


A musical performance is a live event. More accurately it is a real-time event – an act of doing. And every time an act is done it is inevitably done differently. But this difference is not just the performer’s variations but more broadly differences of context – the place, the time, the audience and the mode of listening. These in turn have an effect on the performer.


There are many cries of doom regarding the future of the recorded music industry. And it is true that, as stated above, today many recordings are little more than a brochure for the “original” performance. But what recording can offer is certainly not dead: its ubiquity and fluidity make it available to be utilised in new ways.


The multimedia score could even be the basis of a form of interactive publishing in the future, just as paper scores were in the past. Who knows where that will take the industry, and what it will do for the royalty income streams of those composers and songwriters who adopt such “how to” approaches to publishing. Thomas Edison did not include that on his list of possible uses for recording.




1. Murray, Mitch (1964). How to Write a Hit Song. B. Feldman, London.




3. [1],23739,26306731-5003421,00.html


4. See