iPad orchestra: Acoustic versus programmatic performance

What is the difference between an acoustic performance and a programmatic performance?

Without getting too hung up on the words (I have no better vocabulary here), I'm referring to whether a musical performance is actually performed by humans in real time, or whether parts, if not all, of it have been prerecorded. In a 100% acoustic performance you have groups like an orchestra or even a four-piece band: even though the band is plugged in and their sound is electronically varied and mastered, the exact sound is still controlled by the human performers. In a 100% programmatic performance the music is generated by code, played back from a recording of an acoustic performance, and so on. Your CD player performs to you programmatically every time.

In between lies a large spectrum of possible ways to "program" music. For example, recorded music played back during an acoustic performance is in a way "pre-programmed" (as in Biomusic), even though a human being is required to start it. Moving into the realm of Electroacoustic music (you don't say?), loops (e.g. Tape Loops) are commonly used as a base (an ostinato / riff / repetitive rhythm) for human players to perform more interesting music over. Building on this concept, you can see that more than one loop can be involved. As the number of loops goes up, the number of players required to activate (i.e. play) them goes down, until everything is already planned before the performance: think of a disc jockey piecing together recorded music with their signature turntable scratching, or crazy programmers writing frequency generators to create sound on their computer sound cards (well, it's more sophisticated than that these days, but we'll leave it at that…)
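To make the "frequency generator" and loop/ostinato ideas concrete, here is a toy sketch (not anything from the actual project) in plain Python: it synthesizes a one-second sine tone, repeats it four times as a crude ostinato, and writes the result to a WAV file using only the standard library. The frequency, duration, and filename are arbitrary choices for illustration.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_tone(freq_hz, duration_s, amplitude=0.5):
    """Generate one 'loop' of a pure sine tone as 16-bit integer samples."""
    n = int(SAMPLE_RATE * duration_s)
    return [int(amplitude * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(n)]

# One second of A4 (440 Hz), repeated four times -- the loop/ostinato idea.
loop = sine_tone(440.0, 1.0)
ostinato = loop * 4

# Write the repeated loop out as a mono 16-bit WAV file.
with wave.open("ostinato.wav", "wb") as wav:
    wav.setnchannels(1)           # mono
    wav.setsampwidth(2)           # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<%dh" % len(ostinato), *ostinato))

print(len(ostinato))  # 4 * 44100 = 176400 samples
```

A real performance system would trigger such loops live rather than render a file, but the principle is the same: the sound is fully determined before anyone steps on stage.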

I wrote this because someone asked me about one of the works I was commissioned to do: have a group of symphonic band members perform on iOS devices "live" to a concert hall with an audience of 1,600. When I started the project, I thought long and hard about whether I should craft the perfect sound, which entails pre-programming all the sound required (and having the players merely trigger it), or take advantage of the musical knowledge of polytechnic band members who can already do wonders with their orchestral instruments.

Some pointed me to this Korean girl (watch only if you want your jaw to drop):

[kml_flashembed movie="http://www.youtube.com/v/nzh2UygPwDU" width="400" height="300" /]

Behind the jaw-drop-ness is basically the pre-programming of a couple of loops that can be switched in real time while the human focuses on singing (acoustically). Unfortunately (sorry), most performances in this mode are really badly done; or rather, it's the bad ones that get promoted to umpteen million views.

Here's one that's more acoustic, by Northpoint 'iBand': nothing is preprogrammed. They might have had one loop or a metronome to keep everyone together, but that's about it. I leaned towards this model because there's a lot of honest, authentic sound production from the performers.

[kml_flashembed movie="http://www.youtube.com/v/DcexJQM-8W0" width="400" height="300" /]

Going back to a more philosophical level of what it means to "play" an iPad/iPhone as a musical instrument: many have stepped forward to join the process of crafting instruments out of this advanced calculator and multimedia slab. Too often, in the midst of proving their prowess, developers forget that making an instrument means extending the human ability to communicate with one another (or with an audience) in a global language (music). All the knobs and extensible sounds, all the GUI for easy access, and all the $$$ one would charge for the app are secondary to how the listener appreciates the resulting sound plus the visual presentation of the performer as a whole.

Which brings me to Bebot, so far still my favourite app for live performance. It's not the easiest instrument to play, but it puts a lot of power in the performer's hands while appearing amiable and professional at the same time. Its "knobs" for total control are hard to reach mid-performance, but depending on what you're performing you can pretty much configure it exactly the way you need before getting on stage.

Some reviewers on the App Store have commented that the app lacks integration, i.e. MIDI output, audio copy-and-paste, etc. These are all requests from arrangers trying to piece together programmed music. There's a good reason to do so, given that most of our distribution channels for music today only distribute recorded music. However, I really hope that more instruments can be crafted for acoustic playing, with a focus on presentation, sound quality, human interface and other qualities that ancient instrument makers would approve of.

As instruments evolve from sackbut to trombone to iBone, let's hope musicians will continue to find new ways of mastering new instruments to perform new works, extending the musical vocabulary of the human race.

My latest project will be premiered at SP Band’s Musical Delights XXXV – details: arts.cca.sg/2012/01/10/musical-delights-xxxv
