
Karlheinz Essl

Improvisation with Computers

Answers to a Questionnaire by Thomas Peter

1 Jan 2006
With additions from 2011


Improvisation by Karlheinz Essl on his realtime composition environment m@ze°2,
performed live at his portrait concert at the Project Arts Centre in Dublin on 21 Oct 2011.
In this performance, Essl uses a variety of MIDI controllers and a video camera for gestural control.



Software

What software do you use and why did you choose it?
For musical improvisation with computers, I solely employ my meta-instrument m@ze°2, which I have programmed in Max/MSP. As I consider improvisation a very personal endeavour, it is absolutely necessary for me to construct and develop my own instrument, perfectly tailored to my idiosyncratic musical visions.



User interface of Karlheinz Essl's computer instrument m@ze°2
version 2010


As a programming language for constructing my instrument I have chosen Max/MSP, because it allows me:
  1. to build up a modular composition/improvisation environment in a bottom-up style
  2. to be independent of software written by others
  3. to express and realize my personal ideas
  4. to apply changes and modifications in an easy way


Do you always use the same software setup or do you adapt it to the performing situation?
I constantly extend and improve my instrument according to the requirements of the given improvisational context, the musicians involved and - not to forget - the space and its room acoustics. This means that I am constantly modifying and tweaking my software.


Hardware

What hardware do you use to control the software and why did you choose it?
I use a set of three MIDI controllers - each one fulfills a specific task:
  1. one fader box (with 16 sliders) for mixing the audio streams of the various structure generators that I have implemented in my instrument
  2. a MIDI controller with 16 rotary knobs to control different effect processors, spatialisation algorithms, reverb units and compressors
  3. another fader box (with 16 sliders) solely devoted to controlling the harmonic and pitch structures of the above-mentioned "structure generators"
In fact, I do not use a MIDI keyboard, because I try to avoid playing patterns that I have in my fingers. Instead, I play on the keys of my computer keyboard, where I can evoke specific sound structures or change parameter settings. And I also utilize the computer mouse for various tasks - for instance as a two-dimensional gesture controller.
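
To make this division of labour concrete, here is a minimal sketch - written in Python with the mido library rather than in Max/MSP, and with purely hypothetical CC numbers and parameter banks - of how three banks of controllers could be dispatched to mixing, effects and pitch duties:

    import mido

    # Three hypothetical parameter banks, one per physical controller.
    layer_gains   = [0.0] * 16   # fader box 1: mixer for the structure generators
    effect_params = [0.0] * 16   # rotary knobs: reverb, spatialisation, compression ...
    pitch_params  = [0.0] * 16   # fader box 2: harmonic / pitch structures

    def handle_cc(msg):
        """Dispatch one control-change message to the bank it belongs to."""
        value = msg.value / 127.0             # normalise to 0.0 .. 1.0
        if msg.control < 16:
            layer_gains[msg.control] = value
        elif msg.control < 32:
            effect_params[msg.control - 16] = value
        elif msg.control < 48:
            pitch_params[msg.control - 32] = value

    with mido.open_input() as port:           # default MIDI input port
        for msg in port:
            if msg.type == 'control_change':
                handle_cc(msg)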

Addition 2011: Besides those MIDI controllers, I am nowadays also using the built-in web camera of my laptop for gestural control of granular synthesis. Furthermore, a foot pedal is engaged for controlling certain sound effects such as filters.
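
As a rough illustration of this kind of camera control - Essl's actual implementation is a Max/MSP patch inside m@ze°2; the Python/OpenCV code and the mapping onto grain density below are assumptions made only for the sake of the example - the motion between successive webcam frames can be reduced to a single value that drives a granular-synthesis parameter:

    import cv2

    cap = cv2.VideoCapture(0)                 # built-in laptop camera
    prev = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # mean absolute difference between frames ~ amount of hand movement
            motion = cv2.absdiff(gray, prev).mean() / 255.0
            grain_density = motion * 100.0    # hypothetical mapping: 0-100 grains/sec
            print(f"grain density: {grain_density:.1f}")
        prev = gray

    cap.release()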


Karlheinz Essl playing the m@ze°2 with gestures


What purpose - in your opinion - should the hardware serve?

The MIDI controllers are utilized to control the various parameters of the compositional algorithms which generate the sound stream in realtime. As this stream is comprised of several nested and superimposed layers, the control is always "polyphonic". This, however, requires a specific playing technique which enables one to move several sliders and knobs independently at the same time - always controlled by the ear.
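
One way to picture this layered, "polyphonic" control - with trivial stand-in generators that are not Essl's actual algorithms - is that each fader scales only its own layer before all layers are summed into a single stream:

    import math, random

    def slow_wave(t):                   # stand-in for one structure generator
        return math.sin(2 * math.pi * 0.25 * t)

    def noise(t):                       # stand-in for a granular texture
        return random.uniform(-1.0, 1.0)

    def pulse(t):                       # stand-in for a rhythmic pattern
        return 1.0 if int(t * 4) % 2 == 0 else -1.0

    layers = [slow_wave, noise, pulse]
    gains  = [0.8, 0.3, 0.5]            # one fader per superimposed layer

    def mix(t):
        """Sum of all layers, each weighted by its own fader."""
        return sum(g * layer(t) for g, layer in zip(gains, layers))

    print(mix(1.25))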

   


Is control of the sound by gesture important to you?

Although I do not use real gestural controllers, the gestural aspect is very important to me. It makes a big difference whether your body is involved or not. This is the reason why I prefer to stand while playing, as it provides a much better physical tension that makes me more reactive and alert.

Addition 2011: Since 2009, I have also included gestural control via the web cam of my laptop for controlling granular synthesis (see above).



Computer as a musical instrument

Why did you choose the computer as an instrument for improvisation?
After playing piano, electric guitar and double bass since my childhood, I arrived at the point where I became more and more interested in composition, and I abandoned my instrumental activities. After spending many years as a composer in splendid isolation, I came to the point where I longed to perform as a musician again. The idea of returning to the double bass, which I had played for many years, seemed absurd, as this instrument appeared too restricted to me. I felt the need to invent a highly personal instrument which would allow me to express myself - not only as an instrumentalist, but also as a composer. The vision I had in mind (blurry at first) was a hybrid between a composition environment and a musical instrument, in which the historical distinctions between composition, performance and instrument coincide.

This was the time when I came to IRCAM in 1992 to write Entsagung - a commissioned piece for chamber ensemble and electronics. There I came in contact with Max, which provided an environment to generate music in realtime by the use of algorithms. Having worked in this field several years before, using computer-aided composition for score synthesis, I realized the power of a realtime system which immediately supplies one with the sonic results, which - in turn - can be manipulated on the fly. In this way, the software turned into an instrument which can actually be played.


Karlheinz Essl improvising on his computer instrument m@ze°
Campus Musick Klagenfurt, 15 Jan 2009


In your opinion, what are the biggest problems with the use of the computer as an instrument for improvisation?

The lack of immediacy and the non-intuitive approach. It takes a lot of energy to invent methods to overcome these obstacles... I still envy a singer for the directness of his voice, which is not mediated by external means, but on the other hand I also pity him for being so dependent on his momentary disposition. Maybe that's the reason why I recently started to include my own voice (using a head-mounted microphone) in my m@ze°2-based improvisation environment.


OUT OF THE BLUE
free improvisation duo with Agnes Heginger (vocals) and Karlheinz Essl (live-electronics)



Performing practice

What kind of sound material do you use?
I exclusively use "concrete" sound material which I obtain from various sources: studio recordings of instrumentalists, field recordings, excerpts from my own compositions, objets trouvés, sound snippets found in anonymous sources etc.


Do you use the computer as an independent sound generator or as a processor of external sound sources?

In the context of improvisation, I reject using a computer for processing external sound sources, which I consider an act of aggression against the instrumentalist. Therefore I solely engage the computer as an independent sound generator.


What experiences have you had performing together with other instrumentalists (acoustic and electronic)?

Meeting for the first time during an improv act (what I like to call a "Blind Gig") can result in magic moments. Tension and alertness paired with an open sense might yield fantastic results - regardless of whether instrumentalists or electronic musicians are involved. However, this needs preparation: instead of rehearsals before the concert (apart from doing an ample sound check), I prepare myself by getting to know the person: by studying his/her music, listening to concerts and/or recordings, but also by going for a walk together, discussing and dining.


Blind Gig with Heinrich v. Kalnein and Karlheinz Essl
Stockwerk Graz, 16 June 2011


Is there a difference in the interaction with electronic instruments and acoustic instruments?

It often appears that players of acoustic instruments are more reactive. In general, the reaction rate of electronic musicians seems much slower. However, there are exceptions: think of FURT (Richard Barrett and Paul Obermayer).


How far-reaching should the control of the software and parameters during performance be?

It would block the musical flow if one had to think too much about the underlying structural parameters. On the other hand, however, it is absolutely necessary to consider them constantly. How can we escape from this dilemma? I found it helpful to create little "agents" - autonomous software modules which utilize compositional strategies to create permanent variations of the generated sound, which thus always appears vivid and fluid. At the top level, the activity of such an "agent" can be controlled by just one parameter which determines its degree of autonomy.
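
The following toy sketch illustrates the principle (all names and value ranges are invented for illustration; the real agents are Max/MSP abstractions inside m@ze°2): the agent keeps varying a value on its own, while a single "autonomy" parameter determines how far it may stray from the performer's setting:

    import random

    class Agent:
        """A module that permanently varies a parameter around a base value."""

        def __init__(self, base_value, autonomy=0.5):
            self.base = base_value      # value chosen by the performer
            self.autonomy = autonomy    # 0.0 = obedient, 1.0 = fully self-willed
            self.current = base_value

        def step(self):
            """Next value: a random deviation whose range grows with autonomy."""
            deviation = random.uniform(-1.0, 1.0) * self.autonomy
            self.current = self.base + deviation
            return self.current

    # One top-level knob sets 'autonomy'; the agent keeps the texture in flux.
    agent = Agent(base_value=0.5, autonomy=0.2)
    values = [agent.step() for _ in range(8)]
    print(values)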


What do you think about the relationship between programming and music making - which part is more important or takes more time?

The relationship between programming (which means improving the instrument and its compositional algorithms) and the actual playing is constantly shifting: there are periods where I stick to the implementation of new ideas, and phases where I just play my instrument. Both aspects are mutually dependent and must not be played off against each other.


What are the strengths of your instrument, what else should it – in your opinion – be able to do?

The strength of m@ze°2 lies in the conjunction of generative composition algorithms with a comprehensible user interface and an elaborate control facility. However, the best instrument can produce horrible sounds if it is not played properly. It needs permanent practice to train the ear and the fingers. The best results are achieved when it can be played intuitively - and when one is no longer required to stare at the computer screen.


Definition

How would you personally define "improvisation"?
As a composer interested in improvisation, let me try to distinguish between both aspects. The significant difference between composition and improvisation is their different way of dealing with time. In composition you are acting outside of time, abstracting from it. You simulate a time structure that - converted into reality - lets the actual piece come into being. But improvisation is completely different: you are inside time, in the moment, shaping time and its passing in "real time"; and now, within this mercilessly passing time frame, you have to follow a certain path which you may have thought about before, or which turns out to be negotiable during the improvisation, and at the same time remain continuously conscious of references: what has happened before, how can I go on working with it. This is no senseless continuous going forward, but the intentional revisiting of previously existing conditions in order to transform them further. Again and again building bridges to the past. I think this is very important: time goes forward, but in improvisation one consciously tries, again and again, to recur to that which was, so as to create points of reference.


