(Past research community @ UTS)

We had a weekly research soirée at 17:30 on Wednesdays at the Sense-Aware Lab CB06.05.48. Please email me if you would like to be added to the e-announcement list for regular updates on the programme. This is an informal opportunity to present and discuss current research.

Sense-Aware Lab for Interaction Design, Interactive Data Sonification, Physical Computing, Pervasive Computing, New Musical Interfaces + Experimental Sonic Interaction, Wearable Computing and Situated Media Installation. UTS Design CB06.05.548.

Our research interests include aesthetic and interactive data sonification focusing on the key areas of eco-data and bio-data. We investigate persuasive representation and gestural, multi-touch and physical interfaces for user-centred interaction with sonification, sound and music.


Kirsty Beilharz (gestural interaction, interactive sonification, new interfaces for musical expression), Jeremiah Nugroho (doctoral student, aesthetics and usability in wearable computing, cyborg technologies), Roger Mills (doctoral student, lecturer, distributed cognition and semiotics in networked musical improvisation), Ben Carey (doctoral student, interactive music performance systems), Vedad FamourZadeh (doctoral student, auditory flânerie, sonic installation, Deleuze, DeLanda, physics, Persian musical ontology), Elmar Trefz (doctoral student, experiential urbanism), Jiann hughes (doctoral student, a phenomenological approach to exploring breath through interactive artworks), Samuel Ferguson (lecturer in FEIT - formerly research assistant, audio concatenation, interactive sonification, acoustics), Claudia Alessia Calo (visiting research fellow, user experience evaluation), Nathan Wilson (composer, lecturer, music for machinima, modern structural approaches in composition), Aengus Martin (sound artist, programmer, lecturer, sound installation collaborator), Jon Drummond (senior lecturer Sound and Music Design, first-year coordinator, electro-acoustic and electronic music composer), Jenni Hagedoorn (doctoral student, identity creation in social media, urban screens with social media interaction), Mary Mainsbridge (doctoral student, composer, gestural interaction with interactive music systems, pianist).

Isobel hemispherical omnidirectional multichannel speaker


Polymedia Pixel MediaArchitecture Exhibition Vienna

Polymedia Pixel

Jeremiah Nugroho & Sense-Aware Lab

Multitouch multimodal data interaction sonification

Feedback Guitar

Fluid Velocity interactive installation






Windtraces (SxS 2011) & 'Interface' in Site-Specific Sound Installation


The Installation


Windtraces is a multi-channel, site-specific sound installation that was exhibited as part of the Sculpture by the Sea exhibition in Sydney in November 2011. It uses data from meteorological sensors as inputs to algorithmic processes to generate a dynamic soundscape in real time. Sculpture by the Sea is a large-scale art exhibition [1] that takes place each year on the coastal pathway between Bondi beach and Tamarama beach in Sydney, Australia. It is a free event and in 2011 attracted more than half a million visitors. Windtraces comprised a set of 14 loudspeakers distributed across a steep rock face, emitting sounds generated by algorithmic processes controlled by sensor data relating to meteorological conditions at the site. The first part of this article describes the conceptual, practical and artistic perspectives of the work in relation to large-scale musical interfaces.

Site (Location)

The Sculpture by the Sea exhibition was motivated by the scarcity of "seriously enjoyable cultural activities that are free and not fringe" and Sydney's "need for an accessible visual arts event" [2]. With respect to the aim of creating a popular and accessible public art event, the project has undoubtedly succeeded [3], as evidenced by its longevity, its consistently high visitor numbers and its re-creation at other locations [4]. It was an aim of Windtraces to make the connection between weather conditions and sound intuitively understandable to visitors while allowing for great variety in sonic output.

The Bondi-Tamarama coastal path location provided an abundance of natural plinths in the rock formations nearby [2]. However, certain locations are not suitable for siting object-based artworks, since they have no large flat surfaces and destructive attachments to the rock are not permitted. One such location is at the north side of Tamarama beach (see Figure 1a), a steep rock formation approximately 11 metres in height and 12.5 metres in length (see Figure 1b). This was the location of the Windtraces installation, which comprised a set of 14 loudspeakers distributed across the site in the crevices and fissures of the rock.

Site Rock

Figure 1. (a, left) The location of Windtraces and (b, right) the installation site

The site for Windtraces was particularly challenging for an audio installation due to environmental noise from the sea, the wind, and from human activity. The noise varied greatly in its type (bandwidth, frequency), level and timing, in turn also significantly affected by weather conditions. Of the weather conditions in the area, the founding director of Sculpture by the Sea wrote: "On top of this physical variety is the added effect of the weather, with everything from gorgeous calm days to stormy windswept cliff tops and huge seas" [2]. Our response to the variety of weather conditions and their associated noise characteristics was to use the weather to control the sound produced by the installation. In this way, the natural weather sounds could be used to support the artificial sounds produced by the installation, rather than potentially render them inaudible.

Site-Specific Weather Conditions

In order to investigate the local weather conditions that would control sound generation in the installation, historical weather data from locations near Tamarama beach were studied. Data recorded during November of previous years was used for prototyping. Figure 2 shows data obtained from the Australian Bureau of Meteorology, recorded during November 2010. There is no climatological station located at Tamarama beach itself, so data was sourced from the closest possible locations: rainfall data from the Rose Bay station and temperature data from the Wedding Cake West station.



Figure 2. (a) Daily rainfall at Rose Bay climatological station in November 2010 (0–40 mm) and (b) daily maximum temperature at Wedding Cake West station for the same period (0–26 °C)

Figure 2 shows only two of the meteorological measurements used in the installation; however, it illustrates a number of features of weather data that were taken into account in designing the control of the sound output of Windtraces. Firstly, while large changes are possible from one day to the next (particularly evident in the rainfall data), there may also be long periods of consistent conditions. Musically, this means that individual indicators cannot be relied upon alone to generate aesthetically interesting variety. Secondly, the meteorological measurements are not necessarily representative of perceived conditions: two days with identical temperatures may feel quite different, due to the interplay of other factors, e.g. wind or relative humidity. Thirdly, weather patterns recorded in November showed great inconsistency across different years: for example, there were eight days with more than 5 mm of rain in 2010, but only two in 2009. These observations were taken into account when designing the influences of weather data on sound generation in Windtraces.

Artistic Considerations and Context

Windtraces is technically a sonification work because it preserves a strict, factual relationship between the source data and its representation through a series of mapping processes. The range, values and data trends derive directly from site-located sensors. In addition, one of the most striking features of the SxS site is its spatial structure: a curving, contoured, overhanging rock face with reflective concave surfaces, as well as undulating horizontal axes. From its initial conception, the spatiality of the site was directly tied into the spatial rendering and perception of the work.

The idea of tracing alludes both to the spatial tracing of the rock and to the traces of ephemeral weather data. The tracing element and time-based calculations allow the work to present both an audible revelation of the current state, an aural representation of what someone at the site might feel in the environment on their skin, and less obvious informative deductions (history and forecast) that rely on information about elapsed events. Part of the immediate (gestalt) and engaging understanding depends on the listener 'hearing' what they are 'feeling' as a door to hearing things beyond what can be immediately perceived in the environment. Thus, as sonification, the objective was both to make audible the invisible and to offer an informative interpretation. The real-time (live) sonification of data captured by the weather station located at the site allowed this immediacy of representation and rapid responsiveness, with the aim of making the sonification meaningful and apparent to a transient, general audience who may have no technical experience of auditory display of data; i.e. the dynamic information representation needed to be interesting and explicit to the lay public. In contrast, sonification is typically employed by people who have expertise in the field of visual analysis and graphing techniques or in the subject of the data being sonified (or both).

Informative Ephemera: Attention Span and Accessible Sonification

Windtraces was installed in a public thoroughfare, which had the advantage of catching many passers-by but was also characterised by an ephemeral, moving flow of pedestrians who may not have much time to stop and interpret the representation. Thus, immediacy of engagement from an artistic perspective, and intuitive or gestalt understanding of any informative attributes, were requirements. The second consideration was the result of a potentially non-expert, non-analytical audience, for whom the artistic potentials (variation, trends, dynamism, time-of-day fluctuations, non-repetition, and an insight that the work's sound is site-specific and generated in real time, i.e. responsive and ever-changing) were critical features of the data that the Windtraces installation needed to convey in a brief encounter.

The nearly infinite variety of individual tastes and reactions was difficult to anticipate in such a situation; however, rhythm and spatial movement were selected as two communicative and intuitive means of mapping that may be easier to assimilate than, for example, fine gradations of pitch-mapping. Accessibility or availability is important for peripheral or ambient visualisation and sonification contexts. Ambient visualisation (of which sonification, or non-visual visualisation, is a subset) operates on the premise that the viewer/listener should be able to catch the 'gist' of an idea and immediacy of information without full attentive and analytical thinking. Changes and trends can be observed at the periphery of our attention, which suited the beachside setting and the possibility that certain people may be positioned alongside the installation for a long period of time, while others were simply walking past. Ambient visualisations/sonifications usually have decorative qualities and lend relatively high importance to aesthetics of the design due to their being interpreted as smart furnishings, intelligent wallpapers, informative interiors, smart building façades, or, in this case, audible landscaping, i.e. utilising artistic elements as well as highly functional ones.

The balance between engagement and invasiveness or annoyance is a sensitive one for public contexts. This affected the choice of the 'type' of sounds, their rhythm and location more than any kind of melodic representation. Windtraces used a variety of short, staccato sounds because they effectively convey rhythmic information, they are more easily audible in the presence of environmental sounds, and they are especially suitable for conveying movement when repeated sequentially in different loudspeakers.

The spatial movement of sound was controlled primarily by the wind-related parameters of speed and direction. For instance, wind direction maps clearly to the spatial trajectory of the distributed sounds (leftward or rightward, inland or seaward), while the speed at which onsets occur and are sequentially transmitted across the network of speakers correlates with the wind velocity being measured in real time. Hence, it was intended that passers-by might 'sense' and relate the visceral and kinetic qualities of sound and wind on the skin. Other sonic choices, such as the grouping of samples, their inherent clustering of key (tonality) and the timbral quality of the staccato sounds, are also controlled by data from other sensors, mindful of the generalist audience. For example, the sounds themselves were chosen to be readily accessible and identifiable; however, aspects of their filtering (timbral transformation) and their periodicity (both spacing and regularity or irregularity) are affected by sensed dimensions such as change of wind speed, perceived wind speed (a formulation derived from velocity, temperature and humidity, similar to the concept of wind chill) and more subtle dimensions like humidity, which alone are unlikely to be accurately palpable to human perception, although change and combination with other elements may be more readily recognised.

There was an aspect of hybridity to the sonification: the delineation of samples and spatial organisation were strictly data-driven and, in that sense, the work may be technically categorised as sonification; however, some compositional control was also exercised in the aesthetic selectivity and organisation of samples into groupings from which the computer program makes its selection, similar to the hybridity of algorithmic and compositional processes found in Bulley and Jones' Variable 4 (discussed later). Therefore, although the realisation was never predictable and was always determined by nature/data, some degree of artistic intervention anticipated the likelihood of convergent sounds being heard together, and the sound design involved a degree of metaphorical or figurative connection between the sample groupings and the weather conditions they represent: e.g. turbulent 'swooshing', noisy, timbral sounds evoking windy conditions; more ambient and pitched naturalistic sounds evocative of still early morning; pitched metallic gong sounds with complex spectra defining transitions between time- and weather-states at important junctures; 'wet', droplet and somewhat literalistic sounds to depict rainy conditions. Sound sample groupings were, however, never completely representational or naturalistic, containing a blend of metaphorical (subjective, according to the designers) and synthetic ('contemporary') machine sounds or 'industrial' noises interspersed with natural sounds, because the installation was never intended to be purely a soundscape. Ultimately, the weather data determines the deployment of sounds: their selection and delivery in terms of location, rhythm and tempo.
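The grouping logic described above can be sketched as a simple categorical mapping from sensed conditions to sample groupings. All thresholds, state names and sample names below are illustrative assumptions; the actual installation implemented this logic in Max, not Python:

```python
# Hypothetical sketch of weather-state-driven sample-group selection.
# Thresholds and names are assumptions, not the installation's values.

def weather_state(wind_ms, rain_mm_hr, hour):
    """Classify current conditions into a coarse, named weather state."""
    if rain_mm_hr > 0.5:
        return "rainy"          # 'wet', droplet-like, literalistic sounds
    if wind_ms > 8.0:
        return "windy"          # turbulent 'swooshing', noisy timbres
    if hour < 8 and wind_ms < 2.0:
        return "still_morning"  # ambient, pitched naturalistic sounds
    return "neutral"

# Each state selects a grouping of samples from which the program
# then makes its (data-driven) selections.
SAMPLE_GROUPS = {
    "rainy": ["droplet_01", "droplet_02", "wet_tap"],
    "windy": ["swoosh_01", "noise_band", "swoosh_02"],
    "still_morning": ["dawn_tone", "soft_bell"],
    "neutral": ["gong_lo", "gong_hi"],  # gong samples mark transitions
}
```

In such a scheme, a change of returned state would be the juncture at which the metallic gong sounds marking transitions between time- and weather-states could be triggered.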

Spatial Composition and Vantage Point

Post-war twentieth-century music has many examples of works that integrated sound design with the spatial distribution of loudspeakers in a site-specific context. Edgard Varèse's Poème Électronique was composed specifically for its first performance in the multimedia Philips Pavilion of the 1958 Brussels World Fair. The pavilion consisted of a series of hyperbolic paraboloids tensioned by steel cables. Iannis Xenakis' audio-visual work La Légende d'Eer (1977-78) is a site-specific spatial audio work [5]. From these early but seminal examples, Windtraces has taken the inspiration of site-specific sound paths ('traces') and a pointillistic speaker distribution following curving contours, which in this case form the flowing natural sandstone rock surface of the cliff-face at Tamarama beach.

In Windtraces, loudspeakers are treated as point sources of sound in a spatial configuration rather than as an array to be audited from a singular privileged position (the 'sweetspot', as occurs in Wavefield Synthesis for example). The practical reason for this is that our audience is likely to be dispersed as well as in motion. Though Wavefield Synthesis has been used in an installation setting in this sort of scenario [6], the most robust (if somewhat restrictive) way of presenting spatial audio is by treating each loudspeaker as a point source: a sound is played from one speaker at a time. In this case, it forms a compositional element in which spatial audio functions as a gesture with two spatial attributes: position (location), and motion (the current location of a sound with respect to its previous location).

Technical Configuration

The technical set-up for Windtraces is as follows (see Figure 3). Local meteorological conditions are sensed using an Oregon Scientific WMR100N [7] weather station with its standard sensors and an additional solar radiation meter (Oregon Scientific UVN800). The weather station is connected to an Apple Mac Mini running three pieces of software concurrently: Weathersnoop [8], a commercial program for collecting data from a connected weather station; the Windtraces generative software (WGS), which controls all sound generation and spatialisation in the installation; and an instance of the Windtraces synthesis software (WSS), which synthesises eight channels of audio as directed by the WGS. A second instance of the WSS runs on a second Apple Mac Mini, connected by Ethernet. Each Mac Mini is connected to an 8-channel sound card, to output up to 16 channels of audio in total (14 were implemented at SxS). A set of three 6-channel amplifiers (Ashly Powerflex 6250) sends output to a set of 14 JBL 'Control 25' weatherproof loudspeakers.

Mapping Weather Data to Sound Material

The WGS and WSS were developed using the Max interactive platform [9]. The WSS simply plays back short, recorded samples according to instructions from the WGS relating to timing, choice of sample, processing of the sample, and the loudspeaker from which it is played in the spatial configuration. Timing and loudspeaker choice are controlled mainly by wind-related parameters. The sample and audio-processing parameters map data related to daily and instantaneous rainfall, instantaneous ultra-violet radiation intensity, local temperature, pressure and relative humidity. In order to draw concrete connections between sounds and weather conditions, perceptually informed quantities are calculated from this data, e.g. the heat index [10]. This quantity is more closely related to perceived temperature than a simple temperature measurement. In addition, numerically derived, categorical representations of weather conditions ('hot and sunny', 'windy and cloudy') are used to select between different collections of sound material, ensuring that different conditions result in clearly distinct sonic results.
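The heat index is one example of such a perceptually informed quantity. One widely used approximation is the Rothfusz regression published by the US National Weather Service; the sketch below illustrates this kind of derived 'feels like' quantity and is not the installation's actual code:

```python
def heat_index_f(temp_f, rel_humidity):
    """Approximate heat index ('feels like' temperature) in degrees
    Fahrenheit using the NWS Rothfusz regression. Valid roughly for
    temp_f >= 80 and rel_humidity >= 40; below that range the plain
    air temperature is the better estimate."""
    t, rh = temp_f, rel_humidity
    return (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t * t
            - 5.481717e-2 * rh * rh + 1.22874e-3 * t * t * rh
            + 8.5282e-4 * t * rh * rh - 1.99e-6 * t * t * rh * rh)

# e.g. 90 °F at 70 % relative humidity feels like roughly 106 °F
```

A quantity like this varies with humidity as well as temperature, so two days with identical measured temperatures can sonify differently, matching the perceptual point made above.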


Figure 3. The hardware and software infrastructure for Windtraces.


In Windtraces, the movement of each sound across the rock is controlled by a finite state grammar (see, e.g. [11]). The loudspeakers are located in the crevices of the rock surface (Figure 4a). These crevices form natural contours that the spatial movement of sounds is intended to evoke. When a sound is introduced, it is played from a specific loudspeaker and then in linear sequence across a number of adjacent speakers, creating a movement following a path around the rock. These paths are probabilistically selected by a finite state grammar (Figure 4b). Each state corresponds to a particular speaker and is connected to between one and three other states. There is a discrete probability distribution associated with each state, which describes the probabilities of subsequent states.

spatial layout

Figure 4. (a) Speaker locations on the rock surface, and (b) A finite state machine showing the correspondence between states and speakers, and an example set of probability distributions.

Each wind direction is mapped to a set of probability distributions. For example, when there is a sea breeze (i.e. wind coming from the right-hand side of the picture in Figure 4a), probabilities are configured so that sounds tend to originate from loudspeakers on the right and move leftward. Wind speed is mapped to two control variables: the interval between the time a sound is played from one speaker and the time it is played from the next; and the rate at which new sounds arise. A variety of different types of movement can be evoked using different time intervals and sets of probability distributions, e.g. to create clear wave-like motions from one side to the other, or complex scenes with many sounds following their own random paths around the network of speakers.
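The finite-state mechanism described above can be sketched as a weighted random walk over the speaker network. The transition weights, speaker count and timing range below are illustrative assumptions; the real grammar used hand-tuned, wind-dependent distributions implemented in Max:

```python
import random

N_SPEAKERS = 14  # loudspeakers distributed across the rock face

def leftward_transitions(n=N_SPEAKERS):
    """Illustrative transition table biased leftward, as for a sea
    breeze arriving from the right-hand side of the site."""
    table = {}
    for s in range(n):
        choices = {}
        if s - 1 >= 0:
            choices[s - 1] = 0.7  # mostly drift toward the left
        if s + 1 < n:
            choices[s + 1] = 0.2  # occasionally step back rightward
        choices[s] = 0.1          # or repeat in place
        total = sum(choices.values())
        table[s] = {k: v / total for k, v in choices.items()}
    return table

def sound_path(table, start, length, rng=random):
    """Walk the finite-state grammar: one speaker index per onset."""
    path, state = [start], start
    for _ in range(length - 1):
        states = list(table[state])
        weights = [table[state][s] for s in states]
        state = rng.choices(states, weights=weights, k=1)[0]
        path.append(state)
    return path

def onset_interval_s(wind_speed_ms, lo=0.08, hi=0.6):
    """Faster wind -> shorter gap between successive speaker onsets
    (input clamped to an assumed 0-20 m/s range)."""
    w = max(0.0, min(wind_speed_ms, 20.0)) / 20.0
    return hi - w * (hi - lo)
```

Under these assumptions, a sea breeze would start sounds near the right-hand speakers and step them leftward at intervals set by the measured wind speed, producing the wave-like motions described above.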


For outdoor (especially) and public-space installation, flexibility emerges as one of the essential qualities of a large-scale work, often installed over a period of days or weeks, which needs to accommodate a range of conditions and responses and remain interesting for potentially multiple or sustained encounters. The aetiology of changing conditions is complex, including varying physical and environmental conditions, ambient sound levels, and fluctuations in audience attendance, size and listening distance. The requirement for flexibility can also stem from constraints and regulations, such as local sound limits at different times of day or, in the case of sonification, from the variance in the data determining the rendition. Aesthetic considerations with regard to flexibility include the wide gamut of potential listeners in public fora, who may range from attentive experts to people unfamiliar with the modality, genre and style, with a wide array of musical tastes. Interactive works need to be capable of a morphing response, adapting to the range of unpredictable encounters with the work, including physical conditions but also children, adults, and varying durations of engagement with the interface. This relates to the distinction between 'interface' and 'instrument'.

Differential Attributes of Interface vs. Instrument

Contentiously, one might differentiate the attributes of a musical instrument as including the likelihood that its interface has sufficient nuance, subtlety (even difficulty) and refinement that a performer typically practises and aims to improve their rapport both with its musical expression and, as a means to that end, through technical fluency and familiarity in performing. Musical instruments are usually (though not always) individual or personal interfaces for one-on-one interaction, therefore also often on a human scale or smaller than installation interfaces from a physical, ergonomic perspective (some large percussion instruments perhaps being exceptions), often with a high-intensity (attentive) but short-duration timeframe of engagement. Thus, we are accustomed to the performer being a relative 'expert'. By comparison, a large-scale installation interface is designed for public consumption and needs to be 'playable' or listenable by relative novices: satisfying enough on a first encounter, and engaging enough with little expertise, to allow the audience to experience the music. It can be interactive, giving agency for musical creation/expression to the public participant, or, as in the case of Windtraces, it can be a non-interactive rendering, e.g. a sonification, an automated process, or a pre-formed composition. Others have looked at the ingredients of successful engagement: playfulness, game-like qualities, the balance between occult and explicit procedures of operation (being able to discern or not 'how it works'), curiosity and novelty vs. familiarity, or randomness, alienation and unpredictability vs. comprehensibility or the fathomable factor [12, 13, 14]. Paine and Drummond, amongst others, have also developed ontologies and taxonomies for classifying real-time interfaces for electronic music performance [15, 16].
The (outdoor) site-specific installation can be thought of as a multi-user interface with potentially many vantage points or a dynamic, mobile public audience, which, in turn, affects technical approaches such as spatialisation. The range between passive listener and participant is a sliding scale determined by the designer: at one end, with the audience interaction essential to the actuation of the interface; and at the other, providing a passive interface, controlled by the designer or something external, such as a data-source.

Related Cases

Garth Paine's Reeds Installation

Garth Paine's Reeds also garners weather data (in this case from weather stations located in two of the pods among the reeds), from which the 8-channel music is composed in real time in SuperCollider and relayed back out to six pods in the pond. Paine's work was installed in the Ornamental Lake of the Royal Botanic Gardens, Melbourne (2000), sounding daily from dawn till dusk. A gallery version was subsequently shown at NIME03 in Montreal and at ICAD04 in Sydney. Referring chiefly to the outdoor situated installation, Paine's work confronted similar 'interface' considerations to Windtraces: the open-air dispersal of sound; the non-privileged position of the listener, that is, a constantly changing listening perspective with no one position that privileges all pods (speakers) equally, so that the spatialisation of sound is treated as point sources; and an interface that includes the environmental acoustics provided by ambient sounds of wildlife, wind on the lake, reeds blowing and acoustic wind-drift, and that intentionally utilises diffraction off the highly reflective water surface. These same qualities that influence the way in which the work is heard, i.e. shape its interface, come from the environment that the listener would perceive (on their skin) and provide the source for the data: wind speed, wind direction, temperature, and solar radiation. In Paine's design, the application software analyses this weather data, dynamically scales it, and sends it via MIDI continuous controller messages to SuperCollider, in which he runs six audio synthesis algorithms [17]. Like Windtraces, Paine's Reeds installation utilises a remote controller ('mother') computer and, in this case, a wireless system of broadcast to the amplifiers and a multidirectional (5-speaker) waterproof array in the lake. The agency altering the compositional process here lies with the weather, and the site forms the interface.

As Darren Tofts (Chair of Media and Communication, Swinburne University of Technology and author) writes on Reeds, "At a time when virtual reality designers are seeking to remove the body from immersive experience, Garth Paine's responsive, activated environments (Ghost in the Machine, Footfall, Map 1) make physical presence indispensable to the subtle invocations of the virtual. His work explores the intimate, symbiotic relationships between presence and technology, the spectacle of the body in and impacting upon space. … Reeds is Garth Paine's most elaborate development of this poetic [relationships of indeterminate outcomes], in which the organic and the technological synthesise into a hybrid ecology. Paine is interested in combining the organic and the digital in a conceptual as well as spatial sense" [17]. Thus, situated media or site-specific installation creates a convergence of interface and spatial rendering. Similar to the authors' conception of Windtraces as a (larger-than-human-scale) outdoor instrument played by meteorological activity, Tofts writes of Reeds, "Responding to environmental information, such as light intensity, wind velocity and temperature fluctuations, the reed clusters play the environment like a musical instrument, a techno Aeolian harp or photosynthesiser, that amplifies the invisible and inaudible" [17]. Arguably, the environment plays the instrument, rather than the reciprocal, and the environment forms part of the instrument: its sounding board.

James Bulley and Daniel Jones' Variable 4 Outdoor Installation

Variable 4 is an 8-speaker outdoor sound installation that translates weather conditions into musical patterns in real time. It has been installed at Elizabeth Castle (Branchage), Snape Maltings (Suffolk) and Dungeness (Kent). "Using meteorological sensors connected to a custom software environment, the weather itself acts as conductor, navigating through a map of 24 specifically written movements. Every aspect of the piece, from broad harmonic progressions down to individual notes and timbres, is influenced by changes in the environment: wind speed, rainfall, solar radiation, humidity, tropospheric variance, temperature, and more. … Linking together the sensor data and scored motifs is an array of algorithmic processes drawn from the natural world, modelling phenomena such as tree growth, swarm theory and evolutionary development. The resultant composition is performed over a 24 hour duration through a field of 8 speakers integrated into the landscape" [18]. By 24-hour duration they mean continuous, in fact of indefinite length. Installed in an open field, this implies that the audience will likely be ephemeral and transient, never hearing the total composition and, one might suppose, far more likely to attend during certain hours of the day. The loudspeakers, "integrated in the landscape", incorporate into their acoustic environment the concomitant rustling of grasses, 'swishing' crops, noisy breeze and the natural dispersion of an outdoor environment with few hard or reflective surfaces.

Though very much concerned with process itself and with a high degree of hybridity between pre-composition and algorithmic generation, the video documentation nevertheless reveals the interplay of the contextual environment, not only as data-source but as 'sounding board' or interface for realisation and listening. One of the videographers describes the weather as the 'composer' [18] (part-composer, realistically), in other words the agent of organisation and selection. Each movement or section has musical attributes determined by its weather state: e.g. its key signature, tempo and metre. This is very similar to the Windtraces process of selecting clusters or groupings of samples that have been coordinated in some musical way, although, in reality, cluster selection and subsequent rhythmic and spatial diffusion are weather-driven. The composers of Variable 4 follow a harmonic cycle that ensures a harmonious and consonant succession of sections and transitions between them, as is their aesthetic goal. If the weather changes quickly, causing the location to skip between distant movements, the piece enters a 'wormhole': an arrhythmic and often atonal bridge, which serves to join two unrelated musical elements [18]. These 'wormholes' are similar to the gong-like transition passages in Windtraces, transitions between sonification-driven sections allowing a musical intervention in the fluctuating patterns of nature and algorithmic aleatory.

Variable 4 employs algorithms to encode musical behaviours, "capable of recombining existing material and generating entirely new sequences of notes" [18], serving to increase the scope of potential patterns multifold and enabling the designers/composers to respond to fine details in the meteorological conditions. The structures used by Bulley and Jones draw from mathematics, statistics and biology (e.g. a Lindenmayer-system (L-system) branching tree structure), stochastic principles and Markov chains. They stratify layers according to sectional durations, or shorter and longer proportions, to enable immediacy, e.g. quick-response short segments correlating with rapid, superficial changes in weather conditions. This is equivalent to the use of changes in wind direction and velocity in Windtraces, compared with dimensions such as temperature change, UV change and humidity change that may only be observed (perhaps not palpably) over much longer periods of time. Periodicity, rate of diffusion and pulse in spatialisation in Windtraces feel like rhythm and parallel the alteration of tempo applied in Variable 4.

David Bowen's Tele-Present Water Remote-Data Gallery Installation

Worth noting briefly is David Bowen's Tele-Present Water, an installation that draws information from the intensity and movement of the water in a remote location, hence the "tele-" reference to data collected in the environment and relayed to the gallery installation. While the interface in this case does not include the outdoor acoustic and physical, material space, the work effectively creates a 'simulation' or partial recreation of gestures from real-world data within the context of the gallery. To this end, the audience and spatial environment still form an integral part of the physical, immersive experience of the work and its 'sounding board' as a musical instrument expressing weather agency. "Wave data was collected in real-time from a National Oceanic and Atmospheric Administration data buoy Station 46246 (49°59'7" N 145°5'20" W) on the Pacific Ocean" [19, 20]. "The wave intensity and frequency are scaled and transferred to the mechanical grid structure installed at The National Museum in Wroclaw, Poland. The result was a simulation of the physical effects caused by the movement of water from this distant location" [20]. Bowen has another related tele-sonification/simulation work, Tele-Present Wind, which uses accelerometer data from Minnesota to operate an installation in Moscow.
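The scaling described in the quotation above is, at heart, a linear range mapping from buoy readings to actuator commands. A minimal sketch, with invented ranges (NOAA buoys do report quantities such as wave height and period, but the actuator ranges here are hypothetical):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into an actuator range, clamped to bounds."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Illustrative only: wave height (m) -> grid-rod deflection (degrees),
# wave period (s) -> actuation rate (Hz). All ranges are invented.
deflection = scale(2.5, 0.0, 10.0, 0.0, 45.0)   # 11.25 degrees
rate = scale(8.0, 2.0, 20.0, 0.1, 2.0)
```

Clamping matters for an installation running unattended: a freak reading outside the expected range should saturate the mechanism, not over-drive it.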


The Windtraces installation has been presented in detail as the vehicle for discussion, but importantly the key points it raises are pertinent to all site-specific sound installation, and particularly to outdoor, public-space installation, which needs to address changing conditions, ambient sound levels and predictably unpredictable elements.

Installation interfaces require a high degree of design contingency: to endure the longevity of the installation, to safeguard against public damage, to withstand and express effectively in a variety of weather conditions, and to provide variety and scope. In the NIME community, the "I" in NIME often focuses on instrument design (new instruments, hyper-instruments, digital instruments, augmented instruments, even virtual instruments). Hence this paper interrogates the paradigm of the interface in site-specific installation, with special attention to the role of site and situated response.


[1] Sculpture by the Sea official website. Accessed online at, September 21 (2011).
[2] Handley, D. History: Sculpture by the sea. Accessed online at, September 21 (2011).
[3] Stenglin, M. Making art accessible: opening up a whole new world. In Visual Communication, 6:2 (2007), 202-13.
[4] Scarlett, K. From Bondi to Aarhus: Sculpture by the Sea. In Art Monthly Australia, No. 222, Aug (2009), 18-20.
[5] Xenakis, I. La Légende d'Eer. France (music score): Montaigne (ed. 1995).
[6] Leslie, G., Schwarz, D., Warusfel, O. and Bevilacqua, F. Wavefield synthesis for interactive sound installations. In Proceedings of the 127th AES Convention, New York (2009).
[7] Oregon Scientific WMR100N Weatherstation. Accessed online at, September 21 (2011).
[8] WeatherSnoop software application. Accessed online at, September 21 (2011).
[9] Puckette, M., Zicarelli, D. et al. Max/MSP software application. San Francisco CA, Cycling '74 (1990-2010).
[10] Steadman, R.G. The Assessment of Sultriness. Part I: A Temperature-Humidity Index Based on Human Physiology and Clothing Science. In Journal of Applied Meteorology, 18:7 (1979), 861-873.
[11] Roads, C. Grammars as Representations for Music. Computer Music Journal, 3:1 (1979), 48-55.
[12] Edmonds, E. Art, Interaction and Engagement. In Candy, L. and Edmonds, E. (Ed.s) Interacting: Art, Research and the Creative Practitioner, Libri Publishing, U.K. (2011).
[13] Costello, B. Many Voices, One Project. In Candy, L. and Edmonds, E. (Ed.s) Interacting: Art, Research and the Creative Practitioner, Libri Publishing, U.K. (2011).
[14] Bilda, Z. Designing for Audience Engagement. In Candy, L. and Edmonds, E. (Ed.s) Interacting: Art, Research and the Creative Practitioner, Libri Publishing, U.K. (2011).
[15] Paine, G. and Drummond, J. Developing an Ontology of New Interfaces for Realtime Electronic Music Performance. In Proceedings of the Electroacoustic Music Studies (EMS), Buenos Aires (2009).
[16] Paine, G. and Drummond, J. TIEM Survey Report: Developing a Taxonomy of Realtime Interfaces for Electronic Music Performance. In Proceedings of the Australasian Computer Music Conference (ACMC), QUT Brisbane (2009).
[17] Garth Paine official website. Accessed online at, February 4 (2012).
[18] Bulley and Jones' Variable 4 official website. Accessed online at, February 4 (2012).
[19] Video of Bowen's Tele-Present Water. Accessed online at, February 4 (2012).
[20] Bowen's Tele-Present Water official website. Accessed online at, February 4 (2012).


Diffuse no.6








Postgrad Talk to Creative Practices & Cultural Economy: What Examiners Want




Live Weather-data Test

Live weather-data test for the Bondi Sculpture by the Sea live responsive weather-data sonification site-specific sound installation by Aengus Martin and Kirsty Beilharz. Screen-shot (right) from the WeatherSnoop iOS iPhone app.





Inaugural 'diffuse' concert - UTS Bon Marche Studio

Ros Dunlop (bass clarinet, clarinet) & Jon Drummond (composer, sound artist, programmer)

Prism by Kirsty Beilharz
(bass clarinet, electroacoustic)

Breath Resonance by Jon Drummond
(bass clarinet & interactive electroacoustics)

Mare Vaporum by Jon Drummond
(electroacoustic multichannel sound diffusion)

Papua Merdeka by Martin Wesley-Smith
(clarinet, audio-visual, electroacoustic)

Season launch by Prof. Theo van Leeuwen
(Dean of the Faculty of Arts and Social Sciences) and
Assoc. Prof. Paula Hamilton
(Director of the Centre for Creative Practice and Cultural Economy research strength)

Ros Dunlop

Ros Dunlop is one of Australia's leading clarinettists/bass clarinettists. She has been a strong advocate of new music for the clarinet and bass clarinet all her professional life. She has commissioned many Australian composers and premiered many new compositions by composers worldwide. Ros has given solo concerts throughout Australia, New Zealand, Canada, the UK, Europe, Japan, Hong Kong, East Timor and the USA. Collaborations with composer Martin Wesley-Smith took her to many of the places mentioned above on several concert tours as well. She has performed in many festivals, including the Sydney Festival, the Totally Huge New Music Festival and several international clarinet festivals. Her CDs have received international acclaim. After a concert tour to East Timor in 2002, she began a project to recover the traditional music of East Timor, a deeply hidden culture, through audio-visual recordings for future generations. She is currently undertaking a PhD at Newcastle University on the music of East Timor. Ros is a founding member of the clarinet trio Charisma, with whom she has commissioned and premiered many new works, including many multimedia premieres. This ensemble explores the extensive repertoire for clarinet trio from 1900 to the present day. Ros is on staff at the Sydney Conservatorium of Music, teaching clarinet.

Dr. Jon Drummond (b. 1969) is a Sydney-based composer, sound artist, academic and programmer. His creative work spans the fields of instrumental music, electroacoustic, interactive, sound and new media arts. His electronic and ensemble compositions have been performed at numerous Australian and international festivals and galleries, including the Adelaide Festival, the International Computer Music Conferences and the World Forum for Acoustic Ecology. He is currently a researcher at MARCS Auditory Laboratories, University of Western Sydney, Australia, where his research interests include human-computer interaction design, new interfaces for musical expression, gesture analysis, improvisation, sound spatialisation and data sonification.

As a composer, Martin Wesley-Smith's main interests are computer music, audio-visual works and choral music, although he also composes chamber music, orchestral music, children's songs, music theatre, and music for film, revue etc. He's an eclectic composer at home in a diverse range of idioms. Two main themes dominate his music: the life, work and ideas of Lewis Carroll (e.g. Snark-Hunting, Songs for Snark-Hunters, and the full-length choral music theatre piece Boojum!) and the plight of the people of East Timor (e.g. Kdadalak (For the Children of Timor), VENCEREMOS!, and Welcome to the Hotel Turismo). A radiophonic version of the "audio-visual music theatre" piece Quito (about schizophrenia and East Timor, 1997), for which he was awarded the Paul Lowin Song Cycle Composition Award, has been released on CD by Tall Poppies Records. One of his pieces - For Marimba & Tape - is the most-performed piece of Australian so-called "serious art-music" (it exists in versions for other instruments, too, including For Clarinet & Tape), while several of his children's songs (e.g. I'm Walking in the City) have become classics on such television programs as Play School. Wesley-Smith was born in 1945 in Adelaide (South Australia). He studied at the Universities of Adelaide and York (England) before taking up a position lecturing in composition and electronic music at the Sydney Conservatorium of Music. In 1988 he was the Australia Council's Don Banks Fellow; in 1997 and 1998 he held an Australia Council Fellowship. In 1998, Martin Wesley-Smith was admitted as a Member (AM) in the General Division of the Order of Australia for services "to music, as a composer, scriptwriter, children's songwriter, lecturer, presenter of multi-media concerts and a member of various Australia Council boards and committees".
After 26 years' teaching at the Sydney Conservatorium of Music, and finding himself increasingly disenchanted with the direction in which it was heading, Wesley-Smith left in July 2000. He is now living in Kangaroo Valley, New South Wales, attempting to supplement his meagre income from composition by growing vegetables and raising ducks (both, alas, fairly futile activities, what with the predations of snails, slugs, bower birds, foxes etc. and the absence of a green thumb ...).

Kirsty Beilharz is an internationally recognised composer whose music has been performed by ensembles including Sydney, Melbourne, Tasmanian and Western Australian Symphony Orchestras, Nouvel Ensemble Moderne Montreal, Ensemble Recherche Freiburg, the Australian Chamber Orchestra, Seymour Group, and AustraLYSIS. Her music has been performed at the Gaudeamus World Music Days Amsterdam, Paris Rostrum, and Hannover Biennale. Beilharz was selected for the IRCAM Electronic Music Course, Matsumae International Science Research Fellowship (AI Lab Tokyo), an Asialink Arts Residency, Young Australian of the Year Finalist, Churchill Fellowship, Sir Charles Mackerras Prize of the British Council, and has won composition prizes in the Jean Bogan Piano Composition Award, the World Bass Clarinet Composition Competition, & Angoulême Colloque International du Basson. She completed her Ph.D in music at the University of Sydney (1996) and post-doctoral studies at the University of York. Kirsty Beilharz is Professor of Music, Sonification and Interaction Design, conjointly in the Faculty of Arts & Social Sciences and the Faculty of Design, Architecture & Building at the University of Technology, Sydney, and course director of the Bachelor of Sound and Music Design degree. Beilharz's research integrates music and generative processes applied to sound, real time audio-visual interaction and data sonification exploring gestural interaction using multimodality, physical computing, wearable technologies, hyper-instruments, and aesthetics and interactivity in sonification, with a special interest in the representation of bio-data and eco-data. She played violin for 25 years and is currently a practitioner of the Japanese shakuhachi (bamboo flute).


Kirsty Beilharz: Prism for Bass Clarinet and stereo electroacoustic spatial diffusion.
(NSW premiere)

This piece was composed for Ros Dunlop, in Sydney, 2011. The title, Prism, refers to diffusion, diffraction, refraction, spectrum, and the revealing of the inner world of the bass clarinet. Recorded bass clarinet sounds played by the performer were used in the generation and interpolation of the electroacoustic spatial diffusion. Prism widens the image of the acoustic bass clarinet physically (through the spatial image), spectrally by broadening the gamut of audible sounds, timbre and frequency range, achieved through time-stretch and transposition treatments, and synthetic processing of the recorded bass clarinet. Thus, we hear bass clarinet sounds that were previously hidden. The spatial treatment serves to augment the performer in ways not humanly possible, creating at times, through sampled clarinet at the polar extremes of the stereo image, antiphony and polyphony between the live performer and the mirrors/refractions of her sound. The live performer is the focal point, scrutinised by the musical microscope, but also burst open. The non-clarinet sounds utilise two spatio-musical approaches to gesture: locality – static, directional sounds identified by a distinct point in space; and movement – heard in the sweeping motion of sounds that dynamically move through the space. Spatial location and movement are seen as extensions of the performer's gestural vectors. Thus, mirrors or multiples (untreated clarinet sounds) are fixed at "points" in space, like other personalities, while sounds characterised by distinctive timbre and register move and morph around the performer. The optional use of grand piano with depressed sostenuto pedal to create analogue reverb is an homage to Pierre Boulez and his ground-breaking electroacoustic works like Dialogue de l'ombre double (1985, dedicated to Luciano Berio) and Domaines (1961-8, with a flexible structure) that remain cornerstones of electroacoustic clarinet repertoire today. 
Other inspirations include Swiss composer, Michael Jarrell's Assonance III for bass clarinet, cello and piano (1989) and Assonance II for solo bass clarinet, and Helmut Lachenmann's Allegro Sostenuto (1986-8) for clarinet trio that explores the amplified inner world of timbral nuance, extended techniques and sound envelopes. Multiphonic (chord) notations in the bass clarinet part refer to Phillip Rehfeldt's New Directions for Clarinet (New Instrumentation) (1994: revised edition, Scarecrow Press).
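The locality/movement distinction in the Prism programme note above could be realised, for instance, with an equal-power stereo pan law: a fixed pan position for 'mirror' sounds at points in space, and a swept position for sounds that move through the space. This sketch is illustrative only, not the actual diffusion patch.

```python
import math

def pan_gains(position):
    """Equal-power stereo pan. position in [-1, 1]: -1 = hard left, +1 = hard right.
    Returns (left_gain, right_gain) with constant total power."""
    theta = (position + 1.0) * math.pi / 4.0    # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Locality: an untreated 'mirror' fixed at the right pole of the image.
mirror_l, mirror_r = pan_gains(1.0)

# Movement: sweep a treated sound left-to-right over n control frames.
def sweep(n):
    return [pan_gains(-1.0 + 2.0 * i / (n - 1)) for i in range(n)]
```

The equal-power (rather than linear) law keeps perceived loudness steady as a sound moves, which matters for the 'sweeping motion' gestures more than for the static ones.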

Jon Drummond: Breath Resonance for Bass Clarinet and interactive electroacoustics.
(Premiere performance)

The interactive electroacoustics used in Breath Resonance are created through an underlying virtual model of a reed instrument. This "hybrid" virtual instrument is controlled by modifying virtual physical parameters such as tube length, tube width and breath pressure. During the performance, the "real" acoustic bass clarinet sounds are analysed with respect to tone colour, volume envelopes, frequency and spectral content. These sonic gestures are then mapped by the computer to performance parameters for sonification by the "virtual clarinet". Of course, the virtual instrument does not have to conform to the physical constraints of the real world.
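One way such a feature-to-parameter mapping might look, purely as a hedged sketch: the feature ranges, constants and the use of the closed-pipe relation L = c/(4·f0) are my assumptions for illustration, not Drummond's actual patch.

```python
from dataclasses import dataclass

@dataclass
class VirtualReed:
    """Hypothetical control parameters for a virtual reed instrument."""
    tube_length_m: float
    tube_width_m: float
    breath_pressure: float   # normalised 0..1

def map_features(rms, centroid_hz, f0_hz):
    """Map live analysis features (loudness, brightness, pitch) to the
    virtual instrument. All scaling constants are invented."""
    # Louder playing -> more virtual breath pressure (clamped).
    pressure = max(0.0, min(1.0, rms * 4.0))
    # Detected fundamental sets tube length via the closed-pipe relation
    # L = c / (4 * f0), with c = 343 m/s (a clarinet acts as a closed pipe).
    length = 343.0 / (4.0 * max(f0_hz, 50.0))
    # Brighter tone (higher spectral centroid) -> narrower tube.
    width = 0.08 - 0.05 * min(centroid_hz / 5000.0, 1.0)
    return VirtualReed(length, width, pressure)
```

Because the model is virtual, nothing stops the mapping from exceeding physical plausibility, e.g. a metres-wide tube, which is precisely the freedom the programme note points to.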

Jon Drummond: Mare Vaporum for electroacoustic multichannel sound diffusion.
(Premiere performance)

This is a newly composed electroacoustic work intended for live diffusion over a multichannel speaker array. To create the work I have drawn on a small set of samples (a few minutes' worth) I created in the early 1990s using a UNIX mini mainframe computer (VME-bus) housed in the Sydney University Experimental Sound Studio beneath the Seymour Centre theatres. The music software running on this computer was developed by Ian Fredericks, with contributions by Tony Furse (Qasar M8 synthesiser and Fairlight CMI) and Bruce Ellis. Ian's software system (Iansmuse) was a text-based language for synthesis and multichannel spatialisation. Typically your sound programme would be typed up and left to run overnight; the few minutes of sound generated could then be listened to the next day (if you were lucky) and the process repeated, such a contrast to working with sound some 20 years later! Although I have restrained myself to this small set of source material, I have transformed it liberally in creating this new work. I am happy to make the original samples available if of interest.

Martin Wesley Smith: Papua Merdeka for clarinet, electroacoustics and visual media.

The 1969 UN-sanctioned "Act of Free Choice" that handed the Dutch colony West Papua to Indonesia was a sham, an act of no choice for the West Papuan people. Since then, Indonesia has treated the territory as it did East Timor, with rampant human rights abuse as well as exploitation, in collusion with America and others, of West Papua's rich natural resources. This piece is about the West Papuan people and their thirst for freedom. Almost all the sources I've used in creating it were begged, borrowed or stolen from others. They include Agence France Presse, the Australian Broadcasting Corporation's 2JJJ, Penny Beaumont, Sheila Draper, Don Bennetts, Gerry Errante, Steven Feld, Lynne Hamilton (of Prowling Tiger Press in Melbourne, who published "West Papua: Follow the Morning Star" by Ben Bohane, Jim Elmslie and Liz Thompson, an inspiring book of superb texts and photographs), David Kirkland, Jonny Lewis, Robert Lowry ("Shall We Gather at the River?"), Jonathon Mustard, SBS News, Edward Smith and Alice Wesley-Smith - my thanks to all these plus to all those whose names I don't know or contact addresses I can't find. Apologies to those whose names have been inadvertently omitted. Thanks, too, to David Bridie, Louise Byrne, Andrew Kilvert and Rob Wesley-Smith. Two other books provided valuable information: Jim Elmslie's "Irian Jaya Under the Gun" (Crawford House Publishing (Australia) Pty Ltd) and Peter King's "West Papua Since Suharto" (University of New South Wales Press). I used the beautiful West Papuan anthem Hai Tanah Ku Papua. Flags, used with permission, came from. Most of the bird of paradise paintings were by Ellis Rowan (1848-1922). Finally, thanks to Ros Dunlop for commissioning the piece. And, for funding assistance, to the Music Board of the Australia Council, the Australian Government's arts funding and advisory body, to which many thanks.

Special thanks to performers Ros Dunlop & Jon Drummond for their generous participation, the Faculty of Arts and Social Sciences, Paula Hamilton & the Centre for Creative Practices and Cultural Economy & Catherine Baird, Theo van Leeuwen – Dean of the Faculty of Arts and Social Sciences, the FASS MediaLab staff – especially James Hurley, William Lawlor and Justin Harvey, the Centre for Contemporary Design Practice, Sense-Aware Lab, student volunteers from the Bachelor of Sound and Music Design degree, FASS doctoral students, Jos Mulder - lecturer in Live Sound, and Kawai pianos.



Diffuse Season 1



Polymedia Pixel Presentation by Matthias Hank Haeusler at Soirée

At Wednesday's Sense-Aware research soirée Matthias presented the Polymedia Pixel research project in the context of anamorphic, multidimensional, architectural and autonomous pixel objects that can sense, display and compute independently for the purposes of interaction and visualisation/sonification integrated into the structure of architectural spaces.

Polymedia Pixel Video Overview

By Kirsty Beilharz, Matthias Haeusler, Tom Barker & Samuel Ferguson. Responsive, sensing, autonomous architectural modules for media façades and situated media. Each 'pixel' can sense sound, proximity, light and motion; communicates with other pixels; and displays sound and light in a form of massed ambient visualisation and sonification. Being capable of architectural integration, its intention is to respond to eco-data and information about the inhabitants of a space or building and its energy and climatic attributes. This research aims to embed computing in architecture.


Multimodal data interaction with multi-touch table surface

Interactive sonification using a multi-touch multimodal display. The objective of this research is to develop a visual and sonic interface for interactive data enquiry on a multi-touch table surface. The table facilitates collaborative enquiry, as well as comparative and sequential analysis tasks. It is currently oriented towards time-series data interrogation. This video was made by Sam Ferguson. The concept was developed by Prof Kirsty Beilharz, Dr Sam Ferguson and Claudia Calo of the UTS DAB Sense-Aware Lab.


Smartwear & Wearable Technologies ThinkTank

Advanced textiles, smart materials, sonification + interaction, health monitoring + user experience. My presentation concerns sonification and interaction in wearable technologies and smart clothing, focused on technology integration and user-centred design. Slide summary PDF



Creative Practice Doctoral Research

What: Creative Practices & Cultural Economies research strength presentation and panel discussion for HDR students & staff
When: Wednesday 27 April 2010 @ 13:00-14:30
Where: UTS CB03.02.10
Who: Prof. Theo van Leeuwen, Dr. Debra Adelaide and Prof. Kirsty Beilharz

Despite well-established Creative Practice higher degrees in programs throughout Australia, there are still many different interpretations of the role of the exegesis and its relationship to the creative work. This panel and open forum aims to canvass a range of views across different types of creative output, to seek greater clarity for staff and students alike. Do we really even need an exegesis? Is it complementary to the creative work, or supplementary?

The relationship between research community, thesis, creative work and research enquiry in contributing new knowledge.




Download slide summary


Situated Media Installation Exhibition

When: Thursday, 11 November 2010
Where: UTS Building 6 (DAB) & Building 3 (Bon Marche Studio) 702-730 & 755 Harris Street Ultimo
Time: 11:00 - 13:15 & 14:00 - 16:30



Polymedia Pixel Exhibition Update

In the Media Architecture Biennale, Vienna (photos by Matthias Hank Haeusler):




The finished Polymedia Pixel prototype:




Interactive Polymedia Pixel @ Media Architecture Biennale Vienna 2010

Designed by Kirsty Beilharz, M. Hank Haeusler, Sam Ferguson and Tom Barker.

Fabrication assisted by Rom Zhirnov, and electronics with participation by students of the Situated Media Installation Studio (UTS B. Sound and Music Design, B. Photography and Situated Media).

Theme 2010: Urban Media Territories; the re-stratification of urban public spaces through digital media.

This research is an investigation into Urban Digital Media, a field that inhabits the intersection between architecture, information and culture in the arena of technology and building. It asks how contemporary requirements of public space in our everyday life, such as adaptability, new modes of communication and transformative environments that offer flexibility for future needs and uses, can be addressed by a new form of public display through the use of an interactive polymedia pixel and situated media device protocol.

The weakness of many current media façades and building-scale interactive installation environments lies in the dearth of quality creative content and in their unresponsiveness: they ignore potential human factors, the richness of the locative situation and contextual interaction (Sauter, 2004). Media façades have matured from 2D visual displays to 3D voxel arrays depicting static and moving images with a spatial depth dimension (Haeusler, 2009). As the consequent next step in this development, this research investigates a display that reacts empathetically to human interaction and is responsive to its urban context; integrates multiple modalities; saves energy intelligently; and enables community engagement in urban digital media content, i.e. responsive and interactive sensing capability.

Seven attributes of the Polymedia Pixel that address the above-mentioned inadequacies of public displays:
(1) contextual responsiveness - to physical, environmental factors;
(2) interactive responsiveness - to human intervention and activity in the proximity;
(3) intelligence - smart controls that can adapt physical behaviour to suit conditions;
(4) multimodality - ability to communicate through non-visual channels, such as sound;
(5) sensing and communication - in order to sense/detect conditions of environment, human interaction and to be accessed by networked mobile devices;
(6) energy efficiency - optimising energy expenditure and capturing self-powering energy sources and
(7) open protocol for networked device controllers to receive communication from a wide variety of devices, enabling public access and interactive content, localized to physical context.

The following elements comprise the anatomy of a Polymedia Pixel:
(1) LED for producing the image;
(2) Speaker for transmitting sound;
(3) PV cell for energy production;
(4) Photo-sensor to react to its environment;
(5) Microprocessor to process data and information;
(6) Microphone to record sound;
(7) WiFi to transmit data wirelessly;
(8) Bluetooth for communicating between pixels [and external device interaction].
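A single pixel's sense-process-communicate-display cycle, combining the attributes and anatomy listed above, might be sketched as follows. The sensor values, weighting and `inbox` message-passing are placeholder assumptions standing in for the real hardware and device protocol.

```python
class PixelNode:
    """Minimal sketch of one Polymedia Pixel's control loop (hypothetical)."""

    def __init__(self, neighbours):
        self.neighbours = neighbours   # nodes reachable over Bluetooth/WiFi
        self.led = 0.0

    def read_sensors(self):
        # Stub: would read the photo-sensor, microphone, proximity, etc.
        return {"light": 0.3, "sound": 0.6, "proximity": 0.9}

    def step(self):
        """One cycle: sense, derive an activity level, display, broadcast."""
        s = self.read_sensors()
        # Nearby, noisy activity raises the pixel's response (invented weighting).
        activity = 0.5 * s["sound"] + 0.5 * s["proximity"]
        self.set_led(brightness=activity)
        self.broadcast({"activity": activity})
        return activity

    def set_led(self, brightness):
        self.led = brightness          # would drive the LED via the microprocessor

    def broadcast(self, msg):
        for n in self.neighbours:      # placeholder for the networked protocol
            n.inbox = msg
```

Because each node runs the same loop and only exchanges small messages with neighbours, massed behaviour (attribute 5, communication) emerges without any central controller.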

The prototype design was first reported in the following paper in Turkey:
'Interactive Polymedia Pixel and Protocol for Collaborative Creative Content Generation on Urban Digital Media Displays'
by M. Hank Haeusler, Kirsty Beilharz, Tom Barker at the International Conference on New Media and Interactivity 28-30 April 2010, Istanbul.

The Fabrication Process

Icosidodecahedron-form 3D-printed model by Matthias Hank Haeusler. A precision-centred 3D print was made in order to generate the mold.

Mold for casting silicone

Rom removing the delicate silicone structures while they are still somewhat malleable, not yet hardened. A further night sees them harden substantially, and the material reaches full strength after about a week.

The final translucent structure, which will be trimmed. Acrylic pentagon 'windows' clip into the openings after the rigid plastic equator and electronics have been installed inside.

Hank's book also at the Vienna exhibition:
This book explores the terminology, recent history and developments in media façades, showing famous examples from Times Square in New York to the Centre Pompidou in Paris.

Haeusler, M. Hank Media facades – History, Technology, Content, avedition, Ludwigsburg 2009.

Sauter, Joachim. "Das vierte Format: Die Fassade als mediale Haut der Architektur". In Fleischmann, Monika and Reinhard, Ulrike (Eds.), Digitale Transformationen: Medienkunst als Schnittstelle von Kunst, Wissenschaft, Wirtschaft und Gesellschaft. Heidelberg: whois verlags- und vertriebsgesellschaft (2004).


Emotiv Epoc for transmitting brain signals to Max/MSP with OSC

We are using the Emotiv EPOC headset's live EEG data feed as the source for sonification of brain activity, trying to distil useful information from the most active and dynamic brain regions and to overcome the intrinsic orthogonality of the feeds.
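Getting EEG-derived values into Max/MSP over OSC amounts to encoding small UDP packets in the OSC wire format (padded address string, type-tag string, big-endian float32 arguments). A stdlib-only sketch; the address pattern `/eeg/af3/alpha` and the band-power value are hypothetical examples, not the Emotiv SDK's actual output.

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments, as consumed by
    Max/MSP via UDP. Strings are NUL-terminated and padded to 4-byte boundaries."""
    def pad(b):
        # OSC pads with 1-4 NUL bytes so the string always ends on a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))   # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)                          # big-endian float32
    return msg

# e.g. alpha-band power for one (hypothetical) channel:
packet = osc_message("/eeg/af3/alpha", 0.42)
# then, to reach Max/MSP listening on port 7400:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 7400))
```

In practice a ready-made library (or the MindYourOwnOSC route mentioned below) does this encoding for you; the sketch just shows what travels on the wire.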


Here is a picture of the feed into Pure Data via 'quantum.Trip' and MindYourOwnOSC, distributed as open-source software. You can see the live tracking in action. (Photos from the EPOC website.)


Charisma Ensemble 'Diamond Quills' Performances

Recording from the Sydney Conservatorium of Music, performed by Ros Dunlop, Julia Ryder and David Miller of the Charisma Ensemble. They also performed Diamond Quills at the NIME (New Interfaces for Musical Expression) conference in the Eugene Goossens Hall at the ABC Centre, Sydney, on 18 June. Duration: approximately 12 minutes.

This piece derives its title and the movement titles (Diamond Quills, Fidget Wheels, and Black Thumb-balls) from Kenneth Slessor's (1901-1971) poem, Five Bells (1939). This is the second time that I have visited this poem; the previous chamber sextet, Between Five Bells, written in 1996, was performed by the New Music Ensemble of the Sweelinck Conservatory, conducted by Harry Sparnaay, at the Beurs van Berlage during the Gaudeamus Festival in Amsterdam.

The entire poem occurs in the time-lapse memory "between five bells", that is the fragments of memory, reminiscence, nostalgia and atmospheric memento mori triggered by the sound of five nautical ship's bells tolling out the time. This composition is a study of time, death and the harbour (Sydney Harbour).

The narrator remembers the bobbing buoys and reflected lights coruscating on the nocturnal harbour, the sounds and images evoked by Sydney Harbour. He also describes the "diamond quills and mackerel-backs" of the glittering dancing waves. This image evokes the lively, fluid, ephemeral, translucent, brilliant and oblivious (to death) continuum of nature and the harbour, the constantly oscillating effervescent waves that is encapsulated in the elegant, filigree, flowing first movement.

I looked out my window in the dark
At waves with diamond quills and combs of light
That arched their mackerel-backs and smacked the sand

The second movement conveys the rhythmic, meticulous, almost paranoid, precise yet idiosyncratic metronome of time that the drowned, lackadaisical subject of the poem refused to be entrapped and constrained by, juxtaposing the ethereal quality of memory, a warped time that transcends clocks, momentum, humdrum and decorum. The dancing piano counterpointing the mechanistic clockwork motifs and occasional lurching melodies of the bass clarinet seek to elude temporal shackles. One perceives the character's sardonic, skeptical condescension towards conventions of temporal organisation.

Time that is moved by little fidget wheels
Is not my time, the flood that does not flow.

The third movement is a dark, sinister, intense and evocative ferocious fervour that is juxtaposed with wild and other-worldly cries and the distress of the drowning sinking man imagined by the poet, switching over from the disarray and panic of the human experience into the "longer dream" and timeless, otherworldly fluidity of the sea as the narrator relives his drowning below its dark surface and permanent disconnection. It evokes a violent, turbulent struggle, floating off into the sublime eventually, concluding with the tolling of the five bells, closing the glimpse of the memory.

The tide goes over, the waves ride over you
And let their shadows down like shining hair,
But they are Water; and the sea-pinks bend
Like lilies in your teeth, but they are Weed;
And you are only part of an Idea.
I felt the wet push its black thumb-balls in,
The night you died, I felt your eardrums crack,
And the short agony, the longer dream,
The Nothing that was neither long nor short;
But I was bound, and could not go that way,
But I was blind, and could not feel your hand.
If I could find an answer, could only find
Your meaning, or could say why you were here
Who now are gone, what purpose gave you breath
Or seized it back, might I not hear your voice?

The 'hyper-ensemble' interaction is a live Neural Oscillator (NOSC) Network generating musical perpetuation, and Granular Synthesis (GS) responding to the bass clarinet sound. Manual intervention mediates the weighting, bias and speed of the NOSC network, as well as the grain length and distribution of the GS and its transposition relative to the source. The Neural Oscillator Network is a biologically inspired system of nodes emulating the synapses of the human brain. Neural Oscillator Networks have been employed by Alice Eldridge and others in music to achieve unpredictable systems of perpetuation. Input signals build up until a critical threshold is reached, at which point the node transmits a signal out to neighbouring neurons, creating a delayed distribution system. "The entrainment property means that networks of these modules can create material of chosen degree of density, where each part bears a global relation to the whole. This creates parallel streams of data which retain their individual identity over time, but move in relation to each other." (Eldridge, 2008). We also used IRCAM pitch tracking, Jehan's noisiness, and the FTM & IRCAM IMTR Real-Time Musical Interactions libraries for granular synthesis in Max/MSP (Cycling '74). A polygon drawing system with a network-like appearance generates video boids superimposed on the harbour visual projection, connecting the network and harbour concepts.
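The threshold-and-fire behaviour described above can be sketched as a generic leaky integrate-and-fire network: each node accumulates input until a threshold, then fires to its neighbours after a delay. This is an illustrative reconstruction, not Eldridge's actual NOSC implementation, and all constants are invented.

```python
class Node:
    """One oscillator node: leaky accumulator with delayed, thresholded firing."""

    def __init__(self, threshold=1.0, leak=0.95):
        self.level = 0.0
        self.threshold, self.leak = threshold, leak
        self.neighbours = []      # downstream nodes
        self.pending = []         # queued inputs as [ticks_remaining, amount]

    def receive(self, amount, delay=2):
        self.pending.append([delay, amount])

    def tick(self):
        """Advance one network step; return True if this node fired."""
        for p in self.pending:
            p[0] -= 1
        ready = [p for p in self.pending if p[0] <= 0]
        self.pending = [p for p in self.pending if p[0] > 0]
        # Leaky integration: old charge decays, arriving input accumulates.
        self.level = self.level * self.leak + sum(a for _, a in ready)
        if self.level >= self.threshold:
            self.level = 0.0
            for n in self.neighbours:     # fire: distribute to neighbours, delayed
                n.receive(0.6)
            return True
        return False
```

The delay queue is what produces the "delayed distribution system" of the quotation: a firing does not reach the neighbours immediately, so activity propagates in waves rather than all at once, and the leak keeps isolated inputs from accumulating forever.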

I used a Novation MIDI controller for manual intervention in the GS and NOSC during performance, manipulating transposition, onset variation, grain length and grain density in the GS, and bias, network tempo, pitch deviation and perpetuation/stasis in the NOSC.
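The kind of controller mapping described above can be sketched as a simple table scaling each knob's 0–127 MIDI range into a parameter range. The CC numbers and parameter ranges below are hypothetical placeholders, not the actual patch's assignments.

```python
# Hypothetical MIDI CC mapping for the GS/NOSC parameters named above.
# (cc number) -> (parameter name, minimum, maximum); all values assumed.
CC_MAP = {
    20: ("gs_transposition", -12.0, 12.0),   # semitones relative to source
    21: ("gs_onset_variation", 0.0, 1.0),
    22: ("gs_grain_length_ms", 5.0, 500.0),
    23: ("gs_grain_density", 1.0, 100.0),    # grains per second
    24: ("nosc_bias", 0.0, 1.0),
    25: ("nosc_network_tempo_bpm", 30.0, 240.0),
    26: ("nosc_pitch_deviation", 0.0, 1.0),
    27: ("nosc_perpetuation", 0.0, 1.0),     # 0 = stasis, 1 = full perpetuation
}

def scale_cc(cc, value):
    """Map a 0-127 MIDI CC value linearly into the parameter's range."""
    name, lo, hi = CC_MAP[cc]
    return name, lo + (value / 127.0) * (hi - lo)

name, val = scale_cc(20, 127)   # knob fully clockwise -> +12 semitones
```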

Excerpts from Kenneth Slessor, Selected Poems (1994 edition), published by Harper Collins, ISBN 9780207182983, and also available online (retrieved 22/03/10). The Selected Poems includes various other poems relating to death, such as Slessor's famous 'Beach Burial', and to time, e.g. 'Out of Time'.

Alice Eldridge, 'Collaborating with the Behaving Machine: Simple Adaptive Dynamical Systems for Generative and Interactive Music', thesis submitted for the degree of D.Phil., University of Sussex, February 2008.


NIME 2010 Sydney Overview

It was our great pleasure to host the NIME++2010 conference at UTS this year, in Sydney from 15-18 June. The conference consisted of fully peer-reviewed paper tracks, poster and demo sessions, installations, concert performances, club night performances, and keynote talks by Stelarc and Nic Collins (hardware hacking). The full set of photos can be seen here:

If there is an haute cuisine of hardware hacking, Nic would be its three-star Michelin chef. Dr Nic Collins is a composer, performer and instrument builder; Professor of Music in the Department of Sound at the School of the Art Institute of Chicago; editor-in-chief of the Leonardo Music Journal; former artistic director of STEIM in Amsterdam; recipient of the DAAD scholarship in Berlin; and the author of Handmade Electronic Music: The Art of Hardware Hacking (Routledge, now in its 2nd edition).

Interfaces designed to be expressive need to be close to the human skin. Stelarc's work is about getting under the skin (usually his own). In fact he is presently surgically constructing and stem-cell growing an ear on his arm. Stelarc is the pioneer of cyborg art. He is a performer, Chair in Performance Art at the School of Arts, Brunel University, West London, Senior Research Fellow and Visiting Artist at the MARCS Lab at the University of Western Sydney (UWS), and Honorary Professor of Art and Robotics at Carnegie Mellon University, Pittsburgh. He also has an Honorary Doctorate from Monash University in Melbourne.
Over the years Stelarc has explored and extended his body far beyond the skin to research the notion of the cyborg, in which the interface becomes part of the human body. In fact, for him the body has become obsolete. But rather than a cold, hard, technical cyborg, Stelarc's research through artistic expression shows a deep passion, warmth, (in)sanity, and humour.