
INTERACTIVE POLYMEDIA PIXEL

@ Media Architecture Biennale Vienna 2010
Designed by Kirsty Beilharz, M. Hank Haeusler, Sam Ferguson and Tom Barker.
Fabrication assisted by Rom Zhirnov; electronics developed with the participation of students of the Situated Media Installation Studio (UTS B. Sound and Music Design, B. Photography and Situated Media).

This research is an investigation into Urban Digital Media, a field that inhabits the intersection between architecture, information and culture in the arena of technology and building. It asks how contemporary requirements of public space in our everyday life, such as adaptability, new modes of communication and transformative environments that offer flexibility for future needs and uses, can be addressed by a new form of public display through the use of an interactive polymedia pixel and situated media device protocol.

The prototype design was first reported in the following paper, presented in Turkey:
'Interactive Polymedia Pixel and Protocol for Collaborative Creative Content Generation on Urban Digital Media Displays' by M. Hank Haeusler, Kirsty Beilharz and Tom Barker, International Conference on New Media and Interactivity, 28-30 April 2010, Istanbul. More info ... Video ...

Polymedia Pixel Vienna

The weakness of many current media façades and building-scale interactive installation environments lies in the dearth of quality creative content and in their unresponsiveness, ignoring potential human factors, the richness of the locative situation and contextual interaction (Sauter, 2004). Media façades have matured from 2D visual displays to 3D voxel arrays that depict static and moving images with a spatial depth dimension (Haeusler, 2009). As the next step in this development, this research investigates a display that reacts empathetically to human interaction and is responsive to its urban context; that integrates multiple modalities; that saves energy through smart control; and that enables community engagement with urban digital media content, i.e. responsive and interactive sensing capability.

Seven attributes of the Polymedia Pixel address the above-mentioned inadequacies of public displays (a hypothetical protocol message is sketched after the list):
(1) contextual responsiveness - to physical, environmental factors;
(2) interactive responsiveness - to human intervention and activity in the proximity;
(3) intelligence - smart controls that can adapt physical behaviour to suit conditions;
(4) multimodality - ability to communicate through non-visual channels, such as sound;
(5) sensing and communication - in order to sense/detect conditions of the environment and human interaction, and to be accessed by networked mobile devices;
(6) energy efficiency - optimising energy expenditure and capturing self-powering energy sources; and
(7) open protocol for networked device controllers to receive communication from a wide variety of devices, enabling public access and interactive content, localized to physical context.
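
To make attribute (7) concrete, the sketch below shows one hypothetical way a networked mobile device might address an individual pixel over an open, datagram-based protocol. The message fields, host address and port are illustrative assumptions for this page, not the published situated media device protocol.

import json
import socket

# Hypothetical message a phone or controller might send to one polymedia pixel.
# Field names, host and port are illustrative assumptions only.
message = {
    "pixel_id": 42,                              # which pixel in the voxel array
    "colour": [255, 120, 0],                     # RGB visual channel
    "sound": {"pitch": 60, "amplitude": 0.5},    # non-visual (auditory) channel
    "source": "mobile-client",                   # who is contributing content
}

# Open display protocols are often datagram-based; UDP keeps per-pixel
# updates lightweight and connectionless.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(message).encode("utf-8"), ("192.0.2.10", 9000))
sock.close()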

 

Multimodal data interaction with multi-touch table surface

Interactive sonification using a multi-touch multimodal display. The objective of this research is to develop a visual and sonic interface for interactive data enquiry on a multi-touch table surface. The table facilitates collaborative enquiry, as well as comparative and sequential analysis tasks. It is currently oriented towards time-series data interrogation. This video was made by Sam Ferguson. The concept was developed by Prof Kirsty Beilharz, Dr Sam Ferguson and Claudia Calo of the UTS DAB Sense-Aware Lab.
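
As a rough illustration of the kind of time-series-to-sound mapping the table supports, the sketch below turns a touched point along the time axis into a pitch and loudness. The scaling ranges and function names are assumptions for illustration, not the Sense-Aware Lab implementation.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly rescale value from [lo, hi] to [out_lo, out_hi]."""
    t = (value - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

def sonify_point(x_norm, series):
    """Map a normalised touch position along the time axis to a sound event.

    x_norm: horizontal touch position in [0, 1] across the table surface.
    series: the time-series values being interrogated.
    """
    index = min(int(x_norm * len(series)), len(series) - 1)
    value = series[index]
    pitch = scale(value, min(series), max(series), 48, 84)   # MIDI note range
    loudness = scale(value, min(series), max(series), 0.2, 1.0)
    return {"midi_pitch": round(pitch), "amplitude": loudness}

# Example: touching halfway along a short series of daily readings.
print(sonify_point(0.5, [3.1, 4.7, 2.2, 5.9, 6.4, 4.0]))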

 

Charisma Ensemble 'Diamond Quills' Performance

Recording from the Sydney Conservatorium of Music, performed by Ros Dunlop, Julia Ryder and David Miller of Charisma Ensemble. They also performed Diamond Quills at the NIME (New Interfaces for Musical Expression) conference in the Eugene Goossens Hall at the ABC Centre, Sydney, on 18 June. Duration: approximately 12 minutes.

Diamond Quills Max/MSP patch

 

New Interfaces for Musical Expression (NIME++ 2010) Sydney

It was our great pleasure to host the NIME++ 2010 international conference at UTS in Sydney from 15-18 June this year: http://nime2010.org/ The conference consisted of fully peer-reviewed paper tracks, poster and demo sessions, installations, concert performances, club night performances and keynote talks by Stelarc and Nic Collins (Hardware Hacking). The full set of photos can be seen here: http://www.flickr.com/photos/nime2010/

The conference website and edited Proceedings are here: http://nime2010.org/

 

Human DNA genetic data sonifications

In the ICAD (International Conference on Auditory Display) challenge 2009, Sam Ferguson (UTS Sense-Aware Lab) won with his DNA sonification of the human genome, using note length and harmony to represent intron and exon relationships in the gene.

 

 

'Sonic Tai Chi' INTERACTIVE INSTALLATION (BETASPACE)

Uses computer vision (video tracking in Cycling '74 Max/MSP) to capture movement data, producing the visualisation and sonification. Generative cellular automata rules (the Game of Life) propagate particles and sonic grains in response to users' lateral motion. Body motion in one direction propagates and promotes liveliness and generativity, while motion in the opposite direction restricts and eventually stifles activity. The user's interaction also affects the propagation and panning of the audio synthesis to elucidate the spatial relationship between gesture and display.
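
A minimal sketch of the generative core, assuming a standard Game of Life update whose birth rule is biased by the user's lateral motion. The bias parameter and grid handling are illustrative assumptions, not the installation's Max/MSP patch.

import random

def step(grid, motion_bias=0.0):
    """One Game of Life generation on a 2D grid of 0/1 cells.

    motion_bias > 0 (motion in the 'lively' direction) occasionally spawns
    extra particles; motion_bias < 0 suppresses births, stifling activity.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            if grid[r][c] == 1:
                new[r][c] = 1 if neighbours in (2, 3) else 0
            else:
                born = neighbours == 3
                if motion_bias > 0 and random.random() < motion_bias:
                    born = True          # motion promotes generativity
                if motion_bias < 0 and random.random() < -motion_bias:
                    born = False         # opposite motion restricts it
                new[r][c] = 1 if born else 0
    return new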

SonicTaiChi

Sonic Tai Chi by Joanne Jakovich and Kirsty Beilharz (2005-2006), BetaSpace installation at the Powerhouse Museum, Sydney

 

Hyper-Shaku AUGMENTED INSTRUMENT

A gesture-triggered and sound-sensing hyper-instrument: audio and visual augmentation of live performance in which head motion, noisiness, loudness, pitch tracking and velocity are used to scale parameters in granular synthesis, Neural Oscillator Network (NOSC) and Evolutionary Looming generative processes.

HyperShaku

Gestural modification of the generative processes: sound, computer vision and motion sensor input detect gestural effects, which are used to send messages and input values to generator modules. Microphone acoustic input is used to control Looming (with loudness) and the granular synthesis (with loudness and noisiness measures). These gesture attributes also send messages to the Neural Oscillator Network and visualisation. The Max/MSP Neural Oscillator Network patch is used as a stabilising influence affected by large camera-tracked gestures. It is modelled on individual neurons: dendrites receive impulses and, when the critical threshold is reached in the cell body (soma), output is sent to other nodes in the Neural Network. The 'impulses' in the musical system derive from the granular synthesis pitch output. This example uses a Neural Oscillator Network model with four synapse nodes to disperse sounds, audibly dissipating but rhythmic and energetic. Irregularity is controlled by head motion tracked through the computer vision. Transposition and pitch class arrive via the granular synthesis from pitch analysis of the acoustic shakuhachi, with Looming intensity as a multiplier (transposition upward with greater intensity of gesture).
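
The neuron model described above is essentially an integrate-and-fire node: impulses accumulate on the 'dendrites' and, when the soma's threshold is crossed, the node fires to its connected nodes. The sketch below is a simplified illustration of that behaviour, not the Max/MSP NOSC patch itself; thresholds, connectivity and impulse sizes are assumed values.

class OscillatorNode:
    """Simplified integrate-and-fire node: accumulate input, fire on threshold."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.soma = 0.0          # accumulated 'charge' in the cell body
        self.targets = []        # nodes that receive this node's output

    def connect(self, other):
        self.targets.append(other)

    def receive(self, impulse):
        """Dendrite input, e.g. derived from granular-synthesis pitch output."""
        self.soma += impulse
        if self.soma >= self.threshold:
            self.soma = 0.0
            self.fire()

    def fire(self):
        # In the musical system this would trigger a sound event; here we
        # simply propagate a smaller impulse to connected nodes.
        for node in self.targets:
            node.receive(0.5)

# Four 'synapse' nodes dispersing impulses, as in the example described above.
nodes = [OscillatorNode(threshold=1.0 + 0.2 * i) for i in range(4)]
for a, b in zip(nodes, nodes[1:]):
    a.connect(b)
nodes[0].receive(1.2)   # an impulse from the pitch analysis sets the chain going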

 

Foldable, flexible display

(With Andrew Vande Moere) employs multi-modal interpretations of the folding metaphor, embodying wearable visualisation + sonification as self-expression. In our research, we are evaluating the effectiveness of abstract display and its social motivation. Muscle wire is used to make very subtle movements. Motion, IR and microphone (audio) sensors and micro-processor calculations gather and impart social data about the wearer and her/his context. The project aims to explore the beauty of everyday materiality and folding as a metaphor capable of embedding complex meanings in subtle ways, electronically altering the externally perceived self-expression of its wearer through parallel, real-time sensor readings. This work queries the interaction between auditory and visual display in a bi-modal scenario in which wearable visualisation relates to 'wearable computing', 'smartwear' and electronic fashion technology (e-fashion), instead of focusing on sensor and signal analysis, real-time context recognition or hardware development + miniaturisation. Wearable visualisation is specifically concerned with the visual and auditory communication of information to the wearer, or to any people present in the wearer's physical vicinity. Hence, it has a social computing element of interaction between devices (infrared communication between multiple devices) and provides a representation of social awareness of proximity/sociability.

Foldable, flexible display with Andrew Vande Moere; v.1 (above) made with Monika Hoinkis, v.2 (below) with Adrian Lombard (research assistants)

Folding

 

 

Mechanisms to Enable Musical Uses of Complex Sound Sources

Professor Kirsty Beilharz (mentor) with Dr Samuel Ferguson (mentee, Early Career Researcher), Faculty of Arts and Social Sciences Research Development Grant 2009. More info ...

GTR

Ukulele

Some interesting sound sources are not predictable enough to be manipulated in the manner necessary for most typical musical performances. This project uses one example of an unpredictable but interesting sound source - the feedback tones produced when a guitar is placed in close proximity to an amplifier - to investigate whether electro-mechanical systems and acoustic analysis can provide a mechanism for controlling interaction with unpredictable or complex sound sources. A novel interface and electro-mechanical mechanism for interaction with complex musical instruments will be produced, to facilitate new musical and cultural outputs. The project uses Frontier Technologies to develop smart information use and promote an innovation culture and economy, delivering an innovation in new sound-creation methodology and the development of new musical instruments and interfaces: a cultural, creative and technological contribution. It relates directly to the New Interfaces for Musical Expression international conference that we will host in 2010 (co-chairs Beilharz & Bongers). The methodology and analysis developed in this project can be applied to broader musical and data-mining (web database) contexts and exemplifies the University's strategic goal of "research that is at the cutting edge of creativity and technology".

 

AeSoniPhone

Smart Mobile Innovation Community of Practice and Learning

Collaboration with Professor Mary-Anne Williams (Innovation and Enterprise Research Laboratory) University of Technology, Sydney; Faculty of I.T., Faculty of Business, Apple and IBM. UTS Learning and Teaching Performance Fund Grant 2009. My contribution, with Sam Ferguson, is developing touch-applications for iPhone using interactive sonification of user-generated data and looking at issues of user-centred information representation.

 

'Wireless Gamelan' Cyborg gestural interaction

Using RFID tags to control a quadraphonic music performance environment - developed with Sam Ferguson and Jeremiah Nugroho (research assistants).

WirelessGamelan

 

'Sonic Kung Fu' INTERACTIVE installation

Gallery soundspace during the Sydney Esquisse art festival.

SonicKungFu

Sonic Kung Fu by Joanne Jakovich and Kirsty Beilharz (2005) uses colour-tracking computer vision (via webcam) and Max/MSP + Jitter with Pelletier's cv.jit objects to recognise gestures of a particular colour. The physical space in the camera view is divided into vertical and horizontal regions that trigger different musical responses, so that the air or space can be 'played' like a musical interface or virtual instrument.
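
A minimal sketch of the region-triggering idea, assuming the camera frame is divided into a grid and each cell maps to a sound. The grid size and note assignments are illustrative, not the installation's Max/MSP + cv.jit patch.

def region_for(x, y, frame_w, frame_h, cols=4, rows=3):
    """Return which grid cell a detected colour blob falls in."""
    col = min(int(x / frame_w * cols), cols - 1)
    row = min(int(y / frame_h * rows), rows - 1)
    return row, col

def trigger(x, y, frame_w=640, frame_h=480):
    """Map the tracked blob position to a musical response per region."""
    row, col = region_for(x, y, frame_w, frame_h)
    # Horizontal position selects pitch, vertical position selects timbre layer.
    midi_pitch = 48 + col * 5
    layer = ("pad", "pluck", "percussion")[row]
    return {"pitch": midi_pitch, "layer": layer}

print(trigger(500, 100))   # a gesture high and to the right of the camera view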

 

 

SensorCow

SensorCow Sonification

SensorCow is a sensor-controlled sonification of motion using the La Kitchen Kroonde Gamma wireless radio-frequency transmitter, gyroscopic, accelerometer and binary motion sensors, and Max/MSP. The contiguous data flow provided by the calf's walking, head-shakes, eating, etc. is mapped to separate channels for each sensor and to distinctive timbres, to differentiate and isolate the effect of particular gestures. It creates an auditory profile of the normally visually observed actions of the animal.
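
As a sketch of the mapping described above - each sensor stream gets its own channel and timbre so individual gestures can be heard in isolation - with assumed sensor names and channel numbers:

# Assumed routing table: each wireless sensor stream gets its own channel
# and a distinctive timbre so individual gestures stand out in the mix.
SENSOR_ROUTING = {
    "gyro_head":  {"channel": 1, "timbre": "bowed_metal"},
    "accel_leg":  {"channel": 2, "timbre": "wood_block"},
    "binary_jaw": {"channel": 3, "timbre": "click"},      # eating on/off
}

def sonify(sensor_name, value):
    """Turn one sensor reading into a sound event on its dedicated channel."""
    route = SENSOR_ROUTING[sensor_name]
    return {
        "channel": route["channel"],
        "timbre": route["timbre"],
        "amplitude": max(0.0, min(1.0, abs(value))),  # clamp normalised value
    }

print(sonify("gyro_head", 0.7))   # a head-shake gesture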

 

'Emergent Energies' Sensate Spatial Installation

By Amanda Scott, Kirsty Beilharz and Andrew Vande Moere.

EmergentEnergies

Emergent Energies is a socially aware, responsive Lindenmayer tree visualisation and sonification that displays an embedded history over time, revealing the number of people, their proximity, location, pace/velocity of movement, and the intensity of interaction in a social space. Motion information is captured using pressure mats under the carpet in a sensate lab. Colour is mapped to auditory timbre, vertices to location, and line thickness to duration. The mapping of gesture to visual and auditory display is considered here as a type of aesthetic sonification in which the contiguous data stream comes from the user's rate and scope of movement.
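
The Lindenmayer tree mentioned above grows by rewriting a string of drawing symbols each generation. A minimal sketch follows; the rewriting rule and the idea of scaling growth depth by interaction intensity are illustrative assumptions rather than the installation's actual grammar.

def grow(axiom="F", rule=None, generations=3):
    """Expand an L-system string; each 'F' is a branch segment to draw."""
    rule = rule or {"F": "F[+F]F[-F]F"}   # a classic branching rewrite rule
    s = axiom
    for _ in range(generations):
        s = "".join(rule.get(ch, ch) for ch in s)
    return s

def generations_for(activity):
    """More people / faster movement -> deeper tree (illustrative scaling)."""
    return 1 + min(int(activity * 4), 5)

# activity in [0, 1] derived from the pressure-mat data
tree_string = grow(generations=generations_for(0.6))
print(len(tree_string), "drawing symbols")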

 

'Fluid Velocity' Interactive Installation

A physical bicycle interface with visual projection and stereo audio, produced in the Tin Sheds Gallery, University of Sydney. It used the IRCAM WiSeBox (Flety, 2005) for WiFi transmission of data from captors located on the bicycle frame and handlebars, to transform the 3D 'creature' on screen and to variably filter and pan the electronic sound. The programming environment was Max/MSP and Jitter (Puckette & Zicarelli, 1990-2005). Pressure on the handlebars, rotation, braking and pedalling velocity affected the angularity, splay, tentacle thickness, number of limbs and waviness of the virtual multipod 3D creature in front of the rider. It used binary, piezo pressure, IR proximity, accelerometer and gyroscopic sensors.
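
As a rough illustration of the sensor-to-display mapping described above, the sketch below rescales a few captor readings into creature-shape parameters and an audio filter/pan setting. The parameter names and ranges are assumptions, not the installation's Max/MSP patch.

def scale(value, lo, hi, out_lo, out_hi):
    """Clamp and linearly rescale a raw sensor reading."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def map_bicycle_sensors(handlebar_pressure, pedal_velocity, lean_angle):
    """Turn captor readings into creature-shape and audio parameters."""
    return {
        # visual parameters of the multipod creature
        "tentacle_thickness": scale(handlebar_pressure, 0, 1023, 1.0, 8.0),
        "limb_count": round(scale(pedal_velocity, 0, 30, 4, 12)),
        "waviness": scale(pedal_velocity, 0, 30, 0.1, 1.0),
        # audio parameters
        "filter_cutoff_hz": scale(pedal_velocity, 0, 30, 200, 4000),
        "pan": scale(lean_angle, -45, 45, -1.0, 1.0),
    }

print(map_bicycle_sensors(handlebar_pressure=612, pedal_velocity=18, lean_angle=-10))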

FluidVelocity

 

'The Music Without' motion sonification

Using Kroonde wireless sensors and Max/MSP to transform the gestural activity of the violinist into real-time collaboration/accompaniment, giving voice to the external physicality of playing music. Most cooperative automated accompaniment programs seek to follow the pitch, rhythm or harmonic paradigms of the music, whereas this work highlights the exertion of tone production and the gestural attributes of performance, sometimes revealing surprising features.

MusicWithout

 

IceCaps

+90 degrees

A GPS-data-driven composition/sonification using generative structures derived from formal topology, determined by GPS values read into Max/MSP, like 'live ice' dynamic within a determinate form. The GPS points are plotted in real time in Max/MSP software. The CO2-neutral Greenland crossing by my cousin, Linda Beilharz, provided the GPS data for the project; completed crossings include Antarctica (-90 degrees) and Greenland, and the next data-generating polar crossing is the North Pole in February-March 2009. Polar ice-cap crossing data will be used for future projects.

GPS real-data aesthetic sonification

The purpose behind the geeky device is actually to feed GPS data to Max/MSP for sonification, to evaluate our new interactive aesthetic sonification toolkit. Many auditory display toolkits produce a sequence of scientific but distinctly unmusical-sounding results; we are trying to develop an on-the-fly method of reading, scaling and quantising data that can be aesthetically controlled in dimensions like timbre, modality and tempo, and that makes meaningful interpretations of contiguous, linear, time-based data such as GPS information. We are using local sounds mapped to significant waypoints to synthesise a geographically unique outcome.
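
A rough sketch of the 'read, scale, quantise' step described above: incoming GPS values are rescaled to a pitch range and snapped to a chosen mode so the result stays musical. The scale, ranges and mode are assumptions for illustration, not the AeSon toolkit's actual mapping.

MAJOR_PENTATONIC = [0, 2, 4, 7, 9]   # an assumed mode; any scale could be chosen

def quantise_to_scale(midi_note, scale=MAJOR_PENTATONIC):
    """Snap a fractional MIDI note to the nearest pitch in the scale."""
    octave, degree = divmod(round(midi_note), 12)
    nearest = min(scale, key=lambda d: abs(d - degree))
    return octave * 12 + nearest

def gps_to_pitch(latitude, lat_min=59.0, lat_max=84.0, low=48, high=84):
    """Rescale a latitude reading to a MIDI pitch, then quantise it."""
    t = (latitude - lat_min) / (lat_max - lat_min)
    return quantise_to_scale(low + t * (high - low))

# A few waypoints from a hypothetical Greenland track
for lat in (61.2, 65.8, 70.4, 76.9):
    print(lat, "->", gps_to_pitch(lat))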




 

Fabrication II: The Cry of Silk

Composed for the opening of Amanda Robins' exhibition, What Lies Beneath, at the Tin Sheds Gallery in Sydney, March-April 2006. Robins paints and draws highly detailed and realistic interiors of coats and garments, continuing the long tradition of visual arts interpretation through drapery. The art of drapery is concerned with layers, superimposition, veiling, concealing, and the embodiment and appreciation of different textures and folds. These metaphors transgress sensory boundaries and apply equally well to sound design. The idea of fabrics, textiles, textures and their embedded and embodied meanings motivates the integration of collected dynamic fabrics (recorded leather, feathers, fur, silk, corduroy, zippers, velcro, canvas) interpreted through various filters and processes of computer composition.

Fabrication II: The Cry of Silk is a synaesthetic and perceptual exploration of fabric sounds eliciting mental images that are normally seen and felt. The work aims to shift our consciousness to a different level of sensory perception of fabrics. Historically, the mythology of The Cry of Silk is also an interesting and inspiring one, in which the voice of the finest silk being torn is said to have resembled a cry, with its sensual connotations. We seldom think of 'giving voice' to materials and cloth, yet the textures and diversity of sounds available by brushing, rubbing, tearing, rustling and caressing fabrics provide a rich sound world capable of focusing our ear at a deep level of attention to minutiae and detail. This coincides with an introspective examination of micro and particle sound design, magnified by musical exploration and processing of sounds in close scrutiny. Subtleties and intricacies of tiny sounds are extended, augmented, transposed and amplified to illuminate the beauty and curiosity of their microstructure in a way that we may not ordinarily hear and appreciate.

Cry of Silk

 

Tasmanian Wilderness

A soundscape of sampled and processed natural sounds (no machine or urban sounds, 2005).

Paris Metro

A light-hearted quasi-retro urban soundscape and musical flanerie through a sequence of archetypal Parisian photographic images (captured in 2004-2005).

Audio CD: Thread ... Stitch ... Fray

Recorded by Sydney Mandolin Quartet (Jade 070, 1999)

Audio CD: Burning in the Heart of the Void

Performed by Nouvel Ensemble Moderne, Montreal, Quebec (Amberola 7141, 1998)

Bamboo

 

Bamboo Voice

A video-montage setting of a significantly abridged version of The White Face of the Geisha, a musical composition for solo shakuhachi and chamber ensemble performed by Iwamoto Yoshikazu and Ensemble Recherche Freiburg at the Hannover Biennale in 2000. The images are derived from kimono fabrics, calligraphy and typical design patterns, infused with natural water, urban and environmental conflicts and dualities, confrontations and tranquil meditative qualities.

 

Aleatory

Generative Composition

Beilharz, K. (2005). Integrating Computational Generative Processes in Acoustic Music Composition, in Edmonds, E., Brown, P. and Burraston, D. (Eds.) Generative Arts Practice '05: A Creativity and Cognition Symposium, University of Technology Sydney, pp. 5-20.
Beilharz, K. (2006). Interactive Generative Processes for Designing Sound Generative Music Composition: Interactive Generative Installation Design and Responsive Spatialisation (Poster) in Gero, J.S. (Ed.) Proceedings of the Design Computing and Cognition Conference, Kluwer, in press 17/02/06.
Beilharz, K. (2004). Designing Sounds and Spaces: Interdisciplinary Rules & Proportions in Generative Stochastic Music and Architecture, Journal of Design Research, 4 (3): http://jdr.tudelft.nl/

 

 

Urban Chimes

Urban Chimes (Tubular Body Bells) is an urban-scale virtual chime instrument that can be played by two or more users collaboratively. It is a site-specific interactive sound and visual installation designed to augment the ventilation pipes adjacent to IRCAM on Place Igor Stravinsky, a prominent, identifying architectural feature of Renzo Piano, Richard Rogers, and the Rubins' Centre Pompidou and IRCAM. Two internet cameras capture the gestures of two or more visitors to the plaza, which are used to control generative structures of the synthesized audio display. Each pipe represents a different timbre, with pitch mapped along a vertical axis. Sounds can be generated using hand or body motions along this axis. The system is implemented using Max/MSP for the synthesized sounds and Jitter with Pelletier's Computer Vision cv.jit objects for the gesture capture, video manipulation and projection.
Jakovich, J. & Beilharz, K. (2006). "Urban Chimes: Tubular Body Bells" Outdoor Audio-Visual Installation Proposal for IRCAM, Centre Pompidou in Proceedings of New Interfaces for Musical Expression (NIME), IRCAM.
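
A rough sketch of the mapping just described: each virtual pipe carries its own timbre, and the height of a tracked gesture along that pipe selects the pitch. The pipe count, pitch ranges and timbre names are illustrative assumptions, not the installation's Max/MSP implementation.

# Assumed set of virtual pipes, each with its own timbre and pitch range.
PIPES = [
    {"timbre": "metal_tube", "low": 48, "high": 72},
    {"timbre": "glass", "low": 55, "high": 79},
    {"timbre": "wooden_bar", "low": 40, "high": 64},
]

def chime_event(pipe_index, y_norm):
    """Map a gesture on one pipe to a note.

    y_norm: vertical position of the tracked gesture in [0, 1], bottom to top.
    """
    pipe = PIPES[pipe_index]
    pitch = pipe["low"] + y_norm * (pipe["high"] - pipe["low"])
    return {"timbre": pipe["timbre"], "midi_pitch": round(pitch)}

print(chime_event(1, 0.8))   # a hand raised high in front of the second pipe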

 

Sybil

'SYBIL' Information sonification TOOLKIT

Information sonification is the process of representing information using sound. This research is concerned with mapping data to appropriate auditory dimensions in real time, using spatialisation to enhance differentiation of information streams. Information sonification links with my other research: interactive sonification, sonification pedagogy, gestural interaction, and sonification of socio-spatial activity in sensate environments. This is part of the larger Key Centre of Design Computing responsive environment project in the Sentient Lab, integrating sonification, visualisation, and curious and intelligent agents. The environment transforms human interaction into an adaptive, responsive space that can understand and learn about its users. It combines design for ambient display with cutting-edge sensate, mobile and pervasive computing technologies.
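
A minimal sketch of the mapping idea behind the toolkit, assuming each data stream is assigned an auditory dimension and a spatial position. The parameter names and the equal-power panning used here are illustrative assumptions, not SYBIL's API.

import math

def pan_gains(azimuth):
    """Equal-power stereo panning; azimuth in [-1, 1] from left to right."""
    angle = (azimuth + 1) * math.pi / 4          # map to [0, pi/2]
    return math.cos(angle), math.sin(angle)      # (left gain, right gain)

def map_stream(value, lo, hi, azimuth):
    """Map one data stream's value to pitch and a spatial position.

    Separating streams in space helps listeners differentiate them.
    """
    t = (value - lo) / (hi - lo)
    left, right = pan_gains(azimuth)
    return {"midi_pitch": round(36 + t * 48), "gain_l": left, "gain_r": right}

# Two concurrent streams placed left and right of the listener.
print(map_stream(21.5, 0, 40, azimuth=-0.7))   # stream A, towards the left
print(map_stream(0.83, 0, 1, azimuth=0.7))     # stream B, towards the right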

 


Cuttings: Urban Islands

Book chapter 'Sonic Islands' on sound installation, site-specific audio installation and interactive media (University of Sydney Press, 2006).

 

Student work

Gesture piano

Using spatial gestures tracked by camera to activate solenoids that hit the strings, with the sound cross-processed live digitally. Microphones under the removed keyboard region collect real physical sounds from the frame and strings of the instrument.

GesturePiano

 

Camera-tracking FIDUCIAL markers

(Unique visual identifiers) on objects, used with the reacTIVision interface. Moving objects on the transparent table control a spatial mixer.

Fiducial

 

Breath-controlled music

BreathController

 

Colour & shape camera tracking

(Colour and spots) on a die, and motion detection by the Bluetooth Wii controller (tilt, yaw, rotation).

Wii

 

Rubik

Rubik's Studio

Sound controller project by Daniel Gallard and Piers Gilbertson. Uses colour tracking and reacTIVision fiducial marker tracking of individual markers on each surface of the cube to control groups of sounds and individual timbres. This project was developed in the Interactive Sound Studio 2008, University of Sydney.

 

Wii Taiko

Project by Dani Awad, Camilo Castillo and Deon Rowe. A Wii-mote-controlled taiko set using two Wii controllers, with rim shots, skin regions and different sounds achieved by button combinations; velocity relates to hardness and tone, and the drums are of different sizes. This project was developed in the Interactive Sound Studio 2008, University of Sydney.

 

Interactive Internet traffic visualisation

For tangible interaction with data, using reacTIVision fiducial marker tracking.