AeSonToolkit is a Max/MSP framework for aesthetic sonification. It includes objects for importing data, formatting and synchronising real-time data, transforming data, mapping data to sound and musical parameters, and synthesising sound. It requires the free FTM extensions to Max/MSP (ftm.ircam.fr).
It is developed under the ARC Discovery Project DP0773107 'Gestural Interaction with Aesthetic Sonification'.
We encourage you to download the toolkit and apply it to your data. Your feedback on the usability and aesthetic customisability of the sonification outcomes would be greatly appreciated. We are exploring the interplay of aesthetics and clarity in information sonification, especially as compared with ideas in information visualisation and infosthetics, to test its validity in the sonic domain.
GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007. Copyright © 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Please consider taking 30 minutes to contribute to our online research. We would appreciate your participation in our online survey, which asks which sound qualities listeners prefer so that we can design better sonifications. We need a statistically credible number of responses for our evaluation. Responses are anonymous and measure reactions to sound qualities (not musical or auditory skill or experience!).
Using Processing for the visual interaction, Max/MSP, Maxlink and FTM
The latest news is that Sam has ported the user interface to Processing, giving the interactive objects an enhanced appearance, while Max/MSP and FTM are still used for sound synthesis and data handling.
Object activation states. Objects can be customised in vertical scale (pitch range and distribution), horizontal spatial location, envelope characteristics, and pitches/modality for mapping; the pitch, loudness and length of data points are selected by the user with the sliders.
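The slider-driven mapping described above can be sketched as a linear rescaling of each data point into user-selected pitch, loudness and length ranges. This is an illustrative Python sketch, not the toolkit's actual Max/MSP implementation; all names (`scale`, `map_point`, the range parameters) are assumptions.

```python
# Hypothetical sketch of per-point parameter mapping: each data value is
# rescaled into the pitch, loudness, and length ranges set by the user's
# sliders. Names and ranges are illustrative, not AeSonToolkit API.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly rescale value from [in_min, in_max] to [out_min, out_max]."""
    if in_max == in_min:
        return out_min
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def map_point(value, in_range, pitch_range, loud_range, length_range):
    """Map one data value to sound parameters for a single sonified event."""
    lo, hi = in_range
    return {
        "pitch_midi": scale(value, lo, hi, *pitch_range),   # vertical scale
        "loudness_db": scale(value, lo, hi, *loud_range),
        "length_ms": scale(value, lo, hi, *length_range),
    }

# A mid-range value lands in the middle of every output range:
print(map_point(0.5, (0.0, 1.0), (48, 72), (-24, 0), (50, 500)))
# → {'pitch_midi': 60.0, 'loudness_db': -12.0, 'length_ms': 275.0}
```

Separating the generic `scale` helper from the per-parameter mapping mirrors the way each slider independently sets one output range for a stream.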
This diagram shows the spatial layout metaphors used to control sonification stream objects on the canvas (which can be a screen controlled by mouse, touchscreen, multi-touch screen, tangible interface, Wii-mote, etc.). Objects are positioned in auditory space by panning left and right, or given apparent height by adjusting spectral content. Level can be altered by moving the objects towards diagonally opposite corners of the canvas. All objects are manipulated using the same metaphor. Canvas position can be used to distinguish multivariate data streams.
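The canvas metaphor above (horizontal position to pan, vertical position to spectral "height", diagonal position to level) might be expressed as follows. This is a minimal sketch under assumed conventions: normalised coordinates in [0, 1], and level rising along the diagonal towards one corner; the actual toolkit's scaling may differ.

```python
import math

# Illustrative sketch (not toolkit code) of the canvas-position metaphor:
# x -> stereo pan, y -> spectral brightness ("height"), and position along
# the canvas diagonal -> level. Coordinates are assumed normalised to [0, 1].

def canvas_to_audio(x, y):
    """Map a normalised canvas position to pan, brightness, and level."""
    pan = 2.0 * x - 1.0                # left (-1) to right (+1)
    brightness = y                     # higher on canvas = brighter spectrum
    # Level grows towards one corner and fades towards the diagonally
    # opposite corner: 0 at (0, 0), 1 at (1, 1).
    level = math.hypot(x, y) / math.hypot(1.0, 1.0)
    return {"pan": pan, "brightness": brightness, "level": level}

# An object at the canvas centre: centred pan, mid brightness, mid level.
print(canvas_to_audio(0.5, 0.5))
```

Because every stream object uses the same mapping, dragging any object around the canvas changes its pan, brightness and level in the same predictable way, which is what lets position alone distinguish multivariate data streams.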
The Max/MSP signal chain shows how data is stored, organised, mapped into a synthesis process, and manipulated according to the data value. This chain lets the toolkit parse and map generic, clean datasets read from a text or spreadsheet file for real-time interaction.
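A rough Python analogue of that chain, under assumed structure (the real pipeline lives in Max/MSP patches and FTM objects): parse a clean delimited dataset into per-column streams, then normalise each stream so it is ready to be mapped onto synthesis parameters.

```python
import csv
import io

# Assumed-structure sketch of the parse-and-map stage: read a clean
# tab-delimited dataset, split it into one stream per column, and
# normalise each stream to [0, 1] for downstream parameter mapping.

def parse_dataset(text, delimiter="\t"):
    """Parse delimited numeric text into one tuple per column (data stream)."""
    rows = [[float(v) for v in row]
            for row in csv.reader(io.StringIO(text), delimiter=delimiter)
            if row]
    return list(zip(*rows))

def normalise(stream):
    """Rescale a stream to [0, 1]; constant streams map to all zeros."""
    lo, hi = min(stream), max(stream)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in stream]

data = "1.0\t10\n2.0\t30\n3.0\t20\n"
streams = [normalise(s) for s in parse_dataset(data)]
print(streams)
# first column → [0.0, 0.5, 1.0]; second column → [0.0, 1.0, 0.5]
```

Normalising each column independently means a mapping stage (pitch, loudness, length, etc.) can assume a uniform [0, 1] input regardless of the original units in the file.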