As mentioned before, there is an International Community for Auditory Display (ICAD), and people around the world have tailored applications to sonify different types of data, from earcons and auditory icons to the Sonification of seismic sensor data. Moreover, the definition of Sonification itself was later extended to
"the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation" (Hermann, 2008)
which is much more specific than the previous one, since it incorporates the concept of relationships and the mechanics or dynamics of the systems being sonified.
A certain vocabulary has been in wide use since this branch of science was born, e.g. mapping data onto sound parameters. As stated in (Hermann, 2008), this yields a sound creation directly related to the data under analysis: the data modifies the pitch, gain or some other straightforward parameter of the sound-generating instrument. This approach is called Parameter-Mapping Sonification (PMS).
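As a minimal illustration of PMS, the Python sketch below maps each data value onto the pitch of a short sine tone; the function name, the value-to-pitch ranges and the choice of sine tones are assumptions made for this example, not part of any engine cited in this chapter.

import numpy as np

def pms_sonify(data, sr=44100, note_dur=0.25, f_min=220.0, f_max=880.0):
    # Illustrative PMS mapping: each value sets the pitch of one tone.
    data = np.asarray(data, dtype=float)
    lo, hi = data.min(), data.max()
    span = (hi - lo) or 1.0                        # guard against flat data
    t = np.linspace(0.0, note_dur, int(sr * note_dur), endpoint=False)
    tones = []
    for x in data:
        # Linear parameter mapping: data range -> frequency range.
        f = f_min + (x - lo) / span * (f_max - f_min)
        tones.append(np.sin(2.0 * np.pi * f * t))
    return np.concatenate(tones)

signal = pms_sonify([3.0, 1.5, 4.2, 2.8, 5.0])     # rising values, rising pitch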
A more sophisticated point of view, also discussed by Thomas Hermann, is one in which the data used to sonify does not create sound directly; rather, it shapes the sound-generation engine or instrument, which produces sound only when excited or activated by an arbitrary impulse or stimulus. This paradigm is called Model-Based Sonification (MBS): a system is created whose acoustic characteristics embody the data under examination.
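A minimal sketch of the MBS idea follows, under the assumption of a simple bank of exponentially decaying sine modes; the mapping of data to mode frequency and decay time is an illustrative choice. Note that the data only configures the model, and sound appears exclusively when the model is excited.

import numpy as np

def build_model(data, f_base=110.0):
    # The data shapes a virtual object: each point becomes one
    # resonant mode, fixing its frequency and its decay time.
    data = np.abs(np.asarray(data, dtype=float))
    freqs = f_base * (1.0 + data)                  # mode frequencies (Hz)
    decays = 0.2 + 0.1 * data                      # decay times (s)
    return freqs, decays

def excite(model, sr=44100, dur=1.5):
    # The model stays silent until struck: an impulse makes every
    # mode ring and decay, so the data is heard only in the response.
    freqs, decays = model
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    out = np.zeros_like(t)
    for f, d in zip(freqs, decays):
        out += np.exp(-t / d) * np.sin(2.0 * np.pi * f * t)
    return out / len(freqs)

signal = excite(build_model([0.4, 1.1, 2.7, 0.9]))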
Audification, finally, is the simplest form of Sonification, in which a stream of data is taken directly from, or mapped out of, a one-dimensional data set (Vogt, 2008). It relies on simple relationships between sound and data, such as shifting the frequencies of the Sun's vibrations into the audible range.
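The sketch below audifies a one-dimensional data set by treating it directly as a waveform; the decimation step used to shift frequencies upwards is one simple, assumed choice among several possible ones.

import numpy as np

def audify(data, speedup=1):
    # Play the data stream itself as audio. Keeping every n-th
    # sample is a crude time compression that multiplies every
    # frequency in the data by n, e.g. lifting sub-audio
    # oscillations into the audible range.
    x = np.asarray(data, dtype=float)
    x = x - x.mean()                               # remove DC offset
    x = x / (np.abs(x).max() + 1e-12)              # normalise to [-1, 1]
    return x[::speedup]

signal = audify(np.sin(np.linspace(0.0, 200.0, 44100)), speedup=4)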
From this point of view, the 3D-sound Sonification system proposed here can be regarded as MBS, since the information acquired from the spreadsheet files is used to place the sounds in a virtual space, not to create the sounds themselves. The locations of these sounds in space then produce different acoustic textures to be perceived by the listener.
On the other hand, the system of this thesis can also act as a PMS engine, because the same files can be used to control direct sound-creation parameters such as frequency, playback speed and so on. This can run in parallel with the model-building paradigm of MBS, so that the same data set controls, for instance, both position in the virtual space and pitch, as the sketch below illustrates.
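In this sketch of the parallel use, the four-column row layout (three coordinates plus one value) and the pitch range are hypothetical assumptions made for the example.

import numpy as np

def map_row(row, f_min=220.0, f_max=880.0):
    # One spreadsheet row feeds both paradigms at once: the first
    # three (assumed) columns place the source in the virtual space,
    # the MBS side, while the fourth sets the pitch of the sound
    # itself, the PMS side.
    x, y, z, value = row
    position = np.array([x, y, z])                 # handed to the spatializer
    pitch = f_min + float(np.clip(value, 0.0, 1.0)) * (f_max - f_min)
    return position, pitch

position, pitch = map_row([1.0, -0.5, 2.0, 0.75])  # pitch = 715 Hz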
As seen in (Braasch, Peters, & Valente, 2008), a variety of techniques have been tailored to each occasion in which Sonification is used. Braasch discusses the use of chromatic scale structures as a basis for creating sound, assuming a Perceptual Sound Space (PSS): the parameters or data in use span a multidimensional space (3D in this case), which is then mapped to control characteristics of the sound, such as pitch, gain and timbre.
This latter approach is more abstract: a 3D space is created whose axes represent three dimensions of sound, and the data at hand is mapped onto those axes. For instance, a colour space in which hue, saturation and lightness are arranged as a cylinder can be translated into timbre, brightness and pitch.
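The following sketch renders one point of such a cylinder as sound; the specific ranges and harmonic weightings are illustrative assumptions and do not reproduce the actual mapping of (Braasch, Peters, & Valente, 2008).

import numpy as np

def pss_tone(hue, saturation, lightness, sr=44100, dur=0.5):
    # Illustrative PSS mapping with hue, saturation and lightness
    # normalised to 0..1: lightness -> pitch, saturation ->
    # brightness (strength of the upper partials), hue -> timbre
    # (balance of odd versus even harmonics).
    f0 = 110.0 * 2.0 ** (3.0 * lightness)          # pitch over three octaves
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    out = np.zeros_like(t)
    for k in range(1, 9):                          # eight partials
        timbre_w = hue if k % 2 else 1.0 - hue     # hue skews odd/even
        bright_w = saturation ** (k - 1)           # saturation lifts the top
        out += timbre_w * bright_w * np.sin(2.0 * np.pi * k * f0 * t) / k
    return out / (np.abs(out).max() + 1e-12)

signal = pss_tone(hue=0.3, saturation=0.8, lightness=0.5)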
Sonification engines are available commercially online (ICAD, 2011), and data-processing tools such as MATLAB are familiar to all scientists. Overall, these engines are made to measure for the special purpose of creating sound out of data. However, the lack of an established trend or common practice in Sonification makes it possible to bypass these premade engines and use the highly versatile software packages available, such as MAX/MSP, to create our own Sonification application.
Although not homogeneous, Sonification techniques and paradigms, and the project presented here, can follow certain accepted procedures, such as the layered construction of spatial audio (Peters, Lossius, Schacher, Baltazar, Bascou, & Place, 2009) and the taxonomy proposed by Hermann (Hermann, 2008), so that uniformity can emerge after some time of using these display techniques. The Sonification community is relatively young; various writings on the subject have established some shared ideas, and people seem to be speaking the same language, even though there is still work to be done.
For instance, Ibrahim and Hunt (Ibrahim & Hunt, 2006) propose two new approaches to designing Sonification systems. These tackle the task in an abstract way, focusing on the objectives of the application and the perceptual interaction of the user, through two different models: the Sonification Application Model (SA model) and the User Interpretation Construction Model (UIC model). The first is concerned with the efficiency of the system and the method of sonifying, while the latter addresses the design goals from a Human-Computer Interaction (HCI) perspective, so that a productive perception of the Sonification is achieved.
…
Further in this chapter:
2.1 SONIFICATION
2.1.1 SONIFICATION DEFINITIONS AND CONCEPTS
2.2 SPATIALIZATION
2.2.1 ACOUSTICS INVOLVED IN SPATIALIZATION
2.2.1.1 COORDINATES SYSTEM
2.2.1.2 DELAY AND GAIN
2.2.1.3 REFLECTIONS
2.2.1.4 SOUND ACQUIREMENT
2.2.2 SPATIALIZATION TECHNIQUES
2.2.2.1 ViMiC
2.2.2.1.1 BASIC FUNCTIONING
2.2.2.2 JAMOMA
2.2.2.2.1 ViMiC MODULES
2.2.2.2.2 OUTPUT MODULES