Discussion

Overall, the model performed well: it creates sound illusions in space and works effectively and intuitively with the proposed structure. There is still plenty of room to explore the system so that it can be applied to a wide spectrum of circumstances and a similarly wide range of data types.

The use of audio files containing samples of real-life sounds was a central characteristic of the installation. This is important because, as noted in (Saue, 2000), everyday metaphors related to what the sound is intended to describe ease the comprehension of the information. That is to say, coherent stimuli are used to help the listener grasp the correlation between the model and the data.

During the development and implementation of the model, tests revealed interesting points for further improvement, as well as limitations in the system's performance.

To begin with, perceptual accuracy is a central issue for this model. Test subjects easily described the rough movement of sounds, whereas subtler changes in the position of sources in space caused confusion and were not perceived as intended. It follows that the system is useful for describing relative positions when more than one sound source is used, since the quantitative perception of absolute positions proved nearly impossible. In other words, the model can express differences in position between multiple sources separated by a large perceptual distance, so that the relative position between the spatialized sounds, rather than their absolute positions, is the quality to observe.

The data reading is another aspect worth revisiting, since much can be improved there. In the current system, each of the 16 Data Readers works independently of the others. Although a single Data Reader can read a CSV file and map it to more than one Stream Selector, creating a synchronized mapping, it is difficult to handle the case in which two or more CSV files are correlated, as is true of most scientific data sets, which typically describe independent and dependent values that must be read in parallel.
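
To make the current behaviour concrete, the following Python sketch models one independent Data Reader feeding several Stream Selectors. The class and method names are hypothetical, since the actual system is built as Max/Jamoma patches; the sketch only illustrates the data flow.

```python
import csv

class DataReader:
    """Sketch of a single Data Reader: streams one CSV column to its targets."""
    def __init__(self, path, column):
        with open(path, newline='') as f:
            self.values = [float(row[column]) for row in csv.DictReader(f)]
        self.index = 0
        self.targets = []                 # e.g. Stream Selector callbacks

    def attach(self, callback):
        # One reader can feed several Stream Selectors, keeping them in sync.
        self.targets.append(callback)

    def tick(self):
        # Called once per reading interval; pushes the current value to every target.
        if self.index < len(self.values):
            for callback in self.targets:
                callback(self.values[self.index])
            self.index += 1
```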

Implementing a mechanism to link several Data Readers, so that they start and stop streaming at the same time and/or share the same reading rate, would address this need. Moreover, the linking should be abstracted so that the reading rate can be shared independently of the start or stop point. In this fashion, Data Readers that need to be synchronized would simply be selected and grouped together, as sketched below.
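
A minimal sketch of this proposed linking mechanism, again in Python and with hypothetical names: a group object holds one shared clock, so grouped readers start, stop, and advance together, while the same tick rate could equally be shared by readers that start at different points.

```python
class ReaderGroup:
    """Sketch of the proposed linking: grouped Data Readers share one clock."""
    def __init__(self):
        self.readers = []
        self.running = False

    def add(self, reader):
        self.readers.append(reader)

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def tick(self):
        # One shared tick advances all grouped readers, keeping correlated
        # CSV columns (independent and dependent values) aligned row by row.
        if self.running:
            for reader in self.readers:
                reader.tick()
```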

Another interesting direction would be to create data sets that represent movement from one point in space to another, so that the motion of sound can be designed to purpose, creating paths of moving sound that travel from one fixed sound in space to another. This could express transmission between sonic objects, with each object's density, volume or intensity varying according to the rate of transmission between them.
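
As an illustration, such a trajectory data set could be generated offline and then streamed by a Data Reader. The Python sketch below writes a linear path between two points; the file name, column names and coordinates are arbitrary choices for the example.

```python
import csv

def write_trajectory(path, start, end, steps):
    """Write a CSV trajectory that moves a source linearly from start to end."""
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['x', 'y', 'z'])
        for i in range(steps):
            t = i / (steps - 1)
            writer.writerow([round(a + t * (b - a), 4) for a, b in zip(start, end)])

# Example: a path from one fixed sonic object to another.
write_trajectory('path.csv', start=(-2.0, 1.0, 0.0), end=(2.0, 1.0, 1.5), steps=200)
```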

Regarding the physical layer, speaker positioning was not explored fully; rather, the positions were selected by inspection, assuming that a symmetrical layout would render the best acoustic environment. Nevertheless, there are still positions to explore that could improve perception. A good start would be to use a wider range of Z positions for the speakers, so that elevation can be heard more clearly. Measuring positions and testing them would yield a statistical distribution of the effectiveness of different layouts.
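
For example, candidate layouts with a wider spread of Z positions could be generated and compared systematically. The Python sketch below builds a symmetrical layout from two rings at different elevations; the radii and heights are arbitrary example values, not the ones used in the installation.

```python
import math

def ring(n, radius, z):
    """Return n speaker positions evenly spaced on a circle at height z."""
    return [(round(radius * math.cos(2 * math.pi * k / n), 3),
             round(radius * math.sin(2 * math.pi * k / n), 3),
             z) for k in range(n)]

# Candidate layout: a main ring at ear height plus a raised ring,
# widening the range of Z positions so elevation cues are easier to hear.
layout = ring(8, radius=2.5, z=1.2) + ring(4, radius=2.0, z=2.4)
```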

Finally, the patch has a bug to solve in the assignment of MIDI controllers. When manipulating the system, the number of parameters can easily exceed the number of physical controllers available, so a method for reusing knobs is needed, whether by creating pages on the device itself or by developing a way to unmap parameters from hardware knobs and sliders so that they can be reused and recycled.
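
Either remedy amounts to a small mapping layer between the hardware and the parameters. The Python sketch below uses hypothetical names and is not part of the patch; it only shows how pages plus explicit unmapping would let a limited number of knobs address many parameters.

```python
class MidiMapper:
    """Sketch: map a limited set of hardware knobs to many parameters via pages."""
    def __init__(self, num_knobs):
        self.num_knobs = num_knobs
        self.pages = [{}]          # each page maps knob index -> parameter name
        self.current = 0
        self.values = {}           # last received value per parameter

    def map(self, knob, parameter):
        self.pages[self.current][knob] = parameter

    def unmap(self, knob):
        # Free a knob so it can be reused for another parameter.
        self.pages[self.current].pop(knob, None)

    def add_page(self):
        self.pages.append({})

    def select_page(self, index):
        self.current = index

    def handle_cc(self, knob, value):
        # Route an incoming control change to whatever parameter the knob
        # is assigned to on the active page (if any).
        parameter = self.pages[self.current].get(knob)
        if parameter is not None:
            self.values[parameter] = value
```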

An update on the discussion of this project as of June 2013.

An OS update, as well as an upgrade from Max 5 to Max 6, caused incompatibilities with most of the patches and Jamoma modules used in the project.

However, I coded a simple ViMiC system that reproduces the acoustical functionality of the original Jamoma patches and arrives at the same perceptual results, though it drops SpatDIF (Spatial Sound Description Interchange Format) support. Note that this format should still be considered in the future.
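
For reference, the kind of per-virtual-microphone computation a ViMiC-style renderer performs is sketched below in Python. The first-order directivity formula and 1/r attenuation are generic textbook approximations used here for illustration only, not the exact Jamoma implementation or the code written for this project.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def vimic_gain_delay(source, mic_pos, mic_aim, pattern=0.5):
    """Approximate gain and delay for one virtual microphone.

    pattern: 0.0 = omni, 0.5 = cardioid, 1.0 = figure-of-eight
    (first-order mix of omnidirectional and pressure-gradient terms).
    """
    dx = [s - m for s, m in zip(source, mic_pos)]
    distance = max(math.sqrt(sum(d * d for d in dx)), 1e-6)
    aim_norm = math.sqrt(sum(a * a for a in mic_aim))
    # Cosine of the angle between the microphone's aim and the source direction.
    cos_theta = sum(d * a for d, a in zip(dx, mic_aim)) / (distance * aim_norm)
    directivity = (1 - pattern) + pattern * cos_theta   # first-order pattern
    gain = max(directivity, 0.0) / distance             # 1/r distance attenuation
    delay = distance / SPEED_OF_SOUND                   # propagation delay in seconds
    return gain, delay

# Example: one virtual cardioid microphone at the origin, aimed along +x.
g, d = vimic_gain_delay(source=(2.0, 1.0, 0.0),
                        mic_pos=(0.0, 0.0, 0.0),
                        mic_aim=(1.0, 0.0, 0.0))
```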
