As Nasir and Roberts note (Nasir & Roberts, 2007), when data that includes spatial information is sonified, in the sense of Sonification presented in this paper, and the resulting sound is spatialized, that spatial information can be used directly to feed the Spatialization method, producing a synergy that enhances the perception of the model.
Spatialization techniques are therefore well suited to displaying spatial data, although they face the difficulty of accurately perceiving the information conveyed by spatial audio. During the development and testing of the present model, test subjects reported being able to distinguish coarse movements of sound sources in space, but were unable to extract a useful description of the movement in more detailed situations, such as when the sound was moved with the aid of polar coordinates.
Whereas with Cartesian control of the sound sources the test subjects could roughly locate the sound within a large area of the space, polar-coordinate control caused confusion. This may be due to the nature of the movement, which is more complex and usually involves two dimensions at once. When the sound moved in only one dimension, for example a source placed to the right of the listener panning from front to back, the transition could easily be identified.
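The observation above can be made concrete with a small sketch (a hypothetical helper, not part of the model's implementation): sweeping a single polar coordinate, here the azimuth at a fixed radius, moves the source along both Cartesian axes simultaneously, which may explain why polar-driven movements were perceived as more complex.

```python
import math

def polar_to_cartesian(radius, azimuth_deg):
    """Convert a polar position (radius, azimuth) to Cartesian (x, y).

    Azimuth is measured in degrees clockwise from the front of the
    listener, so 0 deg is straight ahead and 90 deg is to the right.
    (Hypothetical convention, chosen only for this illustration.)
    """
    a = math.radians(azimuth_deg)
    return (radius * math.sin(a), radius * math.cos(a))

# Varying only the azimuth (one polar coordinate) at a fixed radius:
for az in (0, 30, 60, 90):
    x, y = polar_to_cartesian(1.0, az)
    print(f"azimuth {az:3d} deg -> x = {x:+.2f}, y = {y:+.2f}")
```

Both x and y change at every step of the sweep, so a listener tracking the source must follow a curved, two-dimensional trajectory, whereas varying a single Cartesian coordinate produces a straight, one-dimensional pan.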
In the same paper, Nasir and Roberts also address the case in which non-spatial data is used to render spatial audio, noting that such data “might either be hidden from the human eye or is negligible in a visualization overview”. This points to one of the basic motivations of data Sonification: to reveal characteristics or behaviors that graphic methods have not shown, and/or to exploit the inherent attributes of sound.
Further in this chapter: