It is a technique used to render sound spatialization. It is based on the concept of creating virtual sound environments, in which virtual sound sources are captured by virtual microphones. The signal arriving at each virtual microphone, shaped by that microphone's characteristics, is then reproduced by a real loudspeaker placed at the same location, and with the same orientation, as the corresponding microphone.

This type of control for computer audio processing was first conceptualized by Braasch in 2005 and coded in the Pure Data environment (Peters, Doctoral Thesis, 2010), and later migrated into MAX/MSP through work by Nils Peters et al. (Peters, Matthews, Braasch, & Stephen, 2008). It is based on the virtual-microphone approach studied by Corey et al. and Mouchtaris et al. (Braasch, Peters, & Valente, 2008), in which a virtual space is modelled in a computer environment, virtual microphones are placed within it, and the gain and timing differences caused by the relative positions of the sound source and the microphones are computed.

The idea is to emulate the physical behavior of sound capture by microphones in a real acoustic environment, in which a sound source radiates energy that arrives at the different microphones with different gains and delays. The signals at the microphones represent how the sound is distributed in the space; by reproducing each microphone's signal through a loudspeaker placed in the same array the microphones occupied when the sound was captured, the perception of the sound source's position is recreated. The concept is shown in Figure 6.
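The gain and delay relationship described above can be sketched with a short calculation. This is a minimal illustration, not the ViMiC implementation itself: it assumes free-field propagation with simple 1/r attenuation and a fixed speed of sound, and all function and variable names are hypothetical.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at 20 °C


def mic_gain_delay(source_pos, mic_pos):
    """Free-field gain and delay of a source as seen by one virtual microphone."""
    dx = source_pos[0] - mic_pos[0]
    dy = source_pos[1] - mic_pos[1]
    distance = math.hypot(dx, dy)
    delay = distance / SPEED_OF_SOUND   # propagation time in seconds
    gain = 1.0 / max(distance, 1.0)     # 1/r attenuation, clamped near the source
    return gain, delay


# A source at (2, 0) m and three virtual microphones placed on a line
source = (2.0, 0.0)
mics = [(-1.0, 2.0), (0.0, 2.0), (1.0, 2.0)]
for i, mic in enumerate(mics):
    g, d = mic_gain_delay(source, mic)
    print(f"mic {i}: gain {g:.3f}, delay {d * 1000:.2f} ms")
```

Feeding each microphone's delayed, attenuated signal to a loudspeaker in the corresponding position is what recreates the perceived source location.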

This is the basis of the work used to create Wave Field Synthesis with the aid of large microphone arrays arranged in a line or "curtain", building on Huygens' principle and the Kirchhoff-Helmholtz integral (Braasch, Peters, & Valente, 2008; Corteel & Caulkins, 2004). As shown in Figure 6, this setup aims to create the illusion of a given sound source as if it had been captured by a set of microphones.

Figure 6. Representing the microphone-speakers transition (Braasch, Peters, & Valente, 2008).

An important feature of ViMiC, as opposed to other spatialization techniques, is that the number and placement of the loudspeakers are more flexible; other procedures require a specific number and symmetrical arrangement of speakers to create the desired sonic illusion. This gives designers (creators, programmers, musicians and experimenters) a great deal of freedom to work in a variety of venues and with the hardware at hand.

In fact, this flexibility, and the ability to migrate the system from one set of resources to another, is an important part of the design of the current ViMiC application in MAX/MSP and of the project presented here. A layered structure is proposed, similar to the OSI network layering, which abstracts the different parts of the system and makes it applicable to a variety of circumstances and purposes (Peters, Lossius, Schacher, Baltazar, Bascou, & Place, 2009).
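The benefit of such layering can be sketched as follows. This is only an illustrative analogy, not the actual layer decomposition of the ViMiC software: the layer names, classes, and the simple 1/r gain rule are all assumptions made for the example.

```python
import math


class SceneLayer:
    """Describes sources abstractly (name, position), independent of any hardware."""

    def __init__(self):
        self.sources = {}

    def set_source(self, name, position):
        self.sources[name] = position


class RenderingLayer:
    """Maps the abstract scene onto one concrete speaker layout.

    Swapping this layer (or its speaker list) adapts the same scene
    to a different venue, without touching the scene description.
    """

    def __init__(self, scene, speaker_positions):
        self.scene = scene
        self.speakers = speaker_positions

    def speaker_gains(self, name):
        src = self.scene.sources[name]
        # Simple distance-based gain per speaker, clamped near the source
        return [1.0 / max(math.dist(src, spk), 1.0) for spk in self.speakers]


scene = SceneLayer()
scene.set_source("violin", (2.0, 0.0))
renderer = RenderingLayer(scene, [(-1.0, 2.0), (1.0, 2.0)])
print(renderer.speaker_gains("violin"))
```

Because each layer only talks to the one below it, the same scene description can drive different speaker arrays, which is the kind of portability the layered design aims for.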

Further in this chapter: