Almost ready with the programming structure!

The Max patches’ interfaces are almost ready and already working with MIDI controllers!

To create the soundscape and illusions I’ve been writing about, I will be using Max/MSP as the main sound-programming platform. It allows powerful manipulation of sound and signal processing as well as interfacing. I am also using some external “plug-ins” for Max, created by researchers interested in sound spatialization. These plug-ins come from a set of modules called Jamoma [1], and they do the core processing for the Virtual Microphone Control (ViMiC) rendering.

The idea is to take sound files, from now on referred to as sound sources or just sources, and place them somewhere in the space where the speakers are set up. The illusion of these sources actually moving across the listening area will be accomplished with the ViMiC technique, which distributes the sound of these sources across the available speakers accordingly. The sources can be moved with the aid of external MIDI controllers, and this movement can also be automated via programs developed in Max/MSP.
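To give an idea of what the automated movement amounts to in terms of data, here is a small Python sketch (not part of the actual Max patch, just an outline of the concept) that traces a circular trajectory for one source; in the real application, equivalent x/y pairs are generated inside Max/MSP and fed to the ViMiC rendering.

```python
import math
import time

def circular_path(radius=2.0, period=8.0, steps=64):
    """Yield (x, y) positions tracing a circle around the listening area.

    radius -- distance of the source from the centre, in metres
    period -- time in seconds for one full revolution
    steps  -- number of position updates per revolution
    """
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        yield x, y
        time.sleep(period / steps)

if __name__ == "__main__":
    # In the actual patch these values would drive the position of a
    # ViMiC source instead of being printed.
    for x, y in circular_path():
        print(f"source 1 position: x={x:+.2f} m  y={y:+.2f} m")
```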

The main interface is divided into three parts, dealing with the ViMiC processing itself, the sound sources’ positions, and the sound sources’ descriptions and parameters.

Some other sub-patches work alongside the main one, adding automatic movement to the selected source and changing the maximum and minimum values of the dials used across the programs. The structure of the Max patching follows the stratified approach for sound spatialization proposed by Nils Peters et al. in [2]. This structure is shown in Figure 1 below.
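The dial rescaling is essentially a linear mapping from the 0–127 range a MIDI controller sends to whatever minimum and maximum each dial needs. A minimal Python sketch of that mapping (the patch does the equivalent with Max objects) could look like this:

```python
def midi_to_dial(cc_value, dial_min, dial_max):
    """Map a 7-bit MIDI controller value (0-127) onto a dial's range."""
    cc_value = max(0, min(127, cc_value))          # clamp to the valid MIDI range
    return dial_min + (dial_max - dial_min) * cc_value / 127.0

# Example: a gain dial from -60 dB to +6 dB, and an azimuth dial from 0 to 360 degrees
print(midi_to_dial(64, -60.0, 6.0))    # roughly the middle of the gain range
print(midi_to_dial(127, 0.0, 360.0))   # full turn of the azimuth dial
```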

Figure 1. Structure of the soundscape application so far.

In it, the design of the system is layered by matching functionalities: Authoring, Scene Description, Encoding/Decoding and Hardware Abstraction. This stratification helps future interoperability with other spatialization applications.

The Authoring Layer, sitting above the Scene Description and Hardware Abstraction Layers, covers the software written to face the end user; in other words, the interface through which the system is managed.

Layer 5 covers all definitions regarding the virtual space and the sound sources created in the spatialization system. Layers 4 and 3 carry out the actual signal processing: they take the data from Layer 5 and render it into audio signals, which are then passed to Layer 2, where the local audio reproduction tools handle them.
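To give a feeling for what the encoding step in Layers 4 and 3 involves, here is a deliberately simplified Python sketch of the basic idea behind Virtual Microphone Control: each loudspeaker feed is derived from a virtual microphone that has a directivity pattern and a distance to the source. The actual Jamoma ViMiC modules do considerably more (delays, air absorption, room reflections), so this is only a conceptual outline, not their implementation.

```python
import math

def vimic_gain(source_xy, mic_xy, mic_aim_deg, pattern=0.5):
    """Very simplified per-microphone gain in the spirit of ViMiC.

    pattern -- directivity coefficient: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8
    The gain combines the microphone's directivity response with a
    simple 1/distance attenuation law.
    """
    dx = source_xy[0] - mic_xy[0]
    dy = source_xy[1] - mic_xy[1]
    distance = max(math.hypot(dx, dy), 0.1)          # avoid division by zero
    incidence = math.atan2(dy, dx) - math.radians(mic_aim_deg)
    directivity = pattern + (1.0 - pattern) * math.cos(incidence)
    return max(directivity, 0.0) / distance

# A source at (1, 2) metres, picked up by two cardioid virtual microphones
print(vimic_gain((1.0, 2.0), (0.0, 0.0), mic_aim_deg=90.0))   # mic aimed straight ahead of the source
print(vimic_gain((1.0, 2.0), (-1.0, 0.0), mic_aim_deg=45.0))  # mic aimed diagonally
```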

As said before, all the control dials (gains, sources’ positions, etc.) are routed to work with MIDI controllers; a patch found on the Max Help blog published by the University lecturers was used to make the program learn from the controller and automatically select the knob or slider used for each dial on the interface.
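The MIDI-learn behaviour itself is conceptually simple: while a dial is armed for learning, the first control-change message that arrives gets bound to it, and from then on that controller number drives that dial. Below is a rough Python sketch of the idea (the real implementation is the Max patch mentioned above, not this code).

```python
class MidiLearn:
    """Bind incoming MIDI control-change numbers to named interface dials."""

    def __init__(self):
        self.bindings = {}      # CC number -> dial name
        self.learning = None    # dial currently waiting for a controller

    def arm(self, dial_name):
        """Select the dial that the next incoming CC message will control."""
        self.learning = dial_name

    def handle_cc(self, cc_number, cc_value):
        """Process one control-change message (cc_value in 0-127)."""
        if self.learning is not None:
            self.bindings[cc_number] = self.learning
            self.learning = None
        dial = self.bindings.get(cc_number)
        if dial is not None:
            print(f"{dial} <- {cc_value}")

# Example: arm the master gain dial, move knob 21, then use it again
learn = MidiLearn()
learn.arm("master_gain")
learn.handle_cc(21, 100)   # first message binds CC 21 to master_gain
learn.handle_cc(21, 64)    # subsequent messages just drive the dial
```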

Figure 2. Interface for the main patch of the ViMiC application.

1. http://www.jamoma.org

2. Peters, Nils et al. A Stratified Approach for Sound Spatialization. In Proceedings of the 6th Sound and Music Computing Conference, Porto, Portugal, 2009.