
Raumklang Reflections

During their one-week residency at V2 in March, Zeno van den Broek and Robin Koek developed the virtual framework that will serve as the groundwork for further development of their work Raumklang. In this text they offer their reflections on the working process.

During our one-week residency at V2 in March we developed the virtual framework that will serve as the groundwork for further development of Raumklang. V2_, Lab for the Unstable Media, is an interdisciplinary center for art and media technology in Rotterdam, the Netherlands. We thank V2 for their support!

At the start of the residency we had four main objectives:

1.) Developing the fundamental visual architecture for the design of the sculptures
2.) Developing the first prototype of the sound engine
3.) Integrating physical interaction in the space
4.) Evaluating the interaction

For the first objective we required a flexible environment that would support nonlinear interaction and scalable dimensions, allowing us to adjust the designs to the physical measurements of the exhibition space.

For this stage of the design process we adopted the TouchDesigner environment because of its advanced visual-programming functionality, optimized for interactive workflows. During the residency we developed the core patch, which allows us to draw a sculpture in two-dimensional space based on shape vectors.
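As an illustration of this structure, here is a minimal sketch in plain Python (the actual patch lives in TouchDesigner; the class, the breakpoints and the room dimensions are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Sculpture:
    """A sculpture as a 2D polyline of breakpoints in normalized 0..1 space."""
    points: list[tuple[float, float]]

    def scaled(self, room_w: float, room_h: float) -> list[tuple[float, float]]:
        """Map the normalized design onto the measured room dimensions (metres)."""
        return [(x * room_w, y * room_h) for x, y in self.points]

# A short 'wall' of three breakpoints, fitted to an 8 x 6 m exhibition space.
wall = Sculpture([(0.2, 0.5), (0.5, 0.3), (0.8, 0.5)])
print(wall.scaled(8.0, 6.0))   # [(1.6, 3.0), (4.0, 1.8), (6.4, 3.0)]
```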

The next aspect of the visual feedback was to introduce the interactor and relate them, in terms of proximity and angle, to the drawn sculpture. We initially devised a simulation that would track the distance and sightlines of the spectator towards the sculpture. Later in this report we elaborate on how we interpreted these sightlines in sonic terms, and on how part of the research is a flowing back and forth between optical and auditory terminology.

The next stage was to develop a sound engine that would sonify the design crafted in the virtual environment, materializing the sculpture for the ears. For this initial phase of the sound framework we adopted Max/MSP because of its open-ended architecture and its optimization for interactive prototyping. During the residency we developed the core patch for the sound of the sculpture, based on different applications of granular synthesis; the motivation for this technique is expanded on in the section 'Materiality'. The resulting patch is a dynamic system, relative to the visual layout and mapped to the position and rotation of the interactor, for real-time binaural projection of the sculpture.

At this point we arrive at the third objective: integration of physical interaction with the space. During our residency at V2 we used the Usomo tracking system to measure the position and angle of the interactor moving through the space. The system is installed on the ground floor and is based on four beacons that interpolate the relative position of the user via an individual tag. A gyroscope mounted on the headphone bracket, integrated with the tag, defines the rotation of the head. This information is sent as a single package over a Bluetooth network for processing. The system is robust and precise, with low latency, which made it well suited to a week of prototyping interactions in space.

Connection and interpretation
For the overall connections between the tracking data, the sound engine and the visual representation of the sculpture we used the Open Sound Control (OSC) network protocol, which runs over UDP on an arbitrary port. The raw sensor data had to be interpreted to relate to the sculpture; these transformations were done in TouchDesigner, converting the x and y coordinates and the angle into a sightline and a proximity of the user relative to the virtual sculpture, based on the actual dimensions and physicality of the room. The sensor data coming from Usomo has a very high resolution, up to nine decimal places; during tests we noticed that, to reduce network latency, it was best to bring the precision back to two decimal places, which was still accurate but produces lighter data packages and more stable feedback on the body movement.
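As a sketch of this data path, the following Python snippet (using the python-osc package) receives a position message, rounds it to two decimal places and derives proximity and bearing to a virtual point. The OSC address, port and message layout are our own illustrative assumptions, not the actual Usomo format:

```python
import math
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

SCULPTURE = (4.0, 3.0)  # illustrative anchor point of the sculpture (metres)

def on_position(address, x, y, heading_deg):
    x, y = round(x, 2), round(y, 2)      # two decimals proved accurate enough
    dx, dy = SCULPTURE[0] - x, SCULPTURE[1] - y
    proximity = math.hypot(dx, dy)       # distance listener -> sculpture
    bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
    print(f"distance {proximity:.2f} m, bearing {bearing:.1f} deg")

dispatcher = Dispatcher()
dispatcher.map("/usomo/position", on_position)   # assumed address
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```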

Orientation
One of the main research questions of Raumklang is: how does one orient towards auditory sculptures? How do we realize a sense of physicality and materiality with a projection solely on headphones? One concept we introduced during this residency is representing orientation towards the sculpture through auditory 'sightlines'. A sightline represents which part of the sculpture you see (i.e. hear) based on your viewing angle. For these sightlines we combined two concepts: optical stereoscopy, where one eye is offset from the other and the combined pair produces the full stereoscopic spatial image, and the psycho-acoustic concept of the inter-aural time difference, the short delay between our ears (its length determined by the skull and the shape of the ears) that is vital to our sense of depth and direction in a sound image. Based on this idea the sightline is represented as a triangular vector: one point is the spectator, and the other two points define where the sightline meets the sculpture. The distance between these two points is then a simulation of these biological offsets in our perception. It must be noted that this is all pseudo-scientific, mainly inspired by concepts we understand from spatial cognition!
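A minimal geometric sketch of such a sightline in Python; the two rays are splayed by an assumed two degrees either side of the viewing angle as a stand-in for the biological offset:

```python
import math

def ray_segment_hit(origin, angle_deg, a, b):
    """Intersect a ray (origin, angle) with segment a-b; return point or None."""
    ox, oy = origin
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    ax, ay = a
    bx, by = b
    sx, sy = bx - ax, by - ay
    denom = dx * sy - dy * sx
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the wall
    t = ((ax - ox) * sy - (ay - oy) * sx) / denom     # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom     # position along the segment
    if t > 0 and 0 <= u <= 1:
        return (ox + t * dx, oy + t * dy)
    return None

listener, heading = (4.0, 1.0), 90.0                  # facing 'up' toward the wall
wall = ((1.0, 5.0), (7.0, 5.0))
left = ray_segment_hit(listener, heading + 2.0, *wall)    # +/- 2 deg splay
right = ray_segment_hit(listener, heading - 2.0, *wall)
print(left, right)   # the two read points; their distance feeds the offset
```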

The next element important for orientation towards the sculpture is the radiation of sound relative to distance; in other terms, the amplitude of sound based on proximity to the sculpture. During the residency we experimented with different curves for scaling loudness to distance. We found that exponential and quadratic curves give more realistic results than linear amplification, which makes sense if we relate back to real acoustics, where linear attenuation does not occur. We also experimented with table functions for drawing custom slopes; this felt the most precise, as we could really tailor the curve to our evaluation.
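The following sketch compares the curves we tried; the reference distance, the exponent and the table breakpoints are illustrative values, not the ones used in the prototype:

```python
import math

def linear(d, d_max=10.0):
    """Gain falls in a straight line, reaching zero at d_max metres."""
    return max(0.0, 1.0 - d / d_max)

def quadratic(d, d_max=10.0):
    """Same span as linear, but rolling off faster toward silence."""
    return max(0.0, 1.0 - d / d_max) ** 2

def exponential(d, k=0.5):
    """Decay that never quite reaches zero, closer to real acoustics."""
    return math.exp(-k * d)

def table(d, points=((0.0, 1.0), (2.0, 0.7), (6.0, 0.2), (10.0, 0.0))):
    """Hand-drawn breakpoint curve, linearly interpolated between points."""
    for (d0, g0), (d1, g1) in zip(points, points[1:]):
        if d <= d1:
            return g0 + (g1 - g0) * (d - d0) / (d1 - d0)
    return 0.0

for d in (0.5, 2.0, 5.0, 9.0):
    print(d, round(linear(d), 3), round(quadratic(d), 3),
          round(exponential(d), 3), round(table(d), 3))
```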

To iterate on directional perception and simulate a material volume in binaural space, an important aspect is to include filtering relative to head rotation, as frequencies behind us are absorbed by the skull and shaped by the form of our ears. Commonly this topic is addressed by introducing an HRTF, or Head-Related Transfer Function, into the spatialization, which convolves the sound with a three-dimensional plot of spectral attenuation measured at the outer ear (pinna) to render a realistic spatial image. One challenge with this technique is that it can be quite CPU-intensive. This is one of the reasons we decided to approach the same concept with angular low-pass filtering relative to the sculpture, based on orientation. The implementation is addressed in the next section, which describes simulating a sense of materiality.
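A sketch of this cheaper approach: a one-pole low-pass whose cutoff falls as the sculpture rotates behind the listener. The cutoff range and the exponential sweep between front and back are assumptions:

```python
import math

def cutoff_for_angle(angle_deg, f_front=18000.0, f_back=1200.0):
    """Map 0 deg (in front) .. 180 deg (behind) to a low-pass cutoff in Hz."""
    a = min(abs(angle_deg) % 360, 360 - abs(angle_deg) % 360)  # fold to 0..180
    t = a / 180.0
    return f_front * (f_back / f_front) ** t   # exponential sweep sounds smoother

def one_pole_lowpass(samples, cutoff_hz, sr=44100):
    """Simple one-pole filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

print(cutoff_for_angle(0), cutoff_for_angle(90), cutoff_for_angle(180))
```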

Materiality

Next to cues for orientation, another important challenge for the sculptures is to evoke and suggest a sense of sonic materiality. Our goal is to create not only virtual points where sound is emitted but also volumes, lines and other shapes of spatial sound that are more related to vector shapes than to point sources. These spatial forms will relate directly to a sense of materiality: walls can have a certain thickness, they can block sounds from other sources and have various acoustic characteristics.

As the sound material of the sculptures will be based on acoustic recordings of the space made in situ, we came to the conclusion that, to simulate these volumes and manipulate them dynamically, we would need a technique that allows us to freeze a certain instance of the recording, based on one's position towards the sculpture.

During initial research in Copenhagen we compared the projectional quality and realism of phase vocoding versus granular time stretching, to see which gave the optimal result for navigating the original recording. We came to the conclusion that for this project granular synthesis is the most suitable technique, based on its ability to preserve transients and thus spectral clarity (which is essential for orientation; see the paper 'Headphone-Based Spatial Sound') within a small window, while still keeping a clear reference to the recording.
 
This brings us to the heart of the sound engine and our initial approach to materiality: the two-dimensional sculpture drawn in the virtual environment, initially represented by a line with breakpoints, is mapped to a granular engine that allows scrubbing through the recordings based on the 'sightline' of the user. The granular engine is based on two individual granular buffers, in order to synthesize the two individual points where the sightlines cross the sculpture from the angle of the spectator. The deviation between the points is translated into an offset between read points in the same buffer, as it is one material. By relating the distance between the left and right ear to the sculpture we implemented channel-based amplitude scaling, introducing an audible visibility for left and right.
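A simplified sketch of this mapping (the actual engine is a Max/MSP patch): each sightline hit point's position along the wall becomes a grain read position in one shared buffer, so the two ears scan slightly offset segments of the same recording. The buffer length is illustrative:

```python
import math

def wall_position(point, wall_a, wall_b):
    """Normalized 0..1 position of a hit point along the wall segment."""
    ax, ay = wall_a
    bx, by = wall_b
    px, py = point
    return math.hypot(px - ax, py - ay) / math.hypot(bx - ax, by - ay)

def read_offsets(left_hit, right_hit, wall_a, wall_b, buffer_len_s=20.0):
    """Map the two hit points to grain read positions (seconds) in one buffer."""
    return (buffer_len_s * wall_position(left_hit, wall_a, wall_b),
            buffer_len_s * wall_position(right_hit, wall_a, wall_b))

wall = ((1.0, 5.0), (7.0, 5.0))
l, r = read_offsets((3.86, 5.0), (4.14, 5.0), *wall)
print(f"left grain at {l:.2f} s, right grain at {r:.2f} s")
```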
 
During the first days of the residency we experimented with this concept and realized a first prototype which allowed one to move through the space and scan through the recording placed in it, based on head rotation, with distance mapped to amplitude. The sense of materiality is simulated by presenting a different segment of the same structure at each viewing point. This already resulted in very pleasing first experiences, which confirmed that our development was moving in the right direction. At the same time it put forward some questions, for instance: what happens if one is facing the sculpture, moves towards it, and then crosses it? Is the sculpture still perceivable behind you, or does one first have to rotate and have the object in auditory sight again to witness it?
 
The latter implementation is the one we devised in our current prototype. It has the advantage that it really marks certain boundaries and invites the user to sharpen their orientation and exploration, but it has a strong disadvantage: the sudden disconnection that happens when moving through the sculpture feels as if your senses are being tricked. This makes sense, as applying a concept of optical orientation to auditory cognition is bound to have some drawbacks.
 
After some brainstorming sessions focused on the perception of this version of the prototype, we decided to introduce columns into the sonic architecture: whereas the line itself can be seen as a wall of sound, we created stationary points of sound in space to enhance spatial orientation through sound. At a few logical points of the wall we introduced columns, relating to physical architecture. The columns can be thought of as the lighthouses by the sea that ships use for their orientation; in the installation they function as sonic beacons that also help in understanding the dimensions of the sculpture. These columns of sound use a separate granular synth with specific parameters that emulate a static but fascinating sound. These sounds are then filtered radially based on the angle of the spectator. We experimented with various sound sources for the columns: when using the same source material as the 'wall' of sound, they function as a support for the orientation of the user; using other sound material creates very different and exciting experiences. The positioning of the columns in relation to the sculpture also opens up many more options to be explored. We will have to continue this research at a later stage.
 
Another aspect of creating a sense of materiality lies in the visual representation and complexity of the sculpture. In TouchDesigner the two-dimensional shapes of the lines can be modified with mathematical functions such as gradual noise distortion to create a sense of materiality, i.e. to apply a curvature or a certain roughness. It is also possible to work with opacity to create a translucent quality in the material. These properties could then all be mapped to the sound engine; this will be part of the research in the next stage of development.
 
A final aspect of materiality is the introduction of occlusion, which will be important when we work with different sculptures in one space. Occlusion implies that one object is filtered and attenuated relative to another object based on your position, simulating absorption between materials; with this technique we could suggest a more realistic depth and entity of the sculptural space. This will be implemented at a later stage in the system.
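Since occlusion is not yet implemented, the following is only a sketch of how the planned logic could look: a segment-intersection test decides whether another sculpture blocks the straight path from listener to object, and attenuates it if so. The -12 dB attenuation is an assumed value:

```python
def ccw(a, b, c):
    """True if the points a, b, c are in counter-clockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True if segment p1-p2 intersects segment p3-p4 (general position)."""
    return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

def gain_for(listener, source, blockers, occluded_db=-12.0):
    """Unity gain when the path is clear, attenuated when a wall blocks it."""
    for a, b in blockers:
        if segments_cross(listener, source, a, b):
            return 10 ** (occluded_db / 20.0)
    return 1.0

wall = ((1.0, 5.0), (7.0, 5.0))
print(gain_for((4.0, 1.0), (4.0, 8.0), [wall]))  # blocked -> ~0.25
print(gain_for((4.0, 1.0), (0.0, 2.0), [wall]))  # clear   -> 1.0
```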

Evaluation of interaction

On the final day of the residency we had the opportunity to get feedback from three users with a variety of backgrounds. This was very useful, because their sensory perception of the auditory sculpture was less biased by knowledge of the techniques and goals of our development. We received many interesting reactions to this first experimental sculpture: in particular, 'reading' the wall of sound by means of auditory sightlines resulted in some confusion and sensory disconnection. The perception of the stationary columns was quite successful, even with our simple filtering rather than CPU-expensive binaural filtering.

Future

The next phase of Raumklang will be to refine the sculptural orientation and to include the research path of working with recordings of the acoustic space as source material. We will be moving from the current set-up, which still runs on desktop machines, to a mobile set-up based on a Raspberry Pi and the Pozyx tracking system, which allows users to freely navigate the space. This development will be done mainly during our residency at STEIM in Amsterdam in mid-June.

 

Below is a screen capture of the navigation and the visual feedback of the final prototype.

Raumklang - V2 residency from ZenovandenBroek on Vimeo.

Robin Koek & Zeno van den Broek
Text from http://cargocollective.com/raumklang/V2-reflections
