Body Language represents the second generation of Rokeby's interactive sound installations. The installation used three hand-built low-resolution (8x8 pixel) video cameras to observe a 5 metre by 5 metre space. The images from the cameras were digitized by a system that used three 6502 processors in parallel, and the resulting motion information was relayed to an Apple ][ computer. The computer searched these images for movements, analysed them, and created sounds in response, simultaneously with the movement itself. The sounds were produced by custom software driving a Mountain Hardware digital synthesizer installed in the Apple ][. Any movement made by people within the space created sounds whose qualities reflected the qualities of the movement.
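The idea of deriving "qualities of movement" from very low-resolution frames can be illustrated with a short sketch. This is not Rokeby's actual software; the function name, the frame representation, and the returned measurements are assumptions chosen for illustration.

```python
# Illustrative sketch (not Rokeby's code): extracting simple movement
# qualities from two successive 8x8 grayscale frames, of the kind
# Body Language's hand-built cameras produced.

def motion_qualities(prev_frame, next_frame):
    """Compare two 8x8 frames (lists of 64 ints, 0-255).

    Returns (amount, cx, cy): the total changed brightness and the
    centroid of the change. A synthesizer mapping could translate
    these into, say, volume and pitch/pan.
    """
    diffs = [abs(a - b) for a, b in zip(prev_frame, next_frame)]
    amount = sum(diffs)
    if amount == 0:
        return 0, None, None
    cx = sum((i % 8) * d for i, d in enumerate(diffs)) / amount
    cy = sum((i // 8) * d for i, d in enumerate(diffs)) / amount
    return amount, cx, cy

# Example: a single pixel brightens in the top-right corner.
prev = [0] * 64
nxt = [0] * 64
nxt[7] = 200  # row 0, column 7
print(motion_qualities(prev, nxt))  # (200, 7.0, 0.0)
```

Even this crude summary is enough to distinguish large from small and left from right movements, which hints at how expressive an 8x8 image can be when only change over time matters.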
David Rokeby's electronic installation will link the Roberson Center for the Arts and Sciences in Binghamton, New York with Salerno, and later the National Museum of Science and Technology in Ottawa with the Canadian Cultural Centre in Paris, to create a musical interpretation of signals emanating from body movements. These installations use computers and video cameras to analyze motion and translate their perceptions into sound by controlling a synthesizer. Each installation will relay, through telephone lines, the significant aspects of the movements taking place within its space, and receive similar information from the other installation. Movements unique to one city's installation will be characterized by sounds identifiable with that installation, enabling participants at both locations to produce collaborative sound sequences in real time. Cooperation will register audibly: similar movements in both locations will produce more interesting and provocative sounds.
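The rule that similar movement at both sites "registers audibly" can be sketched as a simple comparison of the two motion summaries. This is a toy illustration: the function, its parameters, and the voice-count mapping are assumptions, not a description of the actual software.

```python
# Hypothetical sketch: rewarding similar movement at the two linked
# installations with a richer synthesizer sound (more voices).

def combined_voices(local_amount, remote_amount,
                    base=1, bonus=2, threshold=0.25):
    """Return a voice count for the synthesizer: movement at either
    site gets `base` voices; similar amounts of movement at both
    sites (within a relative `threshold`) earn `bonus` extra voices,
    so that cooperation registers audibly."""
    if local_amount == 0 and remote_amount == 0:
        return 0
    similar = (abs(local_amount - remote_amount)
               <= threshold * max(local_amount, remote_amount))
    return base + (bonus if similar else 0)

print(combined_voices(100, 90))  # similar movement at both sites -> 3
print(combined_voices(100, 10))  # dissimilar movement -> 1
```

Only a few numbers per frame need to cross the telephone line for such a comparison, which is why relaying "significant aspects" rather than video was feasible.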
There is the magic of childhood and wonder of distant communication in this performance which gives one a tangible feeling of "touching the other side of the planet" with the electronic extension of one's own bodily presence. (from the SAI Brochure, 1986)
Two identical systems are linked via the Internet. Each system monitors its local space using a Kinect sensor and sends a spatial representation of participants to the remote location. This representation is virtually projected into space as a structure of interactive sound possibilities. When participants in one location come into contact with the virtual body of participants in the remote space, they hear the sounds of that interaction.
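The contact test at the heart of this scheme can be sketched as a proximity check between a local position and the point cloud representing the remote body. This is an assumed illustration, not the installation's code; the point representation, radius, and function name are all hypothetical.

```python
# Illustrative sketch (assumed): does a local participant touch the
# "virtual body" of a remote participant, represented as 3D points
# from a depth sensor such as the Kinect?

import math

def touches(local_point, remote_points, radius=0.1):
    """True if local_point (x, y, z in metres) comes within `radius`
    of any point of the remote body representation; a hit would
    trigger the sound of that interaction."""
    return any(math.dist(local_point, p) <= radius
               for p in remote_points)

remote_body = [(0.0, 1.5, 2.0), (0.1, 1.2, 2.0)]
print(touches((0.05, 1.5, 2.0), remote_body))  # True
print(touches((1.0, 1.5, 2.0), remote_body))   # False
```

Since only point positions travel between the sites, each installation can run the check locally against the most recent remote frame it has received.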
International Feel by David Rokeby (2011) from V2_ on Vimeo.