The Notational Shift

'The Notational Shift, from a linear-text-based to a three-dimensional-spatial way of data perception' is a 2003 paper by Rense Frommé about visualization.

The ways we communicate and express ourselves in music, the arts and science are very much dependent on the formal languages in use. For monks in the 11th century, their formal language, music, was tied to the linguistic structure of the phrase sung to God. Throughout the twelfth, thirteenth and fourteenth centuries the mechanics of notation were in a state of rapid change, produced and paralleled by an evolution in musical style whose progress lay mainly in the field of rhythm. This resulted in the intricate rhythmic structures of the School of Notre Dame of the 14th century, where music was composed by the calculated partitioning of its time dimension. This mathematical, representational system could not have been understood, or even imagined, by the monks of the 11th century. Julie Tolmie, one of the leading scientists in the emerging field of data visualization, used this historical analogue in her introduction to the workshop Data Perception to show that we are on a similar threshold. According to Tolmie, in our mastery of the new visual notation space we are currently closer to the monks of the 11th century than to the School of Notre Dame.

Just like musical notation, data perception is a human, cultural construct. We recognize the possibilities of perceiving data by reflecting on our immersed behavior in abstract and distributed spaces, but we don't even begin to understand the consequences. A small subset of modalities crucial to our experiences relates to navigation, retrieval and perception. The future mastery and knitting of these modalities or qualities into a single informative experience is likely to change our data perception. It took 300 years from the beginnings of a spatial notation for western music until the beginnings of a notation for rhythm. Conventions for data perception are likely to take just as long to develop on a relative timescale -- especially since they would challenge the immediacy of the image by encoding many layers of sophisticated, intertwined representation that would need to be acquired or learned, whether by human or machine.

The workshop tried to grasp some parts of this notational shift and the impact it will have upon conventional techniques of visualizing information, such as 2-, 2.5- and 3-dimensional environments, dynamic or static information, familiar geometries or abstract topologies, mapping data spaces or assigning metadata. Speakers presented their own creative solutions, mostly driven by the nature of the data itself. The first step in the paradigm shift away from the traditional, linear text-based model of dealing with visual information was proposed by Ben Schouten. In his presentation, he addressed the urgent need for intelligent visual information retrieval systems. Most current systems are based upon the theoretical assumption that visual signification cannot be done without natural languages. This tendency in modern thought has even received a special label, "verbocentrism" -- for instance, while Roland Barthes stimulated interest in visual semiotics with his pioneering articles published in the late 1950s and early 1960s, he simultaneously strongly questioned the possibility of an autonomous visual language.

In order to explore visual information by visual means, you first need to develop a system in which recognition has its place, according to Schouten. Image recognition is based on having seen something before: it relates an emotion, experience or visual input to an earlier event. Instead of using keywords to extract the meaning of an image, a more intelligent way of looking for similarities is required, based on visual features and concepts. To bridge the so-called "semantic gap", we could use new, interdisciplinary approaches.

Sheelagh Carpendale, in her research, did not set out to explore specific characteristics of data, but instead investigated presentation space independent of specific representations. Working within the limitations of the available display space on a computer screen, she developed Elastic Presentation Space (EPS): a generalized framework for mastering different kinds of data through the inclusion of more than one presentation method in a single interface. EPS offers, for example, different lenses (round, square, fish-eye) to view or zoom into different kinds of data. The result is striking in its elegance: a stretchy information space which allows you to play around with different 2D and 3D representational techniques on, for example, a geographical map. Scrolling over different areas results in different forms of magnification, providing detail while maintaining the spatial context of the complete representation. Another interesting application allowed users to focus into 3D cubic graphs and disentangle the relations/lines between nodes in the graph. These applications look very promising for disseminating and visualizing large amounts of varied data and object-relational archives in a user-friendly way.
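The kind of focus-plus-context magnification described above can be illustrated with a classic fisheye distortion. This is only a minimal sketch, not Carpendale's actual EPS code: the Sarkar-Brown transform and the function names are my own choices for the example.

```python
import math

def fisheye(x, d=3.0):
    """Sarkar-Brown fisheye transform: maps a normalized distance
    x in [0, 1] from the focus to a magnified distance in [0, 1].
    d is the distortion factor; d = 0 gives the identity mapping."""
    return ((d + 1) * x) / (d * x + 1)

def magnify(point, focus, radius, d=3.0):
    """Push a 2D point outward from the focus inside a round lens,
    enlarging detail near the focus while points outside the lens
    (and so the overall spatial context) stay where they are."""
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist >= radius:
        return point  # at the focus or outside the lens: untouched
    scale = fisheye(dist / radius, d) * radius / dist
    return (focus[0] + dx * scale, focus[1] + dy * scale)
```

Applied to every point of, say, a map under the cursor, this yields the "stretchy" effect: local magnification that falls off smoothly toward the lens boundary instead of cropping the surroundings away.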

So, by using metaphors, EPS succeeds in bridging different contexts to create new forms of data spaces to which we can adapt easily. However, as we explore the possibilities of synthetic and networked environments and take their specific characteristics into account, metaphors become insufficient. The ocean-like feeling generated by endless data spaces brings about yet another set of navigation and retrieval obstacles unknown in traditional representations. The context and relations between the objects seem crucial here. However, in a virtual environment in which the user participates and is actively involved in the environment's co-creation process, these contextual parameters become unpredictable. These problems were addressed in the project presentations by V2_ and C3. Here, augmented aspects of perception come into play, indicating yet another phase in the shift towards new spatial information languages.

Brigit Lichtenegger of V2_ presented the project Datacloud 2, a 3D collaborative information environment. In Datacloud the user can add different media objects, which are related to each other by their metadata. The entire underlying database is visualized in a so-called object space. Depending on the user's interest, the cloud can be reorganized so that objects that have a lot in common sit closer to each other than less related objects. Users require different means of navigation across variable distances, while perspective may also hide information from view. Datacloud raised questions about how to incorporate (or expand on) human navigational skills in the navigation of the data space. It beautifully showed that new methods have to be developed in realtime 3D visualization -- methods that are dramatically shifting the paradigms of content-rich and well-organized information spaces from traditional (meta)data structures towards context-sensitive systems.
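The idea that objects with much in common sit closer together can be sketched as metadata similarity driving a target distance in the cloud. This is a hypothetical illustration, not Datacloud's implementation: the Jaccard measure and the names `jaccard` and `target_distance` are my own assumptions.

```python
def jaccard(tags_a, tags_b):
    """Similarity of two metadata tag sets: shared tags / all tags,
    giving a value between 0 (nothing in common) and 1 (identical)."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def target_distance(tags_a, tags_b, max_dist=10.0):
    """More shared metadata -> a shorter desired distance between two
    objects; a layout algorithm would then pull such pairs together."""
    return max_dist * (1.0 - jaccard(tags_a, tags_b))
```

Feeding such pairwise distances into a force-directed layout is one common way to turn a flat database into the kind of reorganizable object space the project describes.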

Heralds of such context-sensitive systems were proposed by Márton Fernelezelyi and Zoltán Szegedy-Maszák of C3. Their Demedusator and Promenade projects showed interface solutions for mixed (data) realities, in which the visitor navigates the environment by physical movement in a "tracked" space. The two realities are interfaced in a three-dimensional way. The work first started as a web project, but soon developed into an installation with a stereoscopic projection to visualize the dataspace, in which objects could move dynamically. However, navigation turned out to be problematic for inexperienced users. To study and improve collaboration between users, they are now working together with neuroscientists on a new project, called Camouflage, to develop an augmented reality experience.

Data Perception touched on many of the promises and problems the young discipline of data visualization has to offer: the evolution of abstract notation systems, navigation through abstract 2D, 2.5D and 3D data spaces, the play of mutable modalities in new media and technologies, the challenge of defining recognizable visual linguistic elements. The workshop showed that these topics are all intimately related. Encoding these interrelationships directly or indirectly into computer-based environments is non-trivial, but already happening. Imagine that we had an integrated environment with far more subtle mappings than straight data analysis/visualization. How would artists then challenge the operational metaphors? Would user-driven multiple modalities in an integrated perceptual environment enable a significant change in cultural practice?

As Julie Tolmie showed in her presentation, conceptual and representational shifts in notation and thought have often come out of the arts. The visual artifacts she presented evolved from collaboration with mathematicians, with visual artists/performers and with those working in data perception/visualization. Only by communicating across the borders of each discipline can we look beyond the linguistic constraints towards a visual language. So, like the untaught monks of this age, we all need to artistically play around with data in order to acquire this new language.


