The DataCloud 2 interface reflects the spatial arrangement of objects after the knowledge-map and mental-map
methods have been applied. A connection is first established between
the inserted objects and the 'scientific' representation of data. The
dynamic, virtual environment of DataCloud combines elements from
studies of conceptual spaces (known from scientific
visualisation) with direct representations of data-objects
(media, including references and context) and their interrelations. The
DataCloud 2 research focuses on several topics deemed relevant to enhancing audience participation.
Inspiration for the spatial organisation of the objects in DataCloud 2
has come from research in the fields of design, architecture, scientific
representation of atom structures, games, virtual reality, and 3-D
visualisation.
PREMISE FOR NAVIGATION AND USER INTERACTION
In the DataCloud, participants can browse, explore, move, drag and
drop, and participate in various modes. Exploring and navigating the
DataCloud involves several steps of content discovery. At a distance,
media-objects are represented as textures; approaching an object reveals more information
and context in several steps; zooming close to an object reveals the
object in its original form. When a media-object is revealed in its
original form, a shift takes place from 3-D to 2-D representation.
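The stepped revelation described above can be sketched as a simple distance-based rule. This is an illustrative sketch only: the function name, the step labels, and the distance thresholds are hypothetical and not taken from the DataCloud 2 implementation.

```python
# Sketch of the stepped content discovery described above. The distance
# thresholds are hypothetical, chosen only to illustrate the mechanism.

def detail_level(distance: float) -> str:
    """Map camera-to-object distance to a representation step."""
    if distance > 50.0:
        return "texture"       # remote: media-object shown only as a texture
    if distance > 10.0:
        return "context"       # approaching: meta-data and context revealed
    return "original-2d"       # zoomed close: shift from 3-D to the original 2-D form
```

A renderer could call such a rule every frame to decide which representation of each media-object to draw.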
The general view of the DataCloud starts from a 'third-person' perspective.
The computer screen functions as a camera viewfinder through which
objects are viewed. The user can interact continuously using camera
functions such as zooming, riding, gliding and panning. However,
co-authoring the dynamic map of the DataCloud through this mode of
operation - familiar from games, flight simulators and 3-D panoramas -
would not work from a third-person, spectator's point of
view. Editing and (co-)authoring actions in the DataCloud require a first-person perspective.
The shift from third- to first-person perspective can be understood as
an extension of spatial or temporal navigation that enables active co-authorship.
In the current phase of the project, a trajectory that relates the
authoring section and the 2-D planes is being considered. A fluid change
of perspective could support all interactions. A simple mechanism
allows the content to be browsed and explored, while a set of media controllers facilitates editing and adding
content. This will most likely result in a mixture of known mechanisms
from older media (photography) and new media controllers that do not
have equivalents in the physical world.
Mechanisms derived from older media like film could offer the DataCloud user a 3-D and third-person perspective, and the new media controllers could supply the tools
needed for first-person participation. The latter requires easy-to-use,
functional tools (mostly 2-D) that support media such as pictures, text
and sound. A separate GUI section for auditing or collecting material
will be considered for the final version of DataCloud 2.
ABSTRACTION AND REPRESENTATION
The survey of conceptual spaces and scientific visualisations in the
DataCloud, as opposed to the direct representation of the
media-objects, raised interesting questions regarding symmetry,
consistency and 'make-believe' scenarios for the user. Special
emphasis was put on an abstract 3-D interface concept to avoid a
reliance on inadequate or inappropriate metaphors borrowed from the
physical world. This should be considered a logical progression
from DataCloud 1, which was based on 2-D Shockwave.
The atom-like structures deployed for the visualisation of
abstract scientific research and processes formed the starting point
for the representation of media-objects. Besides rapid performance,
this reduced form of representation provides insight into the DataCloud structure, its configuration, and how the objects relate to one another.
A sample of the media-object is used as a texture on the
'outside' of the atom. This texture provides a first glimpse of
information and allows the objects in the DataCloud to be
distinguished from one another. The meta-data is available around the
media-object representation. After zooming close to an object, this
level of abstraction ceases and the user can view the object in its
original form.
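As a hedged sketch of this representation, a media-object atom could carry its texture sample, its meta-data, and the original form, with the reveal tied to zooming close. The field names and the reveal rule are assumptions for illustration, not the DataCloud 2 data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and structure are assumptions,
# not taken from the DataCloud 2 implementation.

@dataclass
class MediaAtom:
    texture_sample: str  # glimpse shown on the 'outside' of the atom
    meta_data: dict = field(default_factory=dict)  # shown around the representation
    original: str = ""   # full 2-D media-object, revealed up close

    def view(self, zoomed_close: bool) -> str:
        # the abstraction ceases once the user zooms close to the object
        return self.original if zoomed_close else self.texture_sample
```

Keeping only the texture sample and meta-data at a distance matches the rapid-performance goal: the full media-object is loaded and displayed only at the final step.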