
Designing a tangible tabletop installation and enacting a socioenactive experience with TangiTime

Abstract

Contemporary computational technologies (tangible and ubiquitous) still challenge mainstream systems design methods, demanding new ways of considering interaction design and its evaluation. In this work, we draw on concepts of enactivism and enactive systems to investigate interaction and experience in the context of the ubiquity of computational systems. Our study is illustrated with the design and usage experience of TangiTime: a tangible tabletop system proposed for an educational exhibit. TangiTime was designed to enable a socioenactive experience of interaction with the concept of “deep time.” In this paper, we present the TangiTime design process and the artifacts designed and implemented, in their conceptual, interactional, and architectural aspects. In addition, we present and discuss results of an exploratory study within an exhibition context, conducted to observe how socioenactive aspects of the experience potentially emerge from the interaction. Overall, the paper contributes elements of design that should be considered when designing a socioenactive experience in environments constituted by contemporary computational technology.

Introduction

Recent evolution in computer-based technology and devices has brought new possibilities of realizing Mark Weiser’s dream of ubiquitous computing [1]. With it, new interaction paradigms based on gesture recognition devices, wearables, and tangible objects have challenged the conventional interaction models used in systems design. Nevertheless, the new scenarios created with today’s ubiquitous and pervasive technologies still demand new ways of understanding the experience with such computational systems. We argue that emerging ideas from the enactivist cognitive sciences provide new perspectives to understand the nature of human experience when interacting with technology and the environments created through it.

In the Human-Computer Interaction (HCI) field, Dourish [2] coined the term “embodied interaction” to refer to research ideas around tangible, social, and ubiquitous computing. HCI research on this topic has drawn on the seminal work of Suchman [3, 4] and Winograd and Flores [5], which is grounded in Phenomenology. Embodied interaction research has since focused on the use of space and movement to manipulate information technology-enhanced objects. By embodiment, Dourish [2] refers to the way that physical and social phenomena unfold in real time and real space as a part of the world in which we are situated, right alongside and around us.

The notion of embodiment in tangible computing exploits our tactile and physical skills with real-world objects. For instance, tabletops or interactive surfaces are a genre of tangible user interface (TUI) artifacts on which physical objects can be manipulated and their movements sensed by the interactive surface [6, 7]. The literature has shown that tabletops have the potential to support collaborative learning [8–10], facilitate engagement [11], and aid the understanding of abstract concepts (e.g., [12, 13]) by enabling embodied interactions with physical objects and materials.

Embodied interaction and embodiment are concepts rooted in Phenomenology and the enactivist approach to Cognitive Science. As such, in this work, we draw on Varela et al. [14], who presented enactivism as a new form of cognitive science, one the authors argue provides the ground for a science both embodied and experientially relevant. In formulating the enactive approach to cognition, Varela et al. drew on the concept of embodied cognition (i.e., how sensorimotor interactions with the world shape cognition). They consider the lived body as a single system that encompasses body, mind, and environment. Thus, cognitive processes belong to the relational domain of the living body coupled to its environment.

Drawing on this theoretical background, the project hosting our work [15] proposes to look at the person-environment coupling in scenarios of contemporary technology, specifically bringing its social-physical-digital tripartite articulation into design considerations. This tripartite coupling is being studied under the developing concept of “socioenactive” systems, based on the enactive approach to cognition.

In this work, we investigate the socioenactive experience of interaction in the context of the ubiquity of computational systems by designing and experiencing TangiTime: a tangible tabletop proposed for an educational exhibit on the abstract and complex concept of “deep time.” We designed and developed five physical objects for interacting with the exhibit and embedded ubiquitous technology such as microcontrollers, sensors, and actuators in three of them. In contrast to the implementations found in the literature, embedding technology inside the physical objects allows users to interact with TangiTime outside the tabletop display and continue to receive feedback on the physical object itself. Also, a user can interact with one physical object and generate a feedback response in another physical object.

The contributions of this paper are twofold: to bring the enactivist approach to the design of contemporary technology, materializing it into a system design; and to observe the interaction with it and the experience enabled by the social-physical-digital coupling in the created environment. By presenting the TangiTime design we illustrate, in a practical way, our understanding of designing scenarios of ubiquitous technology drawing on the enactivist theoretical background; in doing so, we aim to contribute to the theoretical and practical issues of the design of contemporary computational systems.

The paper is organized as follows: The next section presents the main concepts of our theoretical background and related work regarding tabletop design within educational contexts. Next, we present a case study describing the installation design, a preliminary exploration of it in a public educational exhibit, and results of the experience brought forth with it. Finally, we discuss the main findings regarding promoting a socioenactive experience with contemporary systems and their takeaways.

Background and related work

In this section, we first present a theoretical background on the enactive approach to cognition and enactive systems, showing recent perspectives from the enactive cognitive sciences. Next, we present a theoretical background on tangible tabletops and related work that uses this technology. Finally, we comparatively synthesize the related work to explore the design space for future computational systems.

The enactive approach to cognition and enactive systems

The enactive attribute was originally associated with cognition by the developmental psychologist Jerome Bruner, who used it to refer to bodily and spatial activity as an aspect of cognitive development (“learning by doing”) [16]. Bruner describes three possible types of knowledge used when interacting with the world: symbolic, iconic, and enactive. Symbolic knowledge involves conceptualization and abstract reasoning, iconic knowledge involves visual recognition and the ability to compare, and enactive knowledge is built on motor skills.

In the book “The Embodied Mind,” Varela et al. [14] introduce a new form of cognitive science named “enaction,” studying cognition as embodied action. By “embodied action” Varela et al. [14] mean that “cognition depends upon the kinds of experience that come from having a body with various sensorimotor capacities and that these individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological, and cultural context.” Thus, the point of departure in Varela et al.’s enactive approach is the study of how the perceiver guides his/her actions in his/her local situation. Since these local situations constantly change as a result of the perceiver’s activity, the reference point for understanding perception is no longer a pre-given world, but rather the sensorimotor structure of the perceiver. Some concepts that constitute the basis for this “enactive approach” to cognition are autonomy, sense-making, embodiment, and experience [17].

As an alternative to the standard human-computer systems relation, Kaipainen et al. [18] proposed the concept of an “enactive system,” based on the more recent discourse of the embodied mind, in which the human mind is fundamentally constituted by dynamic interactions of the brain, body, and environment. For the authors, an enactive system consists of a dynamic mind-technology embodiment in which the enactive relationship conceives the technology as a continuous, ubiquitous, and intelligent accompaniment to the human actor, or a direct extension of the user’s perceptual and cognitive apparatus involved in participation in the system.

In their work, the authors illustrate the main ideas with an enactive cinema that modifies its sequences of images according to the viewer’s physical reactions. This installation relied on tracking the spectator’s real-time physiological responses, such as heart rate and electrodermal activity, which controlled a montage machine that dynamically recombined content elements from a database into a narrative, which in turn influenced the spectator’s experience.

The scenario presented by Kaipainen et al. [18] is limited to the individual coupling of the enactive cycle with the system; cultural and collective aspects of the experience are left undiscussed. The project hosting this work (“Socio-enactive Systems: Investigating New Dimensions in the Design of Interaction Mediated by Information and Communication Technologies,” FAPESP Thematic Project 2015/16528-0) proposes to fill this gap by studying and developing research scenarios for the design of computational systems focusing on social and cultural aspects of the experience with enactive systems [15].

Tangible tabletops and educational contexts

Tangible and ubiquitous computing technologies offer opportunities for physically interacting with objects, foregrounding the role of the body in interaction and learning. Tangible interaction can be combined with digital displays to create tangible tabletops [6, 7]. In a tangible tabletop, physical objects can be manipulated on the tabletop display and their movements recognized by it. The objects require markers attached to their base to be detected by the system, whose output is shown on the tabletop display.

The literature shows a large body of research exploring the educational benefits of using tabletops within learning environments, mainly as highly supportive systems for collaboration and interaction [8, 9]. For instance, in informal contexts such as museums, tangible tabletop exhibits provide visitors the opportunity to experience social interaction and to access knowledge playfully (e.g., [19–22]). Within formal educational contexts, in turn, students can benefit from learning experiences that enable them to explore abstract concepts such as probability [13], astronomy [23], or artificial neural networks (ANNs) [12].

With the proliferation of ubiquitous technologies, new opportunities to create learning experiences with tabletops and technology-enhanced objects have emerged. Moreover, Internet of Things (IoT) technology allows everyday objects to communicate among themselves and with their environment, and to change their behavior according to network information [24]. Building on this notion, we conducted an exploratory literature review of related work that uses tangible tabletops and ubiquitous computing in their design.

For instance, Chu et al. [20] present Mapping Place, a tangible tabletop museum exhibit which draws on tangible narrative to explore African notions of mapping history through the construction of stories. Visitors can place five tangible shells on the tabletop display to select story elements projected around each one. The selected story elements are reflected as animations on a wall adjacent to the tabletop and help people visualize and share their stories. Ma et al. [19] present Plankton Population, a tangible tabletop museum exhibit for examining the proportion and types of phytoplankton in the oceans by manipulating three physical rings as magnifying glasses. A projection on the tabletop display shows a visualization timeline with patterns of colors representing the types of phytoplankton. The authors compared the behavior of museum visitors at an interactive exhibit that used physical versus virtual objects and found that the physical rings better afforded touch and manipulation than the virtual rings. Loparev et al. [22] present BacPack, a tangible tabletop museum exhibit for exploring bio-design (e.g., genetic programs). Visitors take on the role of astronauts and can manipulate 22 objects representing sequences of DNA to engineer bacteria in order to survive. The authors compared two versions of the exhibit, one with tangible objects and one with virtual objects representing the genes, and found that the tangible objects created opportunities for collaboration beyond those afforded by the multitouch-only version. Bérigny et al. [21] present Reefs on the Edge, a tabletop museum exhibit for climate change education. The exhibit engaged visitors with a visualization of natural phenomena such as coral spawning and five tangible objects with embedded devices that produce feedback on the objects themselves. The tangible objects could change their colors through incorporated multi-colored LEDs. In addition, the exhibit used video screens to display videos about coral reef ecosystems.

The research project most closely related to our work is that of De Raffaele et al. [12]. They developed a tangible educational tabletop for teaching and learning artificial neural networks by manipulating tangible objects with embedded technology. The tangible objects can change their color or be provided with movement by incorporated actuators. The table projection presents students with a sectional layout highlighting specific areas for interaction on the tabletop display.

Although exploratory, our literature review found few tangible exhibits that explore more complex and abstract domains and incorporate ubiquitous technologies in their design. Moreover, the low cost, variety, and internet capabilities of current devices offer opportunities to make physical sensing accessible for the construction of such scenarios.

Design space of related work

In this section, we compare design choices of the identified related work using a taxonomical design framework developed by Melcer and Isbister [25, 26] that outlines key methods for incorporating embodiment into the design of embodied learning systems. A design framework is an important HCI tool that provides a common language for designers and researchers to conceptualize variants of particular technologies and formalize the creative process [27]. In particular, taxonomical design frameworks treat a set of defined taxonomic terms as a set of orthogonal dimensions in a design space, and the resulting matrix provides structure for classifying and comparing designs [27]. This comparison helps us determine how embodiment-related concepts occur in the identified related work and suggests the application of specific design choices in future systems.

The design framework consists of seven dimensions organized into three groups (i.e., physical interaction, social interaction, and the world where interaction is situated). The seven dimensions are physicality, transforms, mapping, correspondence, mode of play, coordination, and environment. They are briefly described as follows:

  • Physicality describes how learning is physically embodied in a system and consists of five distinct values: direct embodied, enacted, manipulated, surrogate, or augmented.

  • Transforms describes the relationships between physical or digital actions and the resulting physical or digital effects in the environment: physical action to physical effect (PPt), physical action to digital effect (PDt), and digital action to physical effect (DPt).

  • Mapping describes the spatial location of output in relation to the object or action triggering the effect (e.g., discrete, co-located, or embedded).

  • Correspondence refers to the degree to which the physical properties of objects are closely mapped to the learning concepts (e.g., symbolic, indexical, or literal).

  • Mode of play specifies how individuals socially interact and play within a system (e.g., individual, collaborative, or competitive).

  • Coordination highlights how individuals in a system may have to socially coordinate their actions in order to successfully complete learning objectives (with other players, or in a socio-collaborative experience with digital media, typically in the form of non-player characters (NPCs)).

  • Environment refers to the learning environment in which the educational content is situated (e.g., physical, mixed, or virtual).

In Table 1, we present 12 tangible exhibits (R1 to R12), including our proposal TangiTime (R12). Melcer [26] made a comparative study of works R1 to R6 using the proposed framework. We extended that initial comparative study (R1 to R6) by adding works resulting from our exploratory literature review (R7 to R11) and TangiTime (R12). The works are as follows: Eco Planner [28] (R1), Futura [29] (R2), LightTable [30] (R3), NanoZoom [31] (R4), Youtopia [32] (R5), Touch wire [33] (R6), Mapping Place [20] (R7), Plankton Population [19] (R8), Reefs on the Edge [21] (R9), BacPack [22] (R10), and De Raffaele et al. [12] (R11).

Table 1 Design comparison of related work, extended from [26]

According to the design comparison, most related works share the following attribute values: manipulated, physical action to digital effect (PDt), co-located, symbolic, collaborative, other player, and virtual. For instance, most reviewed works use physical objects as manipulatives (physicality: manipulated); the manipulation of physical objects on the table surface results only in digital projections on the interactive surface (transforms: PDt); the visual markers attached to the objects’ bases are used as inputs and projections on the tabletop display are contiguous to the physical objects (mapping: co-located); physical objects correspond to symbols or metaphors that represent abstract signifiers of the learning concepts (correspondence: symbolic); social interaction is collaborative (mode of play: collaborative); and the educational content is situated in a virtual environment (environment: virtual).

Against this background, we designed and developed TangiTime: a tangible tabletop enhanced with embedded-technology objects, proposed for experiencing deep time within an educational exhibit. In the next section, we present the design and construction of TangiTime and its tangible objects.

TangiTime—an exhibit environment for deep time

In this section, we detail the design process and the implementation of the TangiTime artifacts. We first define “deep time” to create an understanding of the concept. Next, we describe the conceptual and interaction design models. Finally, we present the architectural model and software resources.

An abstract domain: deep time

Deep time refers to the time of geological processes, which are on the scale of millions or billions of years, as in the case of the geological history of our planet of 4.5 billion years [34]. This scientific concept was introduced by the geologist James Hutton in the eighteenth century, who argued that the Earth was not merely a few thousand years old but far older [35]. The idea of disseminating such an important concept within the geosciences emerges from the growing need of scientists to understand and discuss the geological and biological processes taking place today on our planet. However, deep time is a difficult concept to understand because grasping the magnitude of a period of millions or billions of years, compared to our lifetime, is not a simple task.

One way some authors exemplify this concept (e.g., [36]) is through the exploration of the different geological eras in which important biological and geological events occurred during the evolution of our planet. Some important events are as follows: in the Archean Era, life first formed on Earth; at the beginning of this period, the planet was up to 3 times hotter than today, the first cells began to appear, the Earth was constantly hit by meteors, and thousands of volcanoes were active. In the Proterozoic Era, one of the most important events was the accumulation of oxygen in the Earth’s atmosphere, as well as the formation of a primitive ozone layer. The Paleozoic Era was a time of dramatic geological, climatic, and evolutionary change; life began in the ocean but eventually transitioned onto land, and by the late Paleozoic it was dominated by various forms of organisms. Common in the Paleozoic Era were trilobites, crinoids, brachiopods, fish, insects, amphibians, and early reptiles. The Mesozoic Era, also called the Age of Reptiles, is important for the fossil remains of the dinosaurs and other reptiles that lived then. The extinction of the dinosaurs at the end of the Mesozoic Era opened up vast new habitats and environments for early mammals and birds to adapt to and occupy the Earth during the Cenozoic Era.

Abstract concepts such as deep time are harder to understand because they lack the direct sensory referents that concrete concepts have [37]. Thus, as proposed for the learning context of other abstract concepts (e.g., [12, 22]), we argue that the use of tangible tabletops could help in the understanding of such a complex domain, making the learning experience with such concepts more engaging and meaningful.

Conceptual model

In order to investigate the socioenactive experience of interaction in the context of the ubiquity of computational systems, TangiTime was conceived based on tangible user interfaces (TUIs) and the enactive approach to system design. TUIs augment the real physical world by coupling digital information to everyday physical objects, taking advantage of human abilities to grasp and manipulate objects [6, 7]. The enactive approach, in turn, conceives the underlying technology as a continuous, ubiquitous, and intelligent accompaniment to the human actor [18].

As mentioned in the domain knowledge section, one way to exemplify the deep time concept is through the exploration of the different geological eras (Archean, Proterozoic, Paleozoic, Mesozoic, and Cenozoic) and the elements that characterize them. Thus, TangiTime simulates the passage of time over these periods through the random projection of images of different landscapes on the interactive surface. The implemented design is the result of a design process that evolved through three different prototypes [38]. In it, the first four geological eras (Archean, Proterozoic, Paleozoic, and Mesozoic) (Fig. 1) are displayed by this random process, while for the Cenozoic Era a video projection shows a timeline of how the geological eras unfolded during the evolution of our planet.

Fig. 1 TangiTime educational installation and its physical objects. The image shows a projection of background images on the interactive surface representing landscapes of the geological eras, and five physical objects to interact with the installation

To interact with the tangible exhibit, we designed and constructed five physical objects, each representing a natural component or a living organism that belonged to a certain geological era: a meteorite, a volcano, a dragonfly, and two dinosaurs (Fig. 1). For instance, for the Archean Era, a meteorite and a volcano were selected to represent volcanic activity and meteorite falls. In the Proterozoic Era, the same meteorite was used to represent meteorite falls, now limited by a primitive ozone layer. For the Paleozoic Era, a dragonfly was selected to represent the presence of insects and to add movement to its wings. For the Mesozoic Era, two dinosaurs (a Tyrannosaurus rex and a Triceratops) were selected to represent a dominant life form, as they are probably the most popular dinosaurs among children. Each object was selected to enable a digital simulation or a physical effect on the object itself.

We call “TangiTime” the whole exhibit or installation: its physical and digital components compose an environment open to social coupling.

Interaction model

Figure 2 shows the dynamics of interaction for a user or group of users interacting with TangiTime. Initially, an image representing the environment of a geological era is randomly projected onto the interactive surface of the table. By paying attention to the characteristics of the projected landscape, users have to choose the object (or objects) that they believe belongs to the projected environment, then grasp and manipulate it on the interactive surface. These perception-based user actions are represented in Fig. 2 as “embodied interaction.”

Fig. 2 Dynamics of interactions in an individual enactive experience and a group enactive experience. The image shows the dynamics of interactions in an individual enactive experience and a group enactive experience with TangiTime

Users receive three types of feedback according to the physical object manipulated on the tabletop display: digital responses on the tabletop display, physical responses in the object itself, and sounds in the environment. Digital responses on the tabletop display consist of graphic projections such as background images, simulations, and digital representations of the objects. Physical responses in the object itself are physical effects produced by controllers, sensors, or actuators embedded into the objects. In this case, even when manipulating the objects outside the tabletop display, users continue receiving feedback (e.g., the dragonfly keeps moving its wings when outside the tabletop). These different system responses are represented in Fig. 2 as “multimodal perception.”

If a physical object manipulated on the tabletop display belongs to the projected geological era, the random background display process is stopped to allow users to explore and interact with the exhibit in this era, and the system responses for the era are activated. When the software system does not detect any physical object belonging to the projected era, it projects another landscape, and users have to choose the object (or objects) that they believe belongs to the new landscape.

The system also allows users to interact with one physical object and generate a feedback response in another physical object. For example, in the Mesozoic Era (Era 4), when a dinosaur is detected by the software system, its digital representation is projected, the sound of its roar is emitted, and its eyes, represented by RGB LEDs, light up green. When one dinosaur is physically close to the other, the eyes of both are illuminated in red, the sound of their roars changes, and a digital animation representing the proximity between them is projected. The digital representations of the dragonfly and dinosaurs move and rotate along with their physical objects.
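To make this era-matching logic concrete, the Processing-style sketch below outlines how the random landscape cycling could be halted when a matching object is detected. It is a minimal illustration of the behavior described above, not TangiTime's actual code; all names (currentEra, objectEra, triggerEraResponses) and the fiducial-ID-to-era assignments are hypothetical.

```java
import java.util.HashMap;

int currentEra = 0;          // index of the era currently projected (0 = Archean)
boolean eraLocked = false;   // true while a matching object halts the random cycling
HashMap<Integer, Integer> objectEra = new HashMap<Integer, Integer>();

void setup() {
  // hypothetical fiducial-ID-to-era assignments (0 Archean, 1 Proterozoic,
  // 2 Paleozoic, 3 Mesozoic); the meteorite also appears in the Proterozoic
  objectEra.put(1, 0); // meteorite
  objectEra.put(2, 0); // volcano
  objectEra.put(3, 2); // dragonfly
  objectEra.put(4, 3); // Tyrannosaurus rex
  objectEra.put(5, 3); // triceratops
}

// called by the tracking layer when a fiducial marker appears on the surface
void objectDetected(int fiducialId) {
  Integer era = objectEra.get(fiducialId);
  if (era != null && era == currentEra) {
    eraLocked = true;                 // stop the random landscape cycling
    triggerEraResponses(fiducialId);  // avatar projection, sound, object command
  }
}

// called periodically while no object on the table belongs to the projected era
void advanceLandscape() {
  if (!eraLocked) {
    currentEra = int(random(4));      // pick one of the first four eras
    // (re)draw the corresponding background image here
  }
}

void triggerEraResponses(int fiducialId) {
  // project the object's digital representation, play the era's sound, and
  // send a command to the object itself (e.g., light the dinosaur eyes green)
}
```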

Table 2 summarizes the interaction and system responses implemented for each geological era and object. Nevertheless, we should observe that in experience-based learning children learn as they explore the environment, based on their perception-guided action and considering the whole environment feedback provided for their (right or wrong) actions, others’ actions, effects on objects, etc. Issues related to the learning concepts explored in TangiTime are discussed elsewhere [38].

Table 2 Interactions and system responses in TangiTime

Architectural model

TangiTime was designed on the tangible user interface (TUI) architecture [6, 7]. Each physical object used to interact with the tangible exhibit has a visual (fiducial) marker [39] attached to its base to be detected by the interactive surface, and some objects carry embedded technology (controllers, sensors, and actuators). As illustrated in Fig. 3, the TangiTime exhibit consists of a low-cost tangible tabletop capable of detecting fiducial markers and five physical objects for interacting with the exhibit. Hardware and software resources are detailed in the next sections.

Fig. 3 The TangiTime architectural model and components. The image shows the architectural model of TangiTime. The installation consists of a low-cost tangible tabletop capable of detecting special fiducial codes and five physical objects to interact with the installation. All tangible objects have fiducial codes attached to their base to be detected by the interactive surface, and only three of them have embedded technologies. The TangiTime system acquires images from the camera situated beneath the table and searches the video stream frame by frame for fiducial codes. The system detects the fiducial codes and controls the interactive visualization projected on the tabletop display, the behavior of the animations, and the bi-directional communication with the physical objects with embedded technologies through an IoT platform

Tangible tabletop

As illustrated in Fig. 4, the low-cost tangible tabletop uses a transparent surface that serves as a projection screen, with tracing paper on the top side for the projection (Fig. 4a); an infrared camera to capture the fiducial markers on the tabletop display (Fig. 4b); a diffuse illuminator to light the surface with infrared (IR) light (Fig. 4c); a mirror to achieve a larger projection distance (Fig. 4d); a projector to display images and digital animations onto the mirror (Fig. 4e); a computer with the software system, which includes the reacTIVision framework and the Processing client (Fig. 4f); five physical objects (Fig. 4g); and a speaker (Fig. 4h).

Fig. 4 Technological resources used in the tangible tabletop. The image shows the technological resources used to construct the tangible tabletop in TangiTime: a projection screen, b web camera, c diffuse illuminator, d mirror, e projector, f computer, g physical objects, and h speaker

For the interactive surface, we used a 95 cm by 95 cm glass screen and a mirror positioned at a 45-degree angle to achieve a larger projection distance. The table surface was illuminated with infrared LED lamps because the computer vision component needs to operate in a different, invisible spectrum, such as near infrared in the range of 850 nm. A camera situated beneath the table tracked the fiducial markers, which are processed to determine their location, orientation, and identity. We chose a webcam model with a native resolution of 640 x 480 at a frame rate of 30 Hz. A webcam usually comes with an infrared filter that blocks infrared light from the outside, allowing only visible light to pass through; this IR filter had to be replaced by an IR bandpass filter. We used a Dell 4320 projector to display the images and simulations on the tabletop display. The projector was located on a slanted wooden box in front of the mirror, to achieve a larger projection distance. We adjusted the projector’s built-in keystone correction according to the angle of the mirror and the inclination of the box.

Tangible objects

We designed and constructed five physical objects to interact with the TangiTime exhibit (Fig. 5): a meteorite, a volcano, a dragonfly, and two dinosaurs, each with a different fiducial marker attached to its base. The volcano and meteorite were constructed from 3D-printed models. The dragonfly was built by hand using materials such as wooden popsicle sticks, silicone, and wires; its wings were printed on paper and given movement through a mechanism built with a servo motor (Fig. 6, right). The dinosaurs were off-the-shelf toys whose soft structure made it easy to embed devices inside them (Fig. 6, left).

Fig. 5 Physical objects to interact with the installation. The image shows the five physical objects to interact with the installation: a meteorite, b volcano, c dragonfly, d Tyrannosaurus rex, and e triceratops

Fig. 6 Electronic components used in the physical objects. The image shows the electronic components used in the dragonfly and the Tyrannosaurus rex. The dragonfly has an embedded servo motor to move its wings, and the two dinosaurs have embedded RGB LEDs to light their eyes. The objects have a Wemos Lolin32 microcontroller and a small LiPo battery. We decided to use the WeMos Lolin32 microcontroller because it has a lithium battery interface and offers WiFi and Bluetooth connectivity

Software resources

The TangiTime system consists of a tracking system to detect the fiducial markers and a client application to control the interactive visualizations projected on the tabletop display.

To enable the recognition of the fiducial markers, we used the reacTIVision framework [39], which allows the conversion of tangible objects into digital representations. reacTIVision is an open source, cross-platform computer vision framework for the fast and robust tracking of specially designed fiducial markers in a real-time video stream using an IR camera; it is available on a public SourceForge site [40]. A camera situated beneath the table detects and processes the fiducial markers, and this information is sent to a client application to create the simulations, change the visualizations, or generate feedback on the objects themselves. The information from the tracking system is sent to the Processing client application using the TUIO protocol [41].
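For illustration, a minimal Processing client receiving TUIO events from reacTIVision might look like the sketch below. It follows the callback style of the TUIO client library distributed with reacTIVision; exact class and method names may vary across library versions, and the drawing code is a placeholder rather than TangiTime's actual rendering.

```java
import TUIO.*; // TUIO client library for Processing, distributed with reacTIVision

TuioProcessing tuioClient;

void setup() {
  size(1024, 768);
  // registers this sketch as a listener for TUIO messages (UDP port 3333 by default)
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(0);
  // draw a placeholder avatar for every tracked fiducial at its position and angle
  for (TuioObject tobj : tuioClient.getTuioObjectList()) {
    pushMatrix();
    translate(tobj.getScreenX(width), tobj.getScreenY(height));
    rotate(tobj.getAngle());
    rect(-20, -20, 40, 40);
    popMatrix();
  }
}

// callbacks fired when fiducials appear on, move over, or leave the surface
void addTuioObject(TuioObject tobj)    { /* e.g., check whether it matches the era */ }
void updateTuioObject(TuioObject tobj) { /* e.g., move and rotate its avatar */ }
void removeTuioObject(TuioObject tobj) { /* e.g., resume the landscape cycling */ }
void refresh(TuioTime frameTime)       { /* called once per TUIO frame */ }
```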

As the client application, we used the Processing environment [42], an open source programming language and environment for working with images, simulations, and sounds. It is an ideal platform for interactive installations and has hundreds of community-provided libraries that can be added to enable things like playing sounds, doing computer vision, and working with advanced 3D geometry. In this work, the Processing application controls the interactive visualization projected on the tabletop display, the simulations, and the bi-directional communication with the physical objects through an Internet of Things (IoT) platform (Shiftr.io [43]).

Besides that, we draw on the IoT approach [24] to allow physical objects to transmit and receive data through an IoT platform and communicate among themselves. To this end, a Wemos Lolin32 microcontroller, actuators, and a small LiPo battery were incorporated into three physical objects (the dragonfly and the two dinosaurs). The dragonfly has an embedded servo motor to move its wings, and the two dinosaurs have embedded RGB LEDs to light their eyes. We decided to use the WeMos Lolin32 microcontroller because it has a lithium battery interface and offers WiFi and Bluetooth connectivity. The microcontrollers communicate wirelessly with the IoT platform, which aims to interconnect objects, devices, and apps through the MQTT protocol [44]. Table 3 shows the software resources used to implement the TangiTime system and the interconnection with the physical objects.

Table 3 Software resources
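As an illustration of the Processing side of this bi-directional communication, the sketch below uses the community MQTT library for Processing (commonly used with Shiftr.io). The broker URL, credentials, and topic names are hypothetical; the paper does not specify TangiTime's actual topic scheme.

```java
import mqtt.*; // community MQTT library for Processing, commonly used with Shiftr.io

MQTTClient client;

void setup() {
  client = new MQTTClient(this);
  // hypothetical Shiftr.io credentials and client ID
  client.connect("mqtt://tangitime:secret@broker.shiftr.io", "tabletop");
}

void clientConnected() {
  // listen for status messages published by the objects' microcontrollers
  client.subscribe("tangitime/objects/+/status");
}

// example command: when the tracking layer reports both dinosaurs close
// together, tell both microcontrollers to switch their eye LEDs to red
void onDinosaursClose() {
  client.publish("tangitime/objects/trex/eyes", "red");
  client.publish("tangitime/objects/triceratops/eyes", "red");
}

void messageReceived(String topic, byte[] payload) {
  println(topic + ": " + new String(payload));
}

void draw() {
  // rendering of landscapes and avatars would go here
}
```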

TangiTime—use evaluation

In this section, we explore the use of TangiTime in (a) a preliminary pilot study to test the exhibit’s functionalities and (b) an exploratory study with a general audience, including children, their parents, and other visitors from the field of Geosciences, to observe socioenactive characteristics of the interaction experience with the exhibit.

A pilot study

We conducted a pilot study with twelve people to check the software system and the installation. Among our participants were four professors and seven graduate students who interacted with the installation. The results of the pilot study helped us enhance some elements of the scenario: for example, increasing the size of the meteor and dragonfly images, adding switches to control the LiPo batteries, painting the meteor to look more realistic, and providing a firmer base to attach the fiducial markers on the dinosaurs.

An exploratory study

Considering the aim of designing TangiTime as a way to investigate the concept of enactive systems in design and the practical concerns of the socioenactive experience, we conducted an exploratory study with participants of a public exhibition to observe the socioenactive characteristics that emerge from the interaction experience with the environment. In particular, our goal in this exploratory study was to answer the following research questions:

RQ1: What elements of interaction emerge in the target scenario, and how can these elements be categorized?

RQ2: What elements illustrate a socioenactive experience?

Context and participants

The exploratory study took place in the Geosciences Institute (GI) of the University of Campinas (UNICAMP), which, in association with the Exploratory Science Museum of the same university, developed activities for the scientific promotion of paleontology, with meetings, lectures, and posters about the area, and an exhibition of around 100 miniature dinosaur models to promote discussions about their size, life habits, etc. TangiTime was invited to become part of these activities and was exhibited for 4 h. The installation was located in the exhibition space near the miniature dinosaur exhibit. During the exhibition, three researchers were responsible for observing the participants’ actions and for the video recordings. From the video recordings, we observed that 66 users interacted with the exhibit (35 adults, 19 teenagers, and 12 children), grouped into 22 groups of visitors; among them were groups of parents and their children, school children, adults and children, children interacting alone, children interacting in groups, and groups of adults interacting together. The activity was associated with the research project “Socio-Enactive Systems: Investigating New Dimensions in the Design of Interaction Mediated by Information and Communication Technologies,” approved by the Research Ethics Committee of the University of Campinas (CAAE 72413817.3.0000.5404).

Method

TangiTime was located in the exhibition space near the miniature dinosaur exhibit (Fig. 7). The tangible objects were connected to Wi-Fi and placed on the table surface at the border of the area projected for interaction. Visitors were free to interact with TangiTime; however, a researcher acted as facilitator and was responsible for encouraging the visitors to take some of the five tangible objects and grasp and manipulate them on the tabletop display. As research conducted “in the wild” (i.e., in a real public space during the Geosciences event), the protocol for the researchers’ and facilitators’ interaction with the participants did not follow a set of predefined steps; they acted according to what they observed of participants’ actions or participants’ specific questions regarding the installation and its underlying content. Thus, the data for analysis come from the researchers’ in loco observations and from the video recorded during the activity. We used a video camera on a tripod to record interactions on and around the exhibit for post-analysis and further investigation.

Fig. 7 Screenshot of the (digital) graphical user interface. The image shows a graphical user interface with a digital representation of the Tyrannosaurus rex and the triceratops

As for the data analysis, besides researcher notes on local observations, we analyzed 2 h of video recordings based on the Grounded Theory method [45], a qualitative analysis method for constructing theory from data by coding and categorizing patterns, behaviors, or other issues that emerge from the data. We used this method driven by the desire to capture facets of the collected data and to allow the findings to emerge from the data. No data about the identities of participants were collected.

Results and discussion

In Table 4, we present the codes and categories extracted from the video analysis. First, we analyzed the video recordings, taking notes of the observed interactions and speech of the participants. Second, we developed a set of behavioral codes from the notes. Third, we grouped the codes into categories that describe their relationship, both for system behavior and for people’s behavior.

Table 4 Codes and categories extracted from the video analysis

We identified 14 codes that represent interesting phenomena in the data. These codes and categories answer research question RQ1, concerning which elements of interaction emerge in the target scenario and how they can be categorized. Regarding system behavior, we identified 3 codes grouped into the feedback category: “digital response on the tabletop display,” “physical response in the object itself,” and “sound.” The digital response on the tabletop display code represents the random image projection of landscapes, simulations, and digital images of physical objects on the tabletop display; the physical response in the object itself code represents the feedback produced by controllers, sensors, or actuators embedded into physical objects, such as turning on LEDs or moving a servo motor; and the sound code represents sounds in the environment, such as the roars of the dinosaurs and the explosion of the volcano. The results suggest that the installation afforded participants actions guided by their perception of the different types of system feedback. Moreover, the associations the system allowed participants to make between the different eras and their inhabitants were constructed by the participants through their own actions on the installation (not by being told by someone else) and the raising of their own hypotheses, after their own experience of perceiving and acting. For example, a child mentioned to peers: “the dragonfly does not move its wings because it does not belong to this era.” Another example occurred when a girl and her brother were interacting with the dinosaurs in the Mesozoic Era (each one “playing” one of the dinosaurs). When the eyes of both dinosaurs lit up, she commented: “they live together.” The comment suggests that she perceived the feedback on the objects as a consequence of their own actions on the installation and concluded that a Tyrannosaurus and a triceratops were species of the same era.

Regarding people’s behavior, we identified 11 codes grouped into 7 categories: “communication,” “collaboration,” “cooperation,” “conflict,” “exchange,” “return to the exhibition,” and “physical objects manipulation.” In the collaboration category, we identified three codes. The “Give suggestions” code identifies the behavior of giving suggestions or instructions on how to interact with the exhibit; we found parents giving verbal suggestions to their children when the children were manipulating physical objects. One mother said to her son, “Do you think the dinosaur was going to survive here?” when he manipulated a dinosaur in the Archean Era; the child then grasped the meteorite, which belongs to this era. The “Help in the discovery of interactions” code identifies the behavior of communicating perceived feedback to other participants in order to help them discover interactions and feedback. The “Invite people to interact” code identifies the behavior of inviting another person to interact together; for example, a girl who was manipulating the Tyrannosaurus invited her mom to grasp the triceratops to experience together the effects of confronting them.

In some groups, we identified conflict cases when more than one child wanted to manipulate the same physical object. For these cases, we defined a “Take physical objects from another child” code, grouped in the conflict category. We also defined the “Share physical objects” code, grouped in the cooperation category, and the “Exchange physical objects” code, grouped in the exchange category. Children shared and exchanged physical objects, collaborating with each other in raising hypotheses about the (deep time) domain and in discovering responses by talking about the feedback they perceived with their parents and siblings. One child said to another: “Now I am the carnivore” (literally “living” the dinosaur, pretending to be it). Other behavior identified was grouped in the return to the exhibition category: there were many cases in which children returned with their parents, school children returned with other school children, or children returned alone after their first experience with the environment. Thus, the results suggest the visitors’ enthusiasm towards experiencing the system.

In the physical objects manipulation category, we identified two ways of manipulating objects to interact with the exhibit: “Place and manipulate objects on the tabletop display” and “Manipulate objects outside the tabletop display.” The results revealed that participants perceived that to interact with the exhibit they had to place the objects on the tabletop display and move them around while observing changes in the environment (feedback responses). The results also revealed that participants perceived the physical affordances of the objects during their interaction with the exhibit. In the case of the dragonfly, both children and adults were captivated by its ability to move its wings. Some children simply held the dragonfly on the table surface to see its digital representation; others manipulated the dragonfly as if it were a real insect, making it fly outside the table surface while it continued to move its wings. Figure 8 shows a group of visitors interacting with TangiTime.

Fig. 8 Group of visitors interacting with TangiTime. The image shows a group of visitors interacting with TangiTime. Left: a mom with her son confronting the dinosaurs; right: a group of schoolchildren manipulating the dragonfly on the tabletop display

As for research question RQ2, concerning which elements illustrate the socioenactive experience, we captured the dynamics of the interactions between the digital, physical, and social components of the environment, as shown in Fig. 9. In Fig. 9, the “physical component” (P) includes the tangible tabletop and its elements (T) and the five physical objects (O). The “digital component” (D) includes the software to detect the physical objects, the client application to control the graphic projections (simulations, digital images of physical objects, and background images), and the interconnection with the physical objects through an IoT platform. Finally, the “social component” (S) includes the people (P1, P2 representing groups) who interact with the exhibit and among themselves.

Fig. 9 Dynamics of interactions between the digital, the physical, and the social components of the environment. The image shows the dynamics of interactions between the digital, the physical, and the social components of the environment. The physical component (P) includes the tangible tabletop and its elements (T) and the physical objects (O). The digital component (D) includes the software to detect the physical objects, the client application to control the graphic projections (simulations, digital images of the physical objects, and background images), and the interconnection with the physical objects through an IoT platform. Finally, the social component (S) includes people (P1, P2 representing groups) that interact with the exhibit and among themselves

The physical and social coupling is given by people’s actions and manipulation of physical objects on or outside the interactive surface and by their perception of physical changes in the objects and the environment. An instance of the physical and digital coupling occurs when the software detects the objects and their positions, then projects avatars on the tabletop display that move according to the position and angle of the physical objects, while the physical objects themselves are provided with movement or lights.

In P, there is a two-way relationship between the tangible tabletop (T) and the physical objects (O); an instance of this bilateral relationship is the movement of the dragonfly’s wings when it is detected on the tabletop. Besides that, the loop from O to itself represents that one object can generate feedback responses in another object, as in the case of the dinosaurs.

Discussion

Our research work focused on investigating the socioenactive experience of interaction in the context of the ubiquity of computational systems. For this purpose, we designed and constructed TangiTime, a tangible tabletop with technology-enhanced objects, proposed for an educational exhibit on the abstract and complex concept of “deep time.”

Differently from the regular concept of interactive systems, in which emphasis is put on one individual accomplishing a goal-based task through interaction with a (digital) system [18], the enactive approach is based on concepts such as autonomy, sense-making, embodiment, and experience [17]. TangiTime materialized the embodiment concept through embodied metaphors and interactions in which people enact knowledge by manipulating physical objects. The autonomy concept is illustrated in TangiTime when it allows people to be autonomous in exploring, grasping, and manipulating the physical objects to interact with the exhibit, without a sequence of actions predefined (intended) by designers. The sense-making concept emerges when people, by interacting with TangiTime and with others, perceive the effects of their actions on the environment, and these effects lead to new actions and experiences. Thus, by articulating these concepts and recognizing the role of the intersubjective aspects of exploring the installation together with others, TangiTime enables a (socio)enactive experience of interaction. The socioenactive experience emerges from the coupling of three elements in the person-environment interaction: the digital, the physical, and the social, as illustrated in Fig. 9. Within this environment, technology is ubiquitous and part of a two-way feedback system among its parts. Regarding TangiTime, the “socio” element of a socioenactive experience emerges, at least, in two cases: (a) when the interaction experience of one person (P1) is perceived by another (P2) and affects P2’s sequence of actions, i.e., P2’s actions are “guided” by the perception of actions also performed by P1 in the environment; and (b) when the interaction experience of P1 is influenced by P2 as a result of P2 saying something or acting in a specific way.

In TangiTime, we used the metaphor of exploring geological eras to approach the deep time concept. According to the design comparison of TangiTime with related works (see Table 1), TangiTime additionally includes the enacted form of physical embodiment (physicality: manipulated, enacted). The enacted value centers more on acting/enacting out knowledge through the physical action of statements or sequences. Children enacted the geological eras by acting on objects and on the randomly projected images of different landscapes, by perceiving the actions of others on the environment (physical, digital, social), and by perceiving the system’s responses to those (physical, digital, social) actions. In the exploratory study, children enacted the dragonfly as a “real” insect and moved it through the air in flight (i.e., pretending to be in the insect’s body, embodying it). In the dinosaurs’ case, children also acted as dinosaurs and played at facing each other.

TangiTime objects had associated actions and effects, but they were not perceived as controls for interaction; the “computer system” disappears into the background (as proposed in Weiser’s ubiquitous computing). TangiTime objects are designed with literal correspondence to the objects of the domain (cf. Table 1); the literal value refers to the degree to which the physical properties of objects are closely mapped to the learning concepts, and each of our tangible objects represents a natural component or a living organism that belonged to a certain geological era. In the mapping dimension of the design comparison (Table 1), TangiTime was considered co-located and embedded: co-located when the input and output of the objects are contiguous (avatars or simulations contiguous to the physical objects), and embedded when input and output occur in the same object (the dinosaurs’ eyes lit green or red, and the dragonfly moves its wings). In the transforms dimension, TangiTime implemented all three relations of transformation between physical or digital actions and their corresponding effects in the environment: physical action to physical effect (PPt), physical action to digital effect (PDt), and digital action to physical effect (DPt). The PPt form occurs when manipulating the dragonfly and dinosaurs, since the outputs occur on the objects themselves. The PDt form occurs when simulations or avatars are projected onto the table while objects are manipulated. The DPt form occurs when the system randomly projects images of geological eras and the children manipulate the objects accordingly. Regarding the mode of play, TangiTime incorporates collaborative social interaction. Finally, the educational content is situated in a mixed environment, virtual and physical (environment: mixed); nevertheless, we should note the importance we give to the social dimension within the whole experience of (inter)acting in the environment.

In summary, TangiTime allowed users two ways of manipulating objects: (a) placing and moving objects on the tabletop display, and (b) manipulating objects outside the tabletop display while continuing to receive feedback on the physical object itself. All the mentioned related works require tangible objects to be placed on the tabletop display to be detected by the software system. In addition to this functionality, TangiTime objects can also be manipulated outside the tabletop display and continue receiving feedback on the objects themselves (e.g., the dragonfly continues moving its wings when outside the tabletop). Additionally, the communication between objects in TangiTime allowed a physical object to generate a feedback response in another physical object.

The affordances of the physical objects and the tangible tabletop also played an important role in participants’ engagement. As shown in the results, participants enjoyed flying the dragonfly or confronting the two dinosaurs. Transforming physical actions into physical effects in the environment enables a concrete (and embodied) experience with elements of the knowledge domain, i.e., the actions happen in the physical environment (not exclusively in a digital one). In our exploratory literature review, using the design framework of Melcer and Isbister [25], we identified two related works with the PPt (physical action to physical effect) transform dimension; differently from TangiTime, they do not incorporate manipulation outside the tabletop display or communication between objects, as made possible with TangiTime.

Overall, contributions to the HCI field and takeaways from the work for researchers and designers can be summarized as follows:

  • Making tangible the experience with an abstract (lacking direct sensory referents) concept represents a design domain which can benefit from enactivist phenomenological theoretical background;

  • The nature of the interaction with tangible and ubiquitous environments should consider the relational domain of the social-physical-digital elements constituting the environment;

  • Embodiment, autonomy, sense-making, and the intersubjective aspects of interaction are part of the socioenactive experience made possible through the proposed system.

Conclusion

The ubiquity of contemporary computational technology challenges the well-established theoretical bases of the design and evaluation of such systems. In this work, we drew on the enactivist basis of cognitive science to investigate the socioenactive experience of interaction in the context of the ubiquity of computational systems, by designing and discussing TangiTime, an educational exhibit to explore the abstract and complex concept of “deep time.”

Emerging technologies, such as those related to the IoT, have opened opportunities to explore the benefits of joining technology-enhanced objects with TUIs. The results of this study showed that using technology-enhanced objects with tangible tabletops enabled physical actions to be transformed into physical effects in the environment, allowing an embodied experience with the knowledge domain in the designed environment. In addition to literature results arguing that tangible tabletops support collaboration and learning, this work raised the socioenactive aspects of the experience with TangiTime.

Moreover, this study suggests that the ubiquity of computational systems, grounded in the theoretical framework of phenomenology, strengthened the creation of a socioenactive experience by fostering multimodal perception and participant engagement. Understanding the socioenactive experience of interaction with tangible tabletops may help designers make informed decisions regarding such systems. This study is part of a series of ongoing efforts towards that better understanding.

Availability of data and materials

All data generated or analyzed during this study are included in this published article.

Declarations

Abbreviations

P: Physical component
D: Digital component
S: Social component
T: Tangible tabletop
O: Physical objects

References

  1. Weiser M (1999) The computer for the 21st century. SIGMOBILE Mob Comput Commun Rev 3(3):3–11. https://doi.org/10.1145/329124.329126.


  2. Dourish P (2001) Where the action is: the foundations of embodied interaction. MIT Press, Cambridge, MA, USA.


  3. Suchman LA (1987) Plans and situated actions: the problem of human-machine communication. Cambridge University Press, New York, NY, USA.


  4. Suchman LA (2006) Human-machine reconfigurations: plans and situated actions. Cambridge University Press, New York, NY, USA.


  5. Winograd T, Flores F (1987) Understanding computers and cognition: a new foundation for design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.


  6. Ishii H, Ullmer B (1997) Tangible bits: towards seamless interfaces between people, bits and atoms In: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 234–241. https://doi.org/10.1145/258549.258715.

  7. Ishii H (2008) Tangible bits: beyond pixels In: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction. https://doi.org/10.1145/1347390.1347392.

  8. Dillenbourg P, Evans M (2011) Interactive tabletops in education. Int J Comput-Supported Collab Learn 6(4):491–514. https://doi.org/10.1007/s11412-011-9127-7.


  9. Schneider B, Jermann P, Zufferey G, Dillenbourg P (2011) Benefits of a tangible interface for collaborative learning and interaction. IEEE Trans Learn Technol 4(3):222–232.


  10. Marshall P (2007) Do tangible interfaces enhance learning? In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, 163–170. https://doi.org/10.1145/1226969.1227004.

  11. Xie L, Antle AN, Motamedi N (2008) Are tangibles more fun? Comparing children’s enjoyment and engagement using physical, graphical and tangible user interfaces In: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, 191–198. https://doi.org/10.1145/1347390.1347433.

  12. De Raffaele C, Smith S, Gemikonakli O (2018) An active tangible user interface framework for teaching and learning artificial intelligence In: 23rd International Conference on Intelligent User Interfaces, 535–546. https://doi.org/10.1145/3172944.3172976.

  13. Schneider B, Blikstein P, Mackay W (2012) Combinatorix: a tangible user interface that supports collaborative learning of probabilities In: Proceedings of the 2012 ACM International Conference on Interactive Tabletops and Surfaces, 129–132. https://doi.org/10.1145/2396636.2396656.

  14. Varela FJ, Thompson E, Rosch E (2016) The embodied mind: cognitive science and human experience (revised edition). MIT Press, Cambridge, MA, USA.


  15. Baranauskas MCC (2015) Socio-enactive systems: investigating new dimensions in the design of interaction mediated by information and communication technologies. (FAPESP Thematic Project 2015/16528-0). Unpublished document.

  16. Bruner JS (1966) Toward a theory of instruction. Harvard University Press, USA.


  17. Thompson E, Stapleton M (2009) Making sense of sense-making: reflections on enactive and extended mind theories. Topoi 28(1):23–30. https://doi.org/10.1007/s11245-008-9043-2.


  18. Kaipainen M, Ravaja N, Tikka P, Vuori R, Pugliese R, Rapino M, Takala T (2011) Enactive systems and enactive media: embodied human-machine coupling beyond interfaces. Leonardo 44(5):433–438. https://doi.org/10.1162/LEON_a_00244.


  19. Ma J, Sindorf L, Liao I, Frazier J (2015) Using a tangible versus a multi-touch graphical user interface to support data exploration at a museum exhibit In: Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, 33–40. https://doi.org/10.1145/2677199.2680555.

  20. Chu JH, Clifton P, Harley D, Pavao J, Mazalek A (2015) Mapping place: supporting cultural learning through a Lukasa-inspired tangible tabletop museum exhibit In: Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, 261–268. https://doi.org/10.1145/2677199.2680559.

  21. De Bérigny C, Gough P, Faleh M, Woolsey E (2014) Tangible user interface design for climate change education in interactive installation art. Leonardo 47:451–456.


  22. Loparev A, Westendorf L, Flemings M, Cho J, Littrell R, Scholze A, Shaer O (2017) BacPack: exploring the role of tangibles in a museum exhibit for bio-design In: Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, 111–120. https://doi.org/10.1145/3024969.3025000.

  23. Morita Y, Setozaki N (2017) Learning by tangible learning system in science class.

  24. Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of things (IoT): a vision, architectural elements, and future directions. Futur Gener Comput Syst 29(7):1645–1660. https://doi.org/10.1016/j.future.2013.01.010.


  25. Melcer EF, Isbister K (2016) Bridging the physical divide: a design framework for embodied learning games and simulations In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2225–2233. https://doi.org/10.1145/2851581.2892455.

  26. Melcer EF (2018) Learning with the body: understanding the design space of embodied educational technology. PhD thesis. New York University, Tandon School of Engineering.

  27. Ens B, Hincapié-Ramos JD, Irani P (2014) Ethereal planes: a design framework for 2D information space in 3D mixed reality environments In: Proceedings of the 2nd ACM Symposium on Spatial User Interaction, 2–12.

  28. Esteves A (2012) Designing tangible interaction for embodied facilitation In: Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, 395–396. https://doi.org/10.1145/2148131.2148231.

  29. Antle AN, Bevans A, Tanenbaum J, Seaborn K, Wang S (2010) Futura: design for collaborative learning and game play on a multi-touch digital tabletop In: Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, 93–100. https://doi.org/10.1145/1935701.1935721.

  30. Price S, Jewitt C (2013) A multimodal approach to examining “embodiment” in tangible learning environments In: Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction, 43–50. https://doi.org/10.1145/2460625.2460632.

  31. Mora-Guiard J, Pares N (2014) “Child as the measure of all things”: the body as a referent in designing a museum exhibit to understand the nanoscale In: Proceedings of the 2014 Conference on Interaction Design and Children, 27–36. https://doi.org/10.1145/2593968.2593985.

  32. Antle AN, Wise AF, Hall A, Nowroozi S, Tan P, Warren J, Eckersley R, Fan M (2013) Youtopia: a collaborative, tangible, multi-touch, sustainability learning activity In: Proceedings of the 12th International Conference on Interaction Design and Children, 565–568. https://doi.org/10.1145/2485760.2485866.

  33. Saenz M, Strunk J, Chu SL, Seo JH (2015) Touch wire: interactive tangible electricity game for kids In: Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, 655–659. https://doi.org/10.1145/2677199.2687912.

  34. Dalrymple GB (2001) The age of the earth in the twentieth century: a problem (mostly) solved. Geol Soc Lond Spec Publ 190:205–221.


  35. Hutton J (1788) Theory of the earth; or an investigation of the laws observable in the composition, dissolution, and restoration of land upon the globe. Trans R Soc Edinb 1(2):209–304.

  36. Geological Eras Website. https://brasilescola.uol.com.br/geografia/eras-geologicas.html.

  37. Schwanenflugel PJ (1991) Why are abstract concepts hard to understand? In: Schwanenflugel PJ (ed) The psychology of word meanings. Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 223–250.

  38. Mendoza YLM, Baranauskas MCC (2019) Enhancing a tangible tabletop with embedded-technology objects for experiencing deep time In: Anais do XXV Workshop de Informática na Escola, 598–607.. SBC, Porto Alegre, RS, Brasil. https://sol.sbc.org.br/index.php/wie/article/view/13208.


  39. Kaltenbrunner M, Bencina R (2007) reacTIVision: a computer-vision framework for table-based tangible interaction In: Proceedings of the 1st International Conference on Tangible and Embedded Interaction, 69–74. https://doi.org/10.1145/1226969.1226983.

  40. ReacTIVision Website. http://reactivision.sourceforge.net/.

  41. Kaltenbrunner M, Bovermann T, Bencina R, Costanza E (2005) TUIO: a protocol for table-top tangible user interfaces In: Proceedings of the 6th International Workshop on Gesture in Human-Computer Interaction and Simulation (GW 2005).

  42. Processing Website. https://processing.org/.

  43. Shiftr.io Website. https://shiftr.io/.

  44. MQTT Website. http://mqtt.org/.

  45. Lazar J, Feng JH, Hochheiser H (2017) Research methods in human-computer interaction, 2nd edn. Morgan Kaufmann/Elsevier, Cambridge, MA, USA.



Acknowledgements

The authors thank the InterHAD team for their collaboration in different phases of the project.

Funding

This work is supported by the São Paulo Research Foundation (FAPESP) (grant #2015/16528-0), National Council for Scientific and Technological Development (CNPq) (grant #306272/2017-2), and Coordination for the Improvement of Higher Education Personnel (CAPES) (grant #173989/2017). This work is part of a project that was approved by a research ethics committee (CAAE 72413817.3.0000.5404).

Author information

Authors and Affiliations

Authors

Contributions

All authors were involved in design, coordination, and evaluation of the study. All authors read and approved the final version for submission.

Corresponding author

Correspondence to Yusseli Lizeth Méndez Mendoza.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Méndez Mendoza, Y.L., Baranauskas, M.C.C. Designing a tangible tabletop installation and enacting a socioenactive experience with TangiTime. J Braz Comput Soc 27, 9 (2021). https://doi.org/10.1186/s13173-021-00112-y

