Digital Movement: Essays in Motion Technology and Performance

I was very honoured to contribute an essay to this fantastic book, alongside such a distinguished group of authors.

My chapter is called "I-CARE-US: Flying Robots and Human-Robot Interaction in Digital Performance" and focuses on the emergent exploration of drone aesthetics in Digital Performance, tracing its origins in Futurist aerial theatre and reflecting on my own artistic practice.

I think I will not be breaking any copyright law by sharing an excerpt:

The Futurist aerial theatre revisited: past and current perspectives on aerial robotic movement

Fernando Nabais

Introduction

The impulse to create artificial creatures has been manifested throughout the history of human civilization in social, religious and artistic expressions. The ancient Greek story of Galatea – a statue brought to life by the goddess Aphrodite – or the Jewish legend of the Golem, a speechless anthropoid made of clay by humans, are just some examples of this human fascination with the creation of life and its representations. This pursuit has materialized in both physical and virtual forms. From the millenary shadow play to the 18th-century phantasmagoria, and through techniques such as the famous Pepper's Ghost[1], man has attempted to give life to the unseen. These constructs served many purposes: to impress, sometimes to enhance the narrative power of storytelling, to induce fear in audiences or merely to entertain a crowd.

Alongside these attempts at creating life in virtual forms, humankind has also tried to recreate it in tangible, physical forms. Steve Dixon (2007, 281-284) traces the “long historical lineage” of robot artworks and performances back to millenary forms of automata. These mechanical anthropomorphic figures capable of movement date back to the third century BCE. Dixon mentions Joseph Needham’s description of “a vast array of mechanical figures, animals, birds, and fishes constructed in ancient China, as well as mechanized chariots and flying automaton that he dates circa 380 BC” (Dixon, 2007, 282). Each new stage of technological development in human history generated new attempts at creating ever more sophisticated forms and concepts of artificial beings. Technology communities have devoted their efforts to solving the many mechanical and control problems involved in developing machines that exhibit effective movement and behaviour. With ever-increasing computing power, these machines are also gaining cognitive abilities, making decisions and collaborating with humans.

In parallel, “robotic artists with their work have been questioning our perception of science and technology as well as its influence on society, staging robots in typical human analogies and situations” (Ghedini and Bergamasco, 2010, 4).

In areas like social robotics, this mutual influence of science and art is increasingly evident and mutually beneficial.

[1] “Pepper's Ghost uses a large pane of plate glass and carefully controlled illumination to allow audience members to view the reflections of hidden performers alongside performers who are seen directly on stage… The effect became wildly popular among Victorian audiences and was used in lectures, traditional full-length theatre pieces, novelty presentations and touring fairground exhibitions during the latter part of the nineteenth and the beginning of the twentieth centuries.” (Reilly, 2013, 198)

Digital Movement – Palgrave Macmillan UK, ISBN 9781137430403

VicTour, Virtual Interactive Character Tour Guide

VicTour is YLabs's final proof-of-concept resulting from its latest software research project, CHAMELEON. During the development process the project generated other examples already published, such as the “mother of all depth sensing demos”, long before the whole Kinect frenzy (http://www.youtube.com/watch?v=qXcIZ1…). This depth sensing demo was distinguished with the first Auggie award (the Augmented Reality Oscars) at the Augmented Reality Event 2010.
CHAMELEON emerged from the strong belief that the research and development of next-generation intelligent interaction devices will rely on integrative efforts across several research fields. The main objective of the project was therefore to explore and implement architectures and practical design methodologies for embodied intelligent interaction, with a focus on affective and cognitive computation models, as well as autonomous adaptation and learning of system components and parameters in dynamic multi-modal and multi-user environments. More specifically, research focused on:
• Situated cognition and human action models to support development of natural interaction systems;
• Embodied agent architectures suitable to the scalable integration of learning and affective computing algorithms;
• Unsupervised, reinforcement and evolutionary learning techniques for autonomous agents in the context of multi-modal natural interaction installations;
• Affective computing techniques for the design of emotion-based agents capable of interacting with users in a natural and believable way (a toy sketch of this idea follows the list);
• Data mining techniques capable of analyzing and extracting high-level interaction trends and user features to assist in the self-adaptation of interactive systems.
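
To make the affective computing item a little more concrete, here is a minimal, purely illustrative sketch of an emotion-based agent loop. Everything in it (the valence/arousal state, the decay factor, the behaviour names) is an assumption chosen for illustration; it is not CHAMELEON's actual architecture.

# Hypothetical sketch of an emotion-based agent in the spirit of the
# affective computing research line above. All names are illustrative.
from dataclasses import dataclass

@dataclass
class AffectiveState:
    valence: float = 0.0   # negative..positive feeling, in [-1, 1]
    arousal: float = 0.0   # calm..excited, in [0, 1]

class AffectiveAgent:
    def __init__(self, decay: float = 0.95):
        self.state = AffectiveState()
        self.decay = decay  # emotions fade toward neutral over time

    def observe(self, stimulus_valence: float, intensity: float) -> None:
        """Blend a user stimulus (e.g. a smile or a greeting) into the state."""
        s = self.state
        s.valence = max(-1.0, min(1.0, s.valence + stimulus_valence * intensity))
        s.arousal = max(0.0, min(1.0, s.arousal + intensity))

    def step(self) -> str:
        """Decay the state and pick a behaviour for the embodied character."""
        self.state.valence *= self.decay
        self.state.arousal *= self.decay
        if self.state.arousal < 0.2:
            return "idle"
        return "greet_warmly" if self.state.valence > 0 else "act_reserved"

agent = AffectiveAgent()
agent.observe(stimulus_valence=0.8, intensity=0.5)  # a friendly visitor
print(agent.step())  # -> "greet_warmly"

The point of the decay factor is that the character's mood drifts back to neutral between interactions, which is one simple way to make behaviour feel believable rather than switch-like.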

VicTour was created with the intention of validating the CHAMELEON project's entire set of achievements. It inherited some of YVision's previously developed augmented reality functionalities, like depth extraction with 2D cameras, collision detection and dynamic occlusions, and materialized into an intelligent and empathetic virtual character that now lives in YDreams' showroom and is partly responsible for presenting it to our visitors.
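
Of those inherited building blocks, dynamic occlusion is the easiest to sketch. The snippet below is a minimal illustration, assuming per-pixel depth is available for both the camera frame and the rendered virtual object; it is not YVision's actual code.

# Illustrative sketch: depth-based dynamic occlusion.
# Array shapes and the "empty pixel" conventions are assumptions.
import numpy as np

def composite_with_occlusion(frame: np.ndarray,
                             depth: np.ndarray,
                             virtual_rgb: np.ndarray,
                             virtual_depth: np.ndarray) -> np.ndarray:
    """Overlay a rendered virtual object on a camera frame.

    frame:         HxWx3 camera image
    depth:         HxW real-world depth per pixel (metres)
    virtual_rgb:   HxWx3 rendered object
    virtual_depth: HxW depth of the virtual object (inf where empty)
    """
    out = frame.copy()
    # A virtual pixel wins only where the object is actually closer to the
    # camera than the real scene; otherwise the real scene occludes it.
    visible = virtual_depth < depth
    out[visible] = virtual_rgb[visible]
    return out

This per-pixel depth test is what lets a virtual character walk behind a real visitor instead of being pasted on top of them.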
This application encompasses many current computational trends, like affective computing, distributed computation, augmented reality, ambient intelligence and smart surroundings, and opens up a large spectrum of possible commercial applications that may be explored in the near future.

YDreams’ Augmented Reality experience with depth-sensing camera

YDreams' augmented-reality platform supports depth-sensing cameras. These cameras make it possible to pinpoint the 3D position of users (multiple users are supported) and objects in the environment, allowing for a truly markerless augmented reality experience. Virtual objects added to the video stream react to the positions and movements of both users and real objects.
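
For readers curious about what "pinpointing the 3D position of users" involves, here is a minimal sketch under stated assumptions: a generic depth camera with known intrinsics (the focal lengths and principal point below are placeholder values) and per-user masks coming from some person segmenter. It illustrates the general back-projection idea, not YDreams' implementation.

# Minimal sketch: back-projecting depth pixels to 3D user positions.
# Camera intrinsics below are placeholder values, not a real device's.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # hypothetical intrinsics

def pixel_to_3d(u: int, v: int, depth_m: float) -> np.ndarray:
    """Back-project image pixel (u, v) with depth in metres to camera space."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def user_positions(depth_map: np.ndarray,
                   user_masks: list[np.ndarray]) -> list[np.ndarray]:
    """One 3D centroid per detected user (masks from a person segmenter)."""
    positions = []
    for mask in user_masks:
        vs, us = np.nonzero(mask)          # pixel coordinates of this user
        depths = depth_map[vs, us]         # their measured depths
        points = np.stack([pixel_to_3d(u, v, d)
                           for u, v, d in zip(us, vs, depths)])
        positions.append(points.mean(axis=0))
    return positions

Once each user has a 3D position, virtual objects can be driven by simple distance checks against it, which is what makes the markerless interaction possible.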

Unlike similar applications that use 2D cameras, this one is not sensitive to light changes, camera movements or noisy backgrounds. This is a simple demo to exemplify the user experience.