
Descendent 


A performance for violin, dance and machine learning, 2022. By Peter A C Nelson, Roberto Alonso Trillo and Chen Jie.

The Descendent project began as an artistic collaboration between the Hong Kong dancer Sudhee Liao, Dr Roberto Alonso Trillo (HKBU Music Department & Augmented Creativity Lab member) and Dr Peter A C Nelson (HKBU Academy of Visual Arts & Augmented Creativity Lab member), in which we began experimenting with generating sound from touch-based sensors and visualising motion using an Xbox Kinect.
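By way of illustration, the sketch below shows one simple way a touch-sensor stream could drive synthesised sound. The sensor values are simulated and the pitch mapping is an assumption made for the example; it is not our actual sensor-to-sound pipeline, which ran live rather than writing audio to a file.

```python
# Illustrative sketch only: mapping a stream of touch-sensor readings to a
# synthesised tone, written out as a WAV file. The readings are simulated;
# in an installation they might arrive over serial from a microcontroller.
import numpy as np
import wave

SAMPLE_RATE = 44100

def sensor_to_pitch(value, low_hz=196.0, high_hz=784.0):
    """Map a normalised sensor reading (0..1) to a pitch in the violin range."""
    return low_hz + value * (high_hz - low_hz)

# Simulated sensor stream: one reading per 50 ms block.
readings = np.abs(np.sin(np.linspace(0, 3 * np.pi, 60)))

blocks = []
phase = 0.0
for r in readings:
    freq = sensor_to_pitch(r)
    t = np.arange(int(SAMPLE_RATE * 0.05)) / SAMPLE_RATE
    blocks.append(0.4 * np.sin(2 * np.pi * freq * t + phase))
    phase += 2 * np.pi * freq * len(t) / SAMPLE_RATE  # keep phase continuous

signal = np.concatenate(blocks)
with wave.open("touch_tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```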

[Conference Presentation]  

QS Apple 2021

 

November 1-3, 2021. Descendent was presented as a keystone interdisciplinary project representing current research into art and technology at Hong Kong Baptist University.

We later speculated on whether machine learning might be used to introduce motion synthesis and artificial dance into the apparatus of the project, and invited the computer science researcher Dr Chen Jie (HKBU Computer Science Department) to join. On a technical level, we examined a number of approaches to synthesising dance motion from musical input. What we were looking for was a performance system based on a chain of gestural translation: Roberto performs a musical gesture, which is interpreted in dance by Sudhee; the same violin gesture is also interpreted by a machine learning system trained on Sudhee's motion, which produces its own artificial response to Roberto's playing. This places Roberto in a gestural dialogue with two dancers, one real and one artificial, and places Sudhee in a dialogue with Roberto and an artificial version of herself. We also created an electrified dance mat that produced synthesised violin sound when Sudhee touched it with her body.

This assemblage of technological feedback loops formed the basis of a performance exploring philosophical concepts in art and technology in a way that could be transparent and engaging for an audience. If a machine learning model trained on the movements of a dancer creates new gestures in response to musical input, can we say that it is dancing? Obviously the machine learning system cannot consciously enjoy the music or understand its own output as dance; nevertheless, even with this crude combination of music, dance and motion synthesis, we wanted to encourage the audience to speculate on the moment when we might consider a machine learning system a performer rather than simply a performance tool. When a human dancer dances with a synthesised version of herself, and a violinist plays with his own sounds synthesised back to him, who is leading the performance? If such a system is augmented with machine learning, could a performance create the illusion of artificial creativity and agency? The exploration of these questions spread across every aspect of the project, from choreography and musical composition to digital and physical costume design, digital and physical stage design, and our music and gesture synthesis systems.
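To make the music-to-motion idea concrete, here is a minimal sketch of one way such a mapping could be posed as sequence-to-sequence learning. The feature sizes, the GRU architecture and the 25-joint skeleton are illustrative assumptions, not the model we actually built; in practice the training pairs would align recorded violin audio with the dancer's motion-capture frames.

```python
# Illustrative sketch only: music-to-motion synthesis framed as a
# sequence model mapping audio features to skeleton poses.
import torch
import torch.nn as nn

N_AUDIO_FEATURES = 40    # e.g. MFCCs per audio frame (assumed)
N_JOINTS = 25            # Kinect-style skeleton (assumed)
POSE_DIM = N_JOINTS * 3  # x, y, z per joint

class MusicToMotion(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_AUDIO_FEATURES, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, POSE_DIM)

    def forward(self, audio_frames):
        # audio_frames: (batch, time, N_AUDIO_FEATURES)
        h, _ = self.rnn(audio_frames)
        return self.head(h)  # (batch, time, POSE_DIM): one pose per frame

model = MusicToMotion()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Random tensors stand in for aligned (violin audio, motion capture) pairs.
audio = torch.randn(8, 200, N_AUDIO_FEATURES)
poses = torch.randn(8, 200, POSE_DIM)

optimiser.zero_grad()
loss = loss_fn(model(audio), poses)
loss.backward()
optimiser.step()
```

At performance time, a model of this kind would be fed features from the live violin and its predicted pose sequence rendered as the artificial dancer, closing the feedback loop described above.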
