
I design, plan and manage creative technology projects. In each of these projects, my hard-skill contribution is usually in 3D modeling, 3D printing, animation or game prototyping. My soft-skill contribution is in conceptualizing the project goals, managing the team and writing publications. Here are some of my recent projects. You can scroll through or link directly:

1. Metabow (pitch deck images)

2. TreeGAN (A machine learning tool for artists)

As part of a successful pitch for a 200K grant, we proposed to develop machine learning tools that allow artists to work with 3D graphics.


I had worked as a 3D artist in animation and 3D printing for a number of years, and was curious to see how machine learning would create 3D objects that could be animated and printed.

We identified a large number of accessible machine learning tools for 2D graphics, but few accessible tools for 3D graphics, despite plenty of research having been done in the area.


STRATEGY

To select the most appropriate computer science research to convert into practical artistic tools, we expanded our team to include a specialized machine learning consultant and a junior computer scientist for implementation.

We identified voxel-based 3D machine learning as the most appropriate approach and made our own new dataset of 26,000 3D trees to train it on. There are very few conditional datasets of 3D models. Our tree dataset is based on images from over 300 years of art history and can be freely downloaded from our GitHub. It's a fun dataset to use, as it has a logical core but diverse organic geometry.
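As a rough illustration of the preprocessing involved, the sketch below turns a single tree mesh into a fixed-size occupancy grid using the trimesh library. The file name, the 64x64x64 resolution and the normalisation steps are assumptions for this example, not the exact pipeline we used.

import numpy as np
import trimesh

# Load one tree mesh and normalise it into a unit cube so that every model
# is voxelised at the same scale.
mesh = trimesh.load("tree_0001.obj", force="mesh")   # hypothetical file name
mesh.apply_translation(-mesh.bounds.mean(axis=0))    # centre the mesh at the origin
mesh.apply_scale(1.0 / mesh.extents.max())           # longest side becomes 1

# Voxelise at a pitch that gives roughly a 64x64x64 grid, then fill the interior.
voxels = mesh.voxelized(pitch=1.0 / 64).fill()
grid = voxels.matrix.astype(np.float32)              # dense occupancy array

# Pad or crop to exactly 64x64x64 before adding the sample to the training set.
sample = np.zeros((64, 64, 64), dtype=np.float32)
s = [min(d, 64) for d in grid.shape]
sample[:s[0], :s[1], :s[2]] = grid[:s[0], :s[1], :s[2]]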


We made our datasets, pre-trained models and pre-formatted notebooks publicly available for artists to use on Google Colab.
 
https://github.com/buganart/BUGAN
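The repository above holds the actual datasets, pre-trained models and notebooks. Purely to illustrate what a voxel-based generator looks like, here is a minimal PyTorch sketch; the architecture, latent size and 0.5 threshold are assumptions for this example, not the BUGAN implementation.

import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    # Maps a latent vector to a 64x64x64 occupancy grid with values in [0, 1].
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, 4, 1, 0), nn.BatchNorm3d(256), nn.ReLU(),  # 1 -> 4
            nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.BatchNorm3d(128), nn.ReLU(),    # 4 -> 8
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(),      # 8 -> 16
            nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.BatchNorm3d(32), nn.ReLU(),       # 16 -> 32
            nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),                         # 32 -> 64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

# Sample one tree: threshold the occupancy grid to get a boolean voxel model
# that can be meshed (e.g. with marching cubes) for animation or 3D printing.
z = torch.randn(1, 128)
with torch.no_grad():
    occupancy = VoxelGenerator()(z)            # shape (1, 1, 64, 64, 64)
tree = (occupancy > 0.5).squeeze().numpy()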

Our tools have been used to produce a number of exhibitions, 3D-printed works and NFTs with a range of partners.


3. A New Book on Games and Virtual Worlds

OPPORTUNITY

A 3-year funded Ph.D. to research virtual worlds at City University of Hong Kong, with a research exchange at the IT University of Copenhagen.


KEY QUESTIONS

  1. What are the cultural influences on virtual worlds?

  2. What is the player experience of virtual worlds?

  3. What internal and external economies support virtual worlds?

  4. What forces drive user-generated content?

CONSTRAINTS

1. Three-year fixed duration.
2. Over 10,000 computer games are released annually, limiting qualitative depth for this short study.
3. The results would be presented as a book for students and game designers.


PROJECT DESIGN

To balance scope, depth and relevance, I analysed a single game software environment (The Source Engine) and three major titles that use it.


The analysis was based on up-to-date methods from game design, media studies, ludology, digital geography, platform studies and art history. I also made and tested a new game to confirm the parameters of the software.

KEY FINDINGS:

1. The virtual worlds of single-player games link psychological patterns of quest and personal achievement to visual rewards of spatial exploration and landscape design.


2. The virtual worlds of multiplayer games and eSports mimic conventional sports in their spatial and ludological design as well as their use of gambling, virtual currencies and product placement.


3. The virtual worlds of multiplayer sandboxes tend towards networks of ‘cultural niche’ servers, where ‘prosumers’ create and consume content in a fragile balance between IP infringement and emerging creative economies.


COMPLETION

1. Findings were presented at major international computer game conferences.
 
2. Findings will be published as a book by Palgrave Macmillan in 2023.
 
3. The prototype game 'Autosave: Redoubt' was exhibited in major museums in Shanghai, Taipei and Hong Kong.


4. Descendent (A performance for machine learning & motion capture)


As part of an $8M government grant for art and A.I., Roberto Trillo and I were asked to design a pilot performance. 

OPPORTUNITY

Chance favours a prepared mind! Roberto Alonso Trillo, Sudhee Liao and I had been working on a piece for violin, dance and motion capture in our spare time.


OBJECTIVE AND SCOPE

After an initial consultation with the commissioning body, we agreed that our objective was to design a public-facing creative technology project that explored the implications of machine learning.

Short-term deliverable: A public performance ready within 90 days.

Long-term deliverable: Expand the pilot performance into a new research direction for the $8M grant.


DESIGN

To meet the short-term deliverable, we designed a performance for dance, violin, motion capture and machine learning motion synthesis.

To meet the long-term deliverable, we set a research agenda for developing a machine learning motion synthesis tool based on a new dataset of an individual dancer.

CONSULTATION

To redesign our weekend work into a project that fit the new objectives, we interviewed our researchers and identified the congruent specialisations across the team.

CHALLENGES

Motion capture performances fall out of calibration quickly.
 
Generating artificial motion is complex, but when it is done well the audience may not even notice, so the machine learning work risks going unappreciated.


SOLUTIONS

To solve the calibration problem, we designed our choreography in short acts that integrated recalibration positions.
 
To highlight our use of machine learning, we allowed the dancer to switch between live motion capture and synthesised motion.
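
The switching mechanism itself is simple to sketch. The snippet below is a hypothetical illustration only; the names (read_mocap_frame, MotionSynth, use_synth) are invented for the example and are not our production code. Each frame, the performance either passes the live pose through or substitutes a pose from a generative model.

import numpy as np

class MotionSynth:
    # Stand-in for a learned motion model: drifts the previous pose by a small random step.
    def __init__(self, n_joints=31):
        self.pose = np.zeros((n_joints, 3))

    def next_pose(self):
        self.pose += np.random.normal(scale=0.01, size=self.pose.shape)
        return self.pose.copy()

def performance_stream(read_mocap_frame, use_synth, synth):
    # Yield one pose per frame, drawn from live capture or the synthesiser.
    while True:
        live = read_mocap_frame()               # (n_joints, 3) array from the mocap system
        yield synth.next_pose() if use_synth() else live

# Example wiring with dummy inputs.
frames = performance_stream(
    read_mocap_frame=lambda: np.zeros((31, 3)),
    use_synth=lambda: np.random.rand() < 0.5,   # in practice, a cue controlled by the dancer
    synth=MotionSynth(),
)
pose = next(frames)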

COMPLETION

Our 90-day deliverable was met and the project was presented at the 2021 QS University Rankings Conference.

Our long-term deliverable was also met: we established a key research direction for the next 2 years of the sponsor's Art & AI project.


5. The MetaCreativity Lab (Design and construction of a cross-functional lab)

Six researchers were hired to generate collaborations in art and technology. They were placed in adjacent offices but there was no shared space or equipment to facilitate exchange and research. 

OPPORTUNITY

We identified an under-utilised common area and a competitive equipment grant that could fund construction and equipment.


I designed a workspace and equipment suite based on the principles I wanted to work within: openness and transparency.

FUNCTIONALITY

To anticipate future cross-functional research in creative technology, we equipped the lab with an Exxact Dual Titan GPU Machine Learning suite, a multichannel surround sound audio mixing suite and a 3D scanning, printing, VR and interaction suite.


COMPLETION

This combination of a small grant and clear design principles led to our laboratory becoming a central hub for art and technology research in Hong Kong.
