Years after the singularity event...

... and the resulting moratorium on all A.I. research, you are called to participate in a specially authorized study: to enter the defunct Machine Learning and Movement Lab (ML2), where a hastily abandoned creation, the Artificial Movement Intelligence, or AMI (AH-mee), waits in solitude to fulfill its final directive from the mysterious Dr. F.N. Stein:

// dance with me //

A(m)I(?) is an immersive, participatory, movement-driven exploration of collaboration between humans and machines, asking: "What makes a good dance partner, what makes a good person, and can a machine ever be either one?"

Click here to download the project description (PDF).

Scroll down for work-in-progress footage and some technology details!

Early tests of AMI’s text instructions. AMI guides participants through a series of movement exercises as it “gathers data”, seeking to teach itself to dance.

A(m)I(?), early tests of systems and programming. Networking 8 iMacs allows for 16-channel spatial sound over the Macs' built-in speakers, as well as use of the 8 displays as one continuous video surface. Text-to-speech of lines chosen at random from ‘Frankenstein’ drives audio-reactive flashes of light, and drives the hairs on my neck right into the air. QLab handles audio and video playback (via NDI), audio-to-visual reactivity (via OSC), and TTS generation (via AppleScript); Max handles audio spatialization (via IRCAM SPAT) and camera-to-audio reactivity (via Jitter/Vizzie). A rough sketch of the random-line TTS step follows.
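For flavor, here's a minimal sketch of that pipeline in Python, assuming macOS's built-in `say` command and the python-osc package (`pip install python-osc`). In the piece itself this lives in QLab and AppleScript; the OSC address below is a hypothetical placeholder, not a real QLab cue command, and where the real flashes track the speech audio itself, this sketch just toggles a flash around the utterance.

```python
# Sketch: speak a random line from 'Frankenstein' and cue a light flash over OSC.
# Assumes macOS (for the `say` command) and `pip install python-osc`.
# The OSC address is a hypothetical stand-in; 53000 is QLab's default OSC port.
import random
import subprocess
from pythonosc.udp_client import SimpleUDPClient

OSC_HOST = "127.0.0.1"   # machine running the video/lighting cues (placeholder)
OSC_PORT = 53000         # QLab's default OSC listening port

def random_line(path="frankenstein.txt"):
    """Pick one non-empty line from a plain-text copy of the novel."""
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    return random.choice(lines)

def speak_and_flash():
    client = SimpleUDPClient(OSC_HOST, OSC_PORT)
    line = random_line()
    client.send_message("/flash/intensity", 1.0)  # hypothetical flash-on address
    subprocess.run(["say", line])                 # blocks until speech finishes
    client.send_message("/flash/intensity", 0.0)  # flash off

if __name__ == "__main__":
    speak_and_flash()
```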

Audio on! Basic testing of audio in reaction to movement, along with visual feedback.
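The camera analysis behind that test happens in Max via Jitter/Vizzie; as an illustration of the underlying idea only, here's a Python/OpenCV stand-in that measures frame-to-frame motion and streams it out as an OSC value (the port and address are assumptions):

```python
# Sketch: crude camera-to-audio reactivity via frame differencing.
# The installation uses Max/Jitter/Vizzie for this; OpenCV stands in here.
# Assumes `pip install opencv-python python-osc`; OSC address is a placeholder.
import cv2
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # hypothetical Max udpreceive port
cap = cv2.VideoCapture(0)                    # default webcam

ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera found")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute pixel difference ~ how much movement happened this frame.
    motion = cv2.absdiff(gray, prev).mean() / 255.0
    client.send_message("/motion/amount", float(motion))  # drive audio in Max
    prev = gray

cap.release()
```

On the Max side, a [udpreceive 7400] paired with [route /motion/amount] could map that value onto gain or filter cutoff to get the kind of movement-reactive audio shown in the test.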