Fabian Campuzano


AI Prototype of a Virtual Musician for Improvisation and Ensemble

Music has always been a profoundly human expression, a reflection of our emotions, thoughts, and experiences. But what if we could create something that not only mimics that humanity, but reinterprets it from an entirely different perspective? What if we could bring to life an artificial entity that, like us, experiences a cycle of learning, growth, and transformation?

In this work, I explore the possibility of an artificial intelligence prototype, developed by me using Max/MSP and based on Markov Chains, that evolves through real-time interaction with the sound signal of a performer. This AI is conceived not as a finished product, but as a being in constant development, one that begins life clumsily, barely able to walk or speak, but that gradually acquires the ability to interpret, respond to, and perform music using its own emerging language.
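The core Markov idea can be sketched in a few lines. The actual system is built in Max/MSP, so the following Python is only an illustration of the underlying logic, not the patch itself; the function names (`train`, `generate`) and the use of MIDI pitch numbers are my own assumptions for the sketch.

```python
import random
from collections import defaultdict

def train(notes):
    """Count pitch-to-pitch transitions in an observed sequence.

    `notes` is a list of MIDI pitch numbers heard from the performer.
    A first-order chain only remembers the immediately preceding pitch.
    """
    transitions = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length):
    """Random-walk the transition table to produce a new phrase."""
    phrase = [start]
    for _ in range(length - 1):
        options = transitions.get(phrase[-1])
        if not options:  # dead end: restart from any known state
            options = list(transitions.keys())
        phrase.append(random.choice(options))
    return phrase

# Hypothetical performer input: a short diatonic figure around middle C.
performer_input = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
table = train(performer_input)
print(generate(table, start=60, length=8))
```

Because the table is rebuilt continuously from what the performer plays, the generated phrases stay statistically close to the live input while still diverging from it, which is what gives the system its "emerging language." A deeper (higher-order) chain, as in the third example below, would condition each choice on the last two or more pitches instead of one.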

This system acts as a mirror: not only reflecting the musician’s input, but also revealing aspects of ourselves we may not always perceive. It becomes an inseparable companion, one we do not choose, but with whom we learn to coexist, gradually adapting to each other. The AI is not intended as a replacement for the human performer, but rather as a collaborative partner: one that learns, reacts, and evolves in dialogue with the musician.

By coupling real-time audio analysis with the temporal nature of music, this project introduces a virtual musician: it listens, processes, and generates phrases that reflect and reframe the performer's input. In the end, this collaboration becomes more than the sum of its parts: a fleeting moment in which human and machine meet, learn from each other, and create something neither could achieve alone. A new form of artistic expression emerges, one where the boundaries between creator and creation dissolve and the future of musical dialogue begins to take shape.

The following section includes two videos and one audio example, accessible through the buttons below. The first presents an explanation of how the system works; the second offers an example within a live performance context; and the third is an audio-only example that explores a more advanced level of interaction using a deeper Markov Chain implementation.