


AV0 is an exploration of human-computer collaboration in the audiovisual field, dedicated to those who believe in computers as partners in the creative flow.


Why this project

In the computer-aided creative process, whether it’s wireframing a design, programming, or producing music, our actions often generate unpredicted results. I began to appreciate and consume these instances not as undesired output but as computer input into my creative flow. This led to a growing desire to replicate this behavior in a live performance, where I could assign the computer a task or part of the piece to direct, and consequently influence me, the human performing with it.

This approach sets AV0 apart from today’s audiovisual sets, where the performer is usually in control of both audio and video, with the latter often reactive to the former or in support of it. In AV0 the computer is responsible for the visuals and decides what to display, how it behaves, and the duration of each piece, leaving the performer to decide how to respond.

The piece focuses on the interaction between the two, asking the performer and the viewer basic questions. What will the computer choose to do? How will the performer respond? How are the two playing? Is it chaotic? Is it organized? How does it feel? More organic or artificial?

How it works

Performer and computer work as a duo: the former is in charge of sound, the latter of visuals.

When AV0 begins, the computer chooses a set of visuals and the duration of the piece, and communicates this information to the performer.

Once the performer starts playing, the countdown begins.

When the time is up, the computer stops all playing sounds, disables the performer’s input, and generates a new piece.
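The cycle above can be sketched in a few lines of code. The actual project is built with Max/MSP and JavaScript; the Python below, including the names and the duration range, is purely illustrative:

```python
import random
import time

# Hypothetical names: the real visual sets and durations are not documented here.
VISUAL_SETS = ["grid-roam", "pulse-lines", "stacked-layers"]

class Piece:
    def __init__(self, rng=random):
        # The computer picks the visuals and the duration of the piece.
        self.visuals = rng.choice(VISUAL_SETS)
        self.duration = rng.randint(60, 180)   # seconds (assumed range)
        self.started_at = None

    def start(self):
        # The countdown begins once the performer starts playing.
        self.started_at = time.monotonic()

    def time_left(self):
        if self.started_at is None:
            return self.duration
        return max(0.0, self.duration - (time.monotonic() - self.started_at))

    def is_over(self):
        # When time is up, sound stops and a new piece is generated.
        return self.started_at is not None and self.time_left() == 0.0

piece = Piece()
print(piece.visuals, piece.duration)
```

A new `Piece` would be constructed each time the previous one expires, so the performer never knows in advance what comes next.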


Images and sound are designed to carry no symbolic meaning, allowing the viewer to focus on the interaction between the two players.

Visuals are made of basic geometries, keeping the visual stimuli contained while still allowing the algorithm to create interesting compositions.

The use of two high-contrast tints is a reference to the two players involved: opposite to one another yet contributing to one objective.

Grid system

Visuals are arranged based on a basic grid system of rows and columns that define the layout — a common approach in graphic design.

For each new piece, the computer generates a series of grids where the number of cells defines the maximum number of elements that can be placed.

The grid’s cells also define each element’s initial location, maximum size, and roaming bounds.

Each piece is made of four layers, each with its own grid, which are then stacked on top of each other to create the final composition.
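As a rough illustration of the grid idea (the function names, grid dimensions, and random ranges below are my own assumptions, not the actual patch), a piece could be generated like this:

```python
import random

def make_grid(rows, cols):
    """Return cells as (x, y, w, h) in normalized 0..1 coordinates."""
    w, h = 1.0 / cols, 1.0 / rows
    return [(c * w, r * h, w, h) for r in range(rows) for c in range(cols)]

def make_piece(rng, layers=4, max_side=6):
    """A piece is four stacked layers, each with its own grid."""
    piece = []
    for _ in range(layers):
        rows, cols = rng.randint(1, max_side), rng.randint(1, max_side)
        cells = make_grid(rows, cols)
        # The cell count caps how many elements this layer can hold;
        # each chosen cell gives an element its start position and bounds.
        n_elements = rng.randint(1, len(cells))
        piece.append({"cells": cells, "elements": rng.sample(cells, n_elements)})
    return piece
```

Keeping the cells in normalized coordinates defers any decision about screen size to the moment of projection, which is what makes the parametric approach below possible.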

Parametric approach

With parametric visuals, the grid on which shapes are arranged can be projected on virtually any aspect ratio.

The idea came to me after attending a panel at Ableton’s Loop 2016, where Alfred Darlington, aka Daedelus, shared how he tries to make every live show different from the last, repeating himself as little as possible; more importantly, he visits the venue before each show and calibrates the set to properly fit the place.

With AV0, whether the projection screen is 3:4, 16:10, 1:1, or 10:1, the visuals can be calibrated to fit the area. Grids are then generated accordingly, affecting how shapes are generated and positioned and adding to the uniqueness of each performance.
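A minimal sketch of that projection step, assuming cells are stored in normalized 0..1 coordinates (an assumption on my part; the actual implementation is in Max/MSP and JavaScript):

```python
def project(cell, width, height):
    """Map a normalized (x, y, w, h) cell onto a screen of given pixel size."""
    x, y, w, h = cell
    return (x * width, y * height, w * width, h * height)

# The same cell projected onto a 16:10 screen and a 1:1 screen:
cell = (0.25, 0.5, 0.25, 0.5)       # normalized cell
print(project(cell, 1600, 1000))    # -> (400.0, 500.0, 400.0, 500.0)
print(project(cell, 1080, 1080))    # -> (270.0, 540.0, 270.0, 540.0)
```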

Thank you for visiting. For bookings and inquiries use the form below specifying venue location and dates.

Links and credits


Sonic Views

Sonic Views are images generated from ambient sounds. Each value of the waveform’s amplitude is associated with a color ranging from black to white. Each color is then applied to a square cell in a matrix, creating a picture where the ambient recording can be explored at its most granular level, but where the sound is left to the spectator to imagine.

C Line Subway Driver Announcing Next Stop at 42nd Street on September 7, 2015 at 8:51am

The output is meant to expose the viewer to the density and complexity of the acoustic events constantly surrounding us. While we hear a multitude of sounds, we naturally filter them, focusing on what is most relevant to us at a given moment, and our eyes play a powerful role in this, making us deaf to things that don’t matter. At a survival level, a car honking at us while we cross the street will temporarily pull our attention away from a friend talking, but in a general sense our brain decides when and what to listen to. These images offer the eyes every detail of a recorded sound. With nothing to hear but so much to look at, our ears take a moment of revenge on our brain.


Images are generated with a Max/MSP patch, which takes the absolute value of the waveform’s amplitude for each sample and multiplies it by a constant to obtain the output color.

A waveform’s amplitude values range between a minimum of -1 and a maximum of 1. The illustration above shows the output color for any given value: 0 gives black, 1 gives white, and any value in between gives a shade from dark to light gray. Values are converted to their absolute value, hence -1 also returns white, and so on for the other values. Colors are printed in square cells arranged in vertical lines, filled from top to bottom and proceeding toward the right.
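The mapping can be sketched in a few lines. The real patch runs in Max/MSP; this Python version, including the choice of 255 grayscale levels as the multiplying constant, is only illustrative:

```python
def amplitude_to_gray(sample, levels=255):
    """Map an amplitude in [-1, 1] to a grayscale value: 0 -> black, 1 -> white.

    The absolute value is taken first, so -1 also maps to white.
    """
    return round(abs(sample) * levels)

print(amplitude_to_gray(0.0))    # -> 0   (black)
print(amplitude_to_gray(1.0))    # -> 255 (white)
print(amplitude_to_gray(-1.0))   # -> 255 (absolute value: -1 is also white)
print(amplitude_to_gray(0.5))    # -> 128 (mid gray)
```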

For explanation purposes, the image above shows only 100 cells; generated from an actual recording at 44,100 Hz, those 100 samples would cover just over two milliseconds of sound. For the New York Series all recordings were chosen to be 16 seconds long and captured at a sample rate of 44,100 Hz. That means each image has a total of 16 × 44,100 = 705,600 cells.


The recording length was chosen following two criteria: it had to be long enough to give, if listened to, a good sense of where the recording comes from, and it had to contain a number of samples that would allow generating a square image.

This last point required the sample count to be a perfect square in order to create an equal number of columns and rows without losing any data in the printing process.
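The arithmetic can be checked directly: 16 seconds at 44,100 Hz yields a sample count that fills a square exactly, with no cells left over.

```python
import math

# 16 s at 44,100 Hz must form a square image (equal rows and columns).
samples = 16 * 44_100
side = math.isqrt(samples)             # integer square root

print(samples)                  # -> 705600 cells
print(side)                     # -> 840, so each image is 840 x 840 cells
print(side * side == samples)   # -> True: no data lost filling the square
```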
An audio recording is often referred to as a sample, from the digital process of sampling, which consists of capturing a series of values from a given source at a specific rate. A square was a natural choice to offer an agnostic and contained layout. By filling the square canvas with square cells, the outer shape constantly refers to the content that makes it, as a reminder of the nature of the image itself.
The following images offer a closer look at one of the pictures.






The New York Series is a collection generated from six recordings presented as a day in the life of the city. The images capture six moments of a day in New York, such as taking the subway, being at the office, going to meet a friend, and the last moments before bed.

The office in the morning on July 7, 2016 at 8:51am • Buy Print

Brushing teeth before bed at 27w 71st Street on July 7, 2016 at 11:29pm • Buy Print

Watching the storm from the living room window at 27w 71st Street on July 1, 2016 at 10:10pm • Buy Print

Walking from Canal and Broadway to Lafayette St to meet James on March 10, 2016 at 6:53pm • Buy Print

New Yorkers leaving the city for the weekend on July 1, 2016 at 4:21pm • Buy Print

C Line Subway Driver Announcing Next Stop at 42nd Street on September 7, 2015 at 8:51am • Buy Print

Thank you for reading about this project. If you have questions, want to buy or feature these works in your show, drop me a line using the form below.



An audiovisual project I started in September 2016 with the objective of exploring on-stage collaboration between performer and computer. The two work as a duo: the first in charge of sound, the second responsible for deciding the visuals’ behavior, thus affecting the way the performer plays.


Since day one I’ve been posting logs on my progress via Instagram stories.
The videos below are a collection of all my updates, divided by month. You can see my rather casual beginning, my discoveries and achievements, solid plans changing, my failures, but most importantly how much fun it has been. I hope this can be an inspiration and perhaps a resource for anyone who wants to start a project in this field.

You are welcome to follow my future logs on my Instagram page, enjoy.


  • Friday 16: Testing it all together. I think I’m on the right path.
  • Tuesday 13: Introducing bounds for roaming objects.
  • Wednesday 7: Started to implement a grid system to arrange shapes with some degree of systematic harmony.
  • Tuesday 6: Introducing titles at the beginning of each piece to give the performer a clue on what comes next.
  • Monday 5: Battling with the chicken and egg problem of what to do first: designing sound or visuals?


  • Wednesday 23: Introduced timer functionality to shut off the whole thing.
  • Tuesday 22: Second iteration of the framework is ready to process incoming events. Most of the work will be done in Javascript at this point.
  • Wednesday 16: The seed idea is validated. I’ll proceed in this direction.
  • Monday 7: After attending Ableton Loop 2016, I started entertaining the idea of a performance based on improvisation, collaboration with the computer, and proactive visuals.


  • Monday 17: First framework model based on clips and live automations starts to take shape.
  • Thursday 6: Started to create a personal reference library of sketches in Max/MSP and Javascript on GitHub.


  • Wednesday 28: Subject and storyline have been defined.
  • Saturday 17: Decided to start working on an audiovisual performance.

Follow my daily logs on my Instagram story.

Thank you for reading about this project. For bookings and inquiries please use the form below.