Category Archives: Projects


AV0

AV0 is an exploration of human-computer collaboration in the audio visual field, dedicated to those who believe in computers as partners in the creative flow.



Why this project

In the computer-aided creative process, whether it’s wireframing a design, programming or producing music, our actions often generate unpredicted results. I began to appreciate these instances not as undesired output but as computer input in my creative flow. This led to a growing desire to replicate this behavior in a live performance, where I could assign the computer a task, or a part of the piece, to direct and consequently influence me, the human performing with it.

This approach sets AV0 apart from today’s audio visual sets, where the performer is usually in control of both audio and video, with the latter often reactive to the former or in support of it. In AV0 the computer is responsible for the visuals: it decides what to display, how it behaves and how long each piece lasts, leaving the performer to decide how to respond.

The piece focuses on the interaction between the two, asking the performer and the viewer basic questions. What will the computer choose to do? How will the performer respond? How are the two playing? Is it chaotic? Is it organized? How does it feel? More organic or artificial?

How it works

Performer and computer work as a duo: the former is in charge of sound, the latter of visuals.

When AV0 begins, the computer chooses a set of visuals and the duration of the piece, and communicates this information to the performer.

Once the performer starts playing, the countdown begins.

When the time is up, the computer stops all playing sounds, disables the performer’s input and generates a new piece.
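Put as code, the whole loop is small. Here is a minimal JavaScript sketch (most of AV0’s framework lives in Max/MSP and JavaScript, per the logs further down); every name in it is invented, and the 1-3 minute duration range is an assumption, not the actual patch:

```javascript
// Hypothetical sketch of the AV0 lifecycle; all names are illustrative.
const VISUAL_SETS = ['roaming circles', 'stacked bars', 'grid pulses']; // placeholders

function newPiece() {
  const piece = {
    visuals: VISUAL_SETS[Math.floor(Math.random() * VISUAL_SETS.length)],
    durationMs: (60 + Math.random() * 120) * 1000, // assumed 1-3 minute range
  };
  // the computer announces its choices to the performer, e.g. as a title card
  console.log(`Next: ${piece.visuals}, ${Math.round(piece.durationMs / 1000)}s`);
  return piece;
}

function onPerformerStarts(piece) {
  // the countdown only begins once the performer starts playing
  setTimeout(() => {
    console.log('Time is up: stop all sounds, disable performer input.');
    newPiece(); // a new piece is generated; the next countdown awaits the performer
  }, piece.durationMs);
}

onPerformerStarts(newPiece());
```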

Style

Images and sound are designed to carry no symbolic meaning, allowing the viewer to focus on the interaction between the two players.

Visuals are made of basic geometries, keeping the visual stimuli contained while still allowing the algorithm to create interesting compositions.

The use of two high-contrast tints is a reference to the two players involved: opposite to one another, yet contributing to one objective.

Grid system

Visuals are arranged based on a basic grid system of rows and columns that define the layout — a common approach in graphic design.

For each new piece, the computer generates a series of grids where the number of cells defines the maximum number of elements that can be placed.

The grid’s cells also define the initial location of elements, their maximum size and their roaming bounds.

Each piece is made of four layers, each with its own grid, which are then stacked on top of each other to create the final composition.
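A minimal sketch of how such layered grids could be generated; the canvas size, cell-count ranges and field names here are my assumptions, not the project’s actual code:

```javascript
// Hypothetical sketch of the grid generation described above.
function makeGrid(cols, rows, width, height) {
  const cellW = width / cols;
  const cellH = height / rows;
  const cells = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      cells.push({
        x: c * cellW, y: r * cellH, // initial location of an element
        maxW: cellW, maxH: cellH,   // maximum element size
        // roaming bounds: the element may wander inside its own cell
        bounds: { left: c * cellW, top: r * cellH,
                  right: (c + 1) * cellW, bottom: (r + 1) * cellH },
      });
    }
  }
  return cells; // cells.length = max number of elements on this layer
}

// four layers, each with its own grid, stacked to form the composition
const layers = Array.from({ length: 4 }, () =>
  makeGrid(1 + Math.floor(Math.random() * 6),  // 1-6 columns (assumed range)
           1 + Math.floor(Math.random() * 6),  // 1-6 rows (assumed range)
           1920, 1080));
console.log(layers.map(g => g.length)); // max elements per layer
```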

Parametric approach

With parametric visuals, the grid on which shapes are arranged can be projected on virtually any aspect ratio.

The idea came to me after attending a panel at Ableton’s Loop 2016, where Alfred Darlington, aka Daedelus, shared how he tries to make every live show different from the last, repeating himself as little as possible; more importantly, he visits the venue before the show and calibrates the set to properly fit the place.

With AV0, whether the projection screen is 3:4, 16:10, 1:1 or 10:1, the visuals can be calibrated to fit the area. Grids are then generated accordingly, affecting how shapes are generated and positioned and adding to the uniqueness of the performance being executed.
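One way to make that calibration concrete (this is my reconstruction, not the project’s code) is to keep cells in normalized 0..1 coordinates and project them onto whatever surface the venue offers:

```javascript
// Normalized cells projected onto the venue's actual surface dimensions.
function project(cell, surfaceW, surfaceH) {
  return {
    x: cell.x * surfaceW, y: cell.y * surfaceH,
    w: cell.w * surfaceW, h: cell.h * surfaceH,
  };
}

const cell = { x: 0.25, y: 0.5, w: 0.25, h: 0.5 }; // one normalized grid cell
console.log(project(cell, 1600, 1000)); // 16:10 projector
console.log(project(cell, 1080, 1080)); // 1:1 LED wall
console.log(project(cell, 3000, 300));  // an extreme 10:1 strip
```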


Thank you for visiting. For bookings and inquiries use the form below specifying venue location and dates.




Sound Matters

Sound Matters is a series of digital objects molded by sampled ambient sounds, offering a look into the complexity of the omnipresent events that surround us.

 

The artwork is an exploration triggered by the desire to rediscover and repurpose sounds we take for granted and often disregard, converting them into images that aim to trigger the same feeling of rediscovery in others. Each piece starts from ambient sound we sampled on location. We then process each recording to extrapolate its abundance of detail, turning thin air into an element that did not exist before, yet one entirely influenced by something we are constantly immersed in.

Sound Matters is about contemplating the richness of ambient sound. A precious element, for you to explore with your eyes.


The Process

Once the sampling of the sound is done, the recording is processed with a Max/MSP patch that generates square images from the level values of the audio samples. The resulting images are then brought into Cinema 4D as a procedural texture for the base stone 3D model.

All recordings are done by the artists with a Zoom H4N at 44,100 Hz and cut to a 25-second length. Samples are then processed with a Max/MSP patch to create square texture images for Cinema 4D.

The patch collects the absolute value of the waveform amplitude for any given sample and multiplies it by a constant to obtain the output color, which ranges from black if the amplitude is 0 to white if it is 1.

All generated gray levels are then printed in square cells arranged in vertical lines, going from top to bottom and proceeding towards the right, creating a final texture image consisting of 1,102,500 values.
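As a hedged illustration of that mapping, here is a minimal Node.js sketch (the original is a Max/MSP patch; the random stand-in samples, the scale constant 255 and the PGM output are all assumptions). Note that 25 s × 44,100 Hz = 1,102,500 samples, exactly a 1050 × 1050 square:

```javascript
// Node.js reconstruction of the described mapping (not the original patch).
const fs = require('fs');

const SIDE = 1050; // 25 s x 44,100 Hz = 1,102,500 samples = 1050 x 1050
// stand-in for the field recording: random samples in -1..1
const samples = Float32Array.from({ length: SIDE * SIDE },
                                  () => Math.random() * 2 - 1);

const pixels = Buffer.alloc(SIDE * SIDE);
samples.forEach((amp, i) => {
  const col = Math.floor(i / SIDE); // vertical lines, top to bottom,
  const row = i % SIDE;             // proceeding towards the right
  // |amplitude| x constant: 0 -> black, 1 -> white
  pixels[row * SIDE + col] = Math.min(255, Math.round(Math.abs(amp) * 255));
});

// write a binary PGM, a minimal grayscale format usable as a texture
fs.writeFileSync('texture.pgm',
  Buffer.concat([Buffer.from(`P5\n${SIDE} ${SIDE}\n255\n`), pixels]));
```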

In Cinema 4D the Max/MSP output is applied as a procedural texture over a globular model. The cells are used as a height map to displace the actual geometric position of points over the textured surface. The primary mesh is physically displaced by the waveform amplitude, and the highest values are adopted as a falloff map and marked by a different shader.

 

Why

Sound Matters is a tribute to sounds: those that are lost, the ones we pay no attention to, and the ones we simply ignore.

The project presents a few moments in time stored in a physical shape, creating a medium made of time and space, which offers an alternative way to travel back to those sounds. This conversion, from something intangible into a shape with strong physical qualities and presence, aims to bring awareness to the richness of common ambient sound.

Each piece offers a unique visualization. These stones are a landscape that you can explore with your eyes. They offer a unique look inside the sound captured in that place and yet, somehow they themselves are the place. Sampled sounds presented as samples of matter.

 

The lost sounds

We are constantly surrounded by sounds. An environmental surround sound you might say.

The sound of Cannon beach — 45.886162, -123.966319 14:35 18°C

The sound of a departing freight train — 47.616291, -122.357194 11:40 22.8°C

There are sounds we hear, say when a loved one is talking to us, while other sounds, like the noise of traffic, we barely notice. And truly, most sounds pass by us and through us completely unnoticed by our brain.

The sound of water by the piers — 47.616555, -122.357078 11:41 22.8°C

The sound of birds in the park — 47.631230, -122.315825 14:35 22.8°C

However, our ears hear it all. It is our brain that selects what is relevant. There are many practical reasons for this, rooted in our survival instinct; but on a physical level, our eyes play a significant role in the filtering process. We rely a lot on our vision. Our eyes always keep us focused on the subject of what we are doing, and we therefore pay little or no attention to any environmental sound that has no direct impact on our current activity.

The sound of the ocean shore — 13.194496, -59.641184 18:44 29°C

The sound of the neighborhood — 40.776229, -73.977502 19:36 11°C

But if we record something and listen to it with our eyes closed, it is like being teleported to the place where the sound came from. Our ears suggest to us a world to be imagined, and in many instances, elements we didn’t notice when recording introduce themselves for the first time.

The sound of a subway ride — 40.757172, -73.989733 8:51 6.7°C

The sound of a hidden alley — 40.718334, -74.002112 18:53 22°C


Credits

This project was made with Marino Capitanio for Super Symmetry

 


Sonic Views

Sonic Views are images generated by ambient sounds. Each value of the waveform’s amplitude is assigned a color ranging from black to white. Each color is then applied to a square cell in a matrix, creating a picture where the ambient recording can be explored at its most granular level, but where the sound is left to the spectator to imagine.

C Line Subway Driver Announcing Next Stop at 42nd Street on September 7, 2015 at 8:51am

The output is meant to expose the viewer to the density and complexity of the acoustic events constantly surrounding us. While we hear a multitude of sounds, we naturally filter them, focusing on what is most relevant to us at a given moment, and our eyes play a powerful role in this, making us deaf to things that don’t matter. At a survival level, a car honking at us while we cross the street will temporarily pull our attention away from the friend talking to us, but in a general sense our brain decides when and what to listen to. These images offer the eyes every detail of a recorded sound. With nothing to hear, but so much to look at, our ears have a moment of revenge on our brain.

PROCESS

Images are generated with a Max/MSP patch, which collects the absolute value of the waveform’s amplitude for any given sample and multiplies it by a constant to obtain the output color.

A waveform’s amplitude values range between a minimum of -1 and a maximum of 1. The illustration above shows the output color for any given value: 0 gives black, 1 gives white, and any value in between gives a range from dark to light gray. Values are converted to their absolute value, hence -1 will still return white, and so on for the other values. Colors are printed in square cells arranged in vertical lines, going from top to bottom and proceeding towards the right.

For explanation purposes the image above shows only 100 cells; generated from an actual recording, that would cover only a fraction of a second of sound. For the New York Series all recordings were chosen to be 16 seconds long, and captured at a sample rate of 44,100 Hz. That means each image has a total of 16 × 44,100 = 705,600 cells.
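The mapping and the arithmetic above can be checked with a tiny sketch (the constant 255 is an assumed scale factor; the actual patch is Max/MSP):

```javascript
// The described amplitude-to-gray mapping, as a standalone function.
function amplitudeToGray(amp) {
  return Math.round(Math.abs(amp) * 255); // 0 -> 0 (black), ±1 -> 255 (white)
}

console.log(amplitudeToGray(0));    // 0   : black
console.log(amplitudeToGray(-0.5)); // 128 : mid gray (the sign is discarded)
console.log(amplitudeToGray(1));    // 255 : white

// 16 s at 44,100 Hz fills a perfect square:
console.log(Math.sqrt(16 * 44100)); // 840, i.e. an 840 x 840 cell image
```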

FORM

The recordings’ length was chosen following two criteria: it had to be long enough to provide, if listened to, a good sense of where the recording comes from, and it had to contain a number of samples that would allow generating a square image.

This last point required the total number of samples to be a perfect square, in order to create an equal number of columns and rows without losing any data in the printing process.
An audio recording is often referred to as a sample, after the digital process of sampling, which consists of capturing a series of values from a given source at a specific rate. A square was a natural choice to offer an agnostic and contained layout: by filling the square canvas with square cells, the outer shape constantly refers to the content that makes it, as a reminder of the nature of the image itself.
The following images offer a closer look at one of the pictures.

Details of the image at 200%, 400%, 800% and 3200% magnification.


NEW YORK SERIES

The New York Series is a collection generated from six recordings presented as a day in the life of the city. The images capture six moments of a day in New York, such as taking the subway, being at the office, going to meet a friend, and the last moments before bed.

The office in the morning on July 7, 2016 at 8:51am

Brushing teeth before bed at 27 W 71st Street on July 7, 2016 at 11:29pm

Watching the storm from the living room window at 27 W 71st Street on July 1, 2016 at 10:10pm

Walking from Canal and Broadway to Lafayette St to meet James on March 10, 2016 at 6:53pm

New Yorkers leaving the city for the weekend on July 1, 2016 at 4:21pm

C Line Subway Driver Announcing Next Stop at 42nd Street on September 7, 2015 at 8:51am


Thank you for reading about this project. If you have questions, want to buy or feature these works in your show, drop me a line using the form below.



AV0

An audio visual project I started in September 2016, with the objective of exploring on-stage collaboration between performer and computer. The two work as a duo: the former in charge of sound, the latter responsible for deciding how the visuals behave, thus affecting the way the performer plays.

FOLLOW THE PROJECT AS IT TAKES SHAPE

Since inception day I’ve been posting logs of my progress via Instagram stories.
The videos below collect all my updates, divided by month. You can see my rather casual beginning, my discoveries and achievements, solid plans changing, my failures, and most importantly how much fun it has been. I hope this can serve as inspiration, and perhaps a resource, for anyone who wants to start a project in this field.

You are welcome to follow my future logs on my Instagram page, enjoy.

DECEMBER MILESTONES

  • Friday 16: Testing it all together. I think I’m on the right path.
  • Tuesday 13: Introducing bounds for roaming objects.
  • Wednesday 7: Started to implement a grid system to arrange shapes with some degree of systematic harmony.
  • Tuesday 6: Introducing titles at the beginning of each piece to give the performer a clue on what comes next.
  • Monday 5: Battling with the chicken and egg problem of what to do first: designing sound or visuals?

NOVEMBER MILESTONES

  • Wednesday 23: Introduced timer functionality to shut off the whole thing.
  • Tuesday 22: Second iteration of the framework is ready to process incoming events. Most of the work will be done in JavaScript at this point.
  • Wednesday 16: The seed idea is validated. I’ll proceed in this direction.
  • Monday 7: After attending Ableton Loop 2016, I started to entertain the idea of a performance based on improvisation, collaboration with the computer, and proactive visuals.

OCTOBER MILESTONES

  • Monday 17: First framework model based on clips and live automations starts to take shape.
  • Thursday 6: Started to create a personal reference library of sketches in Max/MSP and JavaScript on GitHub.

SEPTEMBER MILESTONES

  • Wednesday 28: Subject and storyline have been defined.
  • Saturday 17: Decided to start working on an audiovisual performance.

Follow my daily logs on my Instagram story.


Thank you for reading about this project. For bookings and inquiries please use the form below.



OTOH

OTOH is a plug-and-play hardware interface for manual beat slicing of audio samples.

The technique of slicing a beat into small pieces and then rearranging them to create new beats has shaped the music industry since the ’70s. New technologies have made it easier and more affordable, but at the time this project started (2008) the industry had never explored interfaces beyond a grid of buttons or computer software.

Moreover, a certain degree of setup and editing was always necessary, which almost sounds (pun intended) contradictory with the analog qualities these processed beats often carry. There was no plug and play, but rather plug, sit down, edit, program, have-the-machine-play, especially in the case of drum and bass.

I wanted to create a plug-and-play instrument that would free the producer from programming a machine and let them manipulate the sample with their hands.

INTERFACE

In designing the hardware I wanted to give a shape to the matter it would deal with. Thus I designed a way to represent the audio sample and used it as a foundation for the shape of the instrument itself.

To unlock the process I approached the problem in a basic way, defining the most obvious properties of a sample, such as beginning, end, length and frequency, and sketching different visual representations for each of them.
The goal for the interface design was to create an instrument that, when played, would feel as close as possible to touching and manipulating the actual audio sample. Informed by a series of user tests done with a basic working prototype, I decided to design a product whose shape would start from a representation of the matter being manipulated: the audio sample. Fast-forwarding the process a bit, here is how I arrived at the basic circular representation of the sample.

If we take a drumbeat’s waveform, it may very much look something like this:

I then divided the sample into 64 parts and assigned each a value from 0 to 4, according to the highest value of the waveform in each region.

What I ended up with were, most of the time, symmetric visualizations, which led me to keep only the positive values. The result was a low-resolution representation of the waveform, enough to give the performer a visual clue about which piece was being played.
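As a hypothetical sketch (the original logic lived in Max/MSP; the function below is my own reconstruction), the quantization could look like this:

```javascript
// Split the sample into 64 regions and give each a 0-4 value based on
// its peak amplitude (positive values only, as described above).
function quantizeWaveform(samples, regions = 64, levels = 4) {
  const size = Math.ceil(samples.length / regions);
  const out = [];
  for (let r = 0; r < regions; r++) {
    let peak = 0;
    const end = Math.min((r + 1) * size, samples.length);
    for (let i = r * size; i < end; i++) {
      peak = Math.max(peak, Math.abs(samples[i]));
    }
    out.push(Math.round(peak * levels)); // 0..4
  }
  return out;
}

// stand-in drumbeat: eight decaying noise bursts
const beat = Float32Array.from({ length: 64000 },
  (_, i) => (Math.random() * 2 - 1) * Math.exp(-(i % 8000) / 2000));
console.log(quantizeWaveform(beat).join(' '));
```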

The decision to give it a circular shape was mostly driven by the progress bar’s behavior when looping a part that starts towards the end of the sample and ends at its beginning. The image below shows how, in a horizontal representation, the looped region causes the progress bar to jump between the end and the beginning of the sample. On the right, the circular shape allows looping the same region without the disturbing jump.
The 90-degree corner was introduced to make room for sliders and other controllers, as well as to indicate where the beginning point of the sample is.
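A toy illustration of why the ring removes the jump: stepping through a loop that crosses the sample’s start, the 64-step position wraps via modulo, and on a circle the corresponding angle is continuous, since 360° coincides with 0°:

```javascript
// A looped region running from step 60 across the sample's start.
const STEPS = 64;
for (let i = 0; i < 8; i++) {
  const pos = (60 + i) % STEPS;      // 60 61 62 63 0 1 2 3
  const angle = (pos / STEPS) * 360; // degrees around the ring: no visual jump
  console.log(pos, angle.toFixed(1));
}
```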

ROLE

The design and development of OTOH included a wide range of activities, many of which were well beyond my skill set as a designer. These included:

  • Designed and implemented software in Max/MSP
  • Designed and manufactured the printed circuit board
  • Designed the pads’ keyboard
  • Designed the pads’ molds and manually molded the components


CREDITS

Giovanni Cappellotto for rewriting the firmware, Francesco Fraioli for the great help during and after the thesis, and Vilson Vieira for the support during the development.


Thank you for reading about this project. If you have questions, want to collaborate on a project, or just want to connect, drop me a line using the form below.