Category: Sound Objects

Art composed of sounding physical objects

Three-Sense Windows

interactive electroacoustic objects
 
steel plate, acrylic, photographic transfer, touch-sensitive electronics and audio
 
12" x 12" (each)

Denver, CO, 2015

CONCEPTUAL NOTE:

"A window is the moment when the bells ring through."  (after Hölderlin)

TECHNICAL NOTE:

A series of four objects that confront the viewer through three modalities of experience: visual, tactile, and aural. Each object consists of a steel plate layered with acrylic paint over a photographic transfer. When a viewer touches an object, by placing a hand on the steel plate, the object becomes an aural window/speaker, allowing the listener to hear pre-composed audio. The audio is different for each panel; each continuously loops its own ten-minute soundtrack.
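
The touch sensing and playback live in dedicated electronics whose platform is not specified above. Purely as a behavioral sketch, the following SuperCollider fragment models one panel, assuming the touch sensor's state is relayed as an OSC message (the address /panel1/touch and the file name panel1.aiff are hypothetical):

    // Behavioral sketch of one panel (not the actual embedded electronics):
    // a looping ten-minute soundtrack whose output is opened by touch.
    s.waitForBoot({
        ~buf = Buffer.read(s, "panel1.aiff");   // the panel's pre-composed audio

        ~panel = SynthDef(\panelLoop, { |out = 0, bufnum, gate = 0|
            var sig = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), loop: 1);
            // touching the plate opens the "aural window"; releasing fades it shut
            Out.ar(out, sig * Lag.kr(gate, 0.2));
        }).play(s, [\bufnum, ~buf]);

        // a touch reading relayed from the panel's electronics toggles the gate
        OSCdef(\touch, { |msg| ~panel.set(\gate, msg[1]) }, '/panel1/touch');
    });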

No-Place, Inconsistently

built object-network and generative sound installation
(at least) 12' x 4' x 3'
Boulder, CO, 2015
Cain Czopek, Photography & Image Transfers

This multi-object installation, involving visual, physical, and sonic materials, treats the representation of place like a slot machine, where image, material, and sound slide against each other. Each piece consists of a photographic print transferred onto the surface of an unrelated physical object. The objects themselves further serve as resonating surfaces: as loudspeakers, they give voice to soundscape recordings. The sounds eventually move from one object to another, following algorithmic spatial trajectories. While all photographs, materials, and sounds were taken from specific sites along the Front Range, the result of the artists' reconstructions keeps any particular notion of place up in the air.
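
The routing algorithm itself is not documented above; the following SuperCollider sketch only illustrates the general idea of a soundscape recording drifting among four object-transducers on a randomized trajectory (the file name and timing values are assumptions, not the installed values):

    // Sketch only: a soundscape recording wanders among four object-speakers
    // (hardware outputs 0-3) along a slowly drifting, randomized trajectory.
    s.waitForBoot({
        ~scape = Buffer.read(s, "frontRange_soundscape.aiff");

        SynthDef(\wander, { |bufnum|
            var src = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), loop: 1);
            // every ~6 seconds choose a new target object, then glide toward it,
            // so the sound slides from one resonating surface to the next
            var target = LFNoise0.kr(1/6).range(0, 3).round;
            var pos = Lag.kr(target, 4);
            Out.ar(0, PanAz.ar(4, src, pos / 2, orientation: 0));
        }).play(s, [\bufnum, ~scape]);
    });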

Mildly Sympathetic Conversationalist

interactive electroacoustic object with realtime music notation
3.5' x 4' x 2'

Ormond Beach, FL, 2013

Technical Note: 

Microphones attached to the top of the guitar stand provide audio input to a generative sound program running in SuperCollider. This program determines a range of sound synthesis parameters by analyzing the audio input, using that data to update the trajectories of (otherwise) independent algorithmic processes, and then triggering sound generation and output. The sound generated by the computer is reinforced using two tactile transducers (HiWave HIAX25C-8/HS 8-ohm exciters) mounted to the soundboard of the acoustic guitar. In this way, the guitar itself serves as the resonating body for all electronic sounds. The pitches played by the guitar are also notated on a computer monitor adjacent to the guitar. This is accomplished by sending OSC messages from the sound synthesis program (SuperCollider) to a notation application, written in Processing, that draws the notes in realtime as they occur.
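
The installation's actual program is more elaborate than can be shown here; the following is a minimal SuperCollider sketch of the signal path just described, assuming the microphones arrive on the first hardware input, the exciters sit on the first two outputs, and the Processing sketch listens for a hypothetical /note message on port 12000:

    // Minimal sketch of the described signal path (not the installed program):
    // analyze the guitar's pitch and amplitude, let that data nudge a simple
    // wandering process, resonate the result through the soundboard exciters,
    // and forward each generated pitch to the notation program over OSC.
    s.waitForBoot({
        ~notation = NetAddr("127.0.0.1", 12000);          // Processing sketch; port assumed

        SynthDef(\converse, { |out = 0|
            var in = SoundIn.ar(0);                       // stand-mounted microphones
            var amp = Amplitude.kr(in);
            var freq = Pitch.kr(in)[0];
            // an (otherwise) independent process, pulled on by the analyzed pitch
            var drift = LFNoise1.kr(0.2).range(-7, 7).round;
            var note = freq.cpsmidi.round + drift;
            var trig = amp > 0.05;                        // new event on each attack
            var sig = SinOsc.ar(note.midicps) * EnvGen.kr(Env.perc(0.01, 1), trig) * amp;
            SendReply.kr(trig, '/noteOn', note);          // report the generated pitch
            Out.ar(out, sig ! 2);                         // two soundboard exciters
        }).play;

        // relay each generated pitch to the realtime notation sketch (Processing)
        OSCdef(\relay, { |msg| ~notation.sendMsg('/note', msg[3]) }, '/noteOn');
    });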

Conceptual Note: 

The guitar's status as a whole, or fully constituted, art object is undermined by its own relation to the context of presentation and modes of viewer/listener access. The guitar is not completely anything; it is not a physical art object, nor a piece of music in and of itself. It is not solely an interactive electroacoustic toy, nor is it a device for musical transcription, and so on. Any one functional determination regarding its being is revealed to be unavoidably incomplete. The work is titled, and a (purposefully vague) instruction also appears on the gallery pedestal: "Touching Allowed." It is presented as necessarily being in relation to Art given its gallery setting, but the work undermines that very same necessity by presenting an ontologically fractured, nonobjective art. It is not 'really' for our visual consideration, nor is it 'really' a piece of concert music or an instrument to perform upon; it is nothing but our ability to encounter its materiality differently.

Only if you’re there (I’ll meet you there)

discoverable, immersive sound installation
site-specific (~600 square feet)

Hopkins Center for the Arts, Dartmouth College, Hanover, NH, 2007

CONCEPTUAL NOTE:

When we pass someone while walking down a hallway, we incur fleeting interactions full of complex social evaluation: assessments of personality, sexuality, and socio-economic status. These off-the-cuff evaluations are unfounded yet automatic, as subjective impressions inform our quick construction of the life of a stranger.

Only if you're there (I'll meet you there) attempts to separate such biased and confounding evaluations from the simple act of a chance encounter, of two travelers passing each other along an interior pathway. Is it possible for such a simple event to be devoid of social (mis)evaluation and schematic judgment? By creating a musical encounter between two spatially separated individuals, this installation explores these distinctions. The musical system constructs an indirect encounter, occurring within a disembodied aural space. It exists along the pathway, yet is not confined to it, and is perhaps not even salient to its subjects.

As a single pedestrian interacts with one of the work's two sonar sensor systems, her distance from the sensor array controls the amplitude of ambient inharmonic sound. The interaction of only one pedestrian with the installation enables the musical system to become discoverable. Ultimately, though, this interaction serves as a baseline, facilitating perceptual comparison with the musical effect of simultaneous interaction at both spatially separated sonar sensor sites. When two individuals walk through the two sensor sites simultaneously, the system crossfades the ambient inharmonic sound into clear harmonic tones. The frequencies and amplitudes of these tones are individually controlled by each person's changing distance from the given sensor array.

The design of this musical installation maintains the normalcy, nuance, and chance that define each passing encounter of pedestrian traffic. The physical discontinuity between the two simultaneous points of audience interaction with the musical system allows a virtual, aural space for interaction to emerge. Here, spatially separated individuals interact through a musical intermediary. This human encounter is devoid of social evaluation; there are no fragments of conversation to take into account, no fashion trends to critique, and no body language to interpret. The installation confronts an individual's ability to search for, parse, and even project the humanity existing within the sound. The audience feels the music by experiencing the subtle interaction of sound and space. Once discovered, the installation's sound, like fragments of speech or a person's clothing, invites evaluation and attribution to a particular social dynamic: a composer's intervention, the movements of two pedestrians passing one another, or even one's own projection of musicality onto the aural space.

TECHNICAL NOTE:

Hardware Components:

• 2 AVRLinx V1.1 AVR/Radio Boards (Procyon Engineering)

• 6 MaxBotix LV-MaxSonar-EZ1 ultrasound range finders

• 1 ENC28J60-H 10Mbit Ethernet-Interface breakout board

Sonar Arrays:

Three MaxBotix LV-MaxSonar-EZ1 ultrasound range finders are used for each array. For each array, the sensors' readings are sampled by an ATmega32 AVR microcontroller with a CPU crystal running at 14.745 MHz. The microcontroller chips are integrated into AVRLinx radio boards (developed by Pascal Stang). Radio Board A is programmed to sample its three ultrasound sensors and then transmit the readings wirelessly, via radio frequency, to Radio Board B. Radio Board B is programmed to sample its three ultrasound sensors and integrate those readings with Radio Board A's wirelessly received readings. Radio Board B then sends all of the data to a Max/MSP patch running on a laptop computer, via Open Sound Control messages sent through an ENC28J60-H Ethernet breakout board.

Computer Processing:

A Max/MSP patch serves as the control interface for the sonar sensors. Sensor data is mapped within the patch to control the amplitude of inharmonic sound. The inharmonic sound is derived from processed sound samples; these samples were processed ahead of time using spectral filtering and delay software that I had previously developed. Inharmonic sound is output when only one of the sensor arrays shows fluctuations in its distance readings, indicating movement. When movement is detected simultaneously at both sensor arrays, the program crossfades the sound output from the processed, inharmonic samples to harmonic triangle waves, and the sensor data then controls the amplitude of the triangle waves. The frequencies of the tones for each sensor array are stacked Pythagorean fifths, with arrays A and B separated in frequency by a Pythagorean fourth (a simplified sketch of this mapping follows the frequency list).

• Sensor Array A: 466.67 Hz, 700 Hz, and 1050 Hz.

• Sensor Array B: 350 Hz, 525 Hz, and 787.5 Hz.
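
The installed patch was built in Max/MSP; purely as an illustration of the mapping, here is a SuperCollider sketch that collapses each array's three distance readings into a single 0-to-1 "movement" value arriving on hypothetical OSC addresses /arrayA and /arrayB (the file name and thresholds are likewise assumptions):

    // Illustration only (the installed patch was Max/MSP): single-site movement
    // drives inharmonic samples; movement at both sites crossfades to the triangle tones.
    s.waitForBoot({
        ~inharm = Buffer.read(s, "processed_inharmonic.aiff");  // hypothetical file
        ~a = Bus.control(s, 1);
        ~b = Bus.control(s, 1);

        SynthDef(\encounter, { |bufnum, abus, bbus|
            var a = Lag.kr(In.kr(abus), 0.5);
            var b = Lag.kr(In.kr(bbus), 0.5);
            var both = (a > 0.05) * (b > 0.05);       // movement at both sites?
            var xfade = Lag.kr(both, 2);              // inharmonic <-> harmonic
            var samples = PlayBuf.ar(1, bufnum, loop: 1) * (a + b);
            // stacked Pythagorean fifths; array B a Pythagorean fourth below array A
            var triA = Mix(LFTri.ar([466.67, 700, 1050], 0, a * 0.2));
            var triB = Mix(LFTri.ar([350, 525, 787.5], 0, b * 0.2));
            Out.ar(0, XFade2.ar(samples, triA + triB, xfade * 2 - 1) ! 2);
        }).play(s, [\bufnum, ~inharm, \abus, ~a, \bbus, ~b]);

        OSCdef(\arrA, { |msg| ~a.set(msg[1]) }, '/arrayA');
        OSCdef(\arrB, { |msg| ~b.set(msg[1]) }, '/arrayB');
    });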

Parabolic Loudspeakers:

Six parabolic loudspeakers are used to play the computer-generated sound, three per site. Speaker placement coincides with the direction in which each of the sonar sensors is pointed. The parabolic speakers were crafted by hand, using some prefabricated elements. Each speaker's parabolic reflector is 18 inches in diameter, and each speaker uses a 5-watt, 8-ohm driver. Both the speaker cabinets and the frames holding the parabolic reflectors are made of wood. The speakers are powered by 7-watt mono amplifiers, which I assembled.

Installation Space:

Only if you're there (I'll meet you there) should be installed along an interior pathway, such as at two ends of an interior hallway (approximately 50 yards apart). The ultrasound sensor arrays should be positioned such that each individual sensor can detect movement from a different direction.

I think I know you (you think I don’t)

discoverable, immersive sound installation
8-channel Parabolic Speaker Array
 
site-specific (~600 square feet)
 
Hood Museum of Art, Dartmouth College, Hanover, NH, 2007

Using a custom-designed and constructed 8-channel parabolic speaker system, this piece grapples with the possibilities of soundscape composition placed within an already rich sonic environment. Instead of field recordings of environmental sounds, or even recordings of the environment where the composition was ultimately situated, the source material was derived from samples taken from sound-effects libraries. From these samples, a composition was made that focused on layering and juxtaposing sounds from each of the groupings in sonically rich ways, ways that might reinforce alignments and misalignments when heard in relation to the real sounds of the site. Using a Max/MSP patch, the composition’s 8 audio tracks were continuously redistributed among the eight speakers by randomly selecting from a predefined list of channel configurations. By diffusing the soundscape composition across the parabolic system on site, the piece became “discoverable” by listeners, and its sounds could be misidentified as being of the site.
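
The redistribution logic lived in the Max/MSP patch; as an illustration only, here is a SuperCollider sketch of the same scheme, with the file name, the list of configurations, and the 30-second interval all assumed:

    // Illustration of the re-distribution scheme (the original patch was Max/MSP):
    // eight composition tracks are periodically re-routed to the eight parabolic
    // speakers by choosing at random from a predefined list of channel configurations.
    s.waitForBoot({
        ~tracks = Buffer.readChannel(s, "soundscape_8ch.aiff", channels: (0..7));
        ~configs = [                      // each entry maps track index -> output channel
            [0, 1, 2, 3, 4, 5, 6, 7],
            [7, 6, 5, 4, 3, 2, 1, 0],
            [2, 3, 0, 1, 6, 7, 4, 5],
            [4, 5, 6, 7, 0, 1, 2, 3]
        ];

        SynthDef(\diffuse, { |bufnum, chans = #[0, 1, 2, 3, 4, 5, 6, 7]|
            var trks = PlayBuf.ar(8, bufnum, loop: 1);
            8.do { |i| Out.ar(chans[i], trks[i]) };   // route track i to its current speaker
        }).add;

        Routine({
            s.sync;                                   // wait for the buffer and def to land
            ~synth = Synth(\diffuse, [\bufnum, ~tracks]);
            loop { ~synth.setn(\chans, ~configs.choose); 30.wait };
        }).play;
    });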