For each day in the month of November, I am creating a new experimental audio-visual piece. I will be posting them along with an explanation of the methods I used to create them. I am aiming to use a variety of techniques and processes, creating these compositional “sketches” which may or may not come out looking finished, and which I may or may not feel satisfied with; I will publish them either way. This practice is in the spirit of “experimenting in public,” and engages a rapid-fire production style, where a piece is finished within hours of its conception and a new one is started immediately thereafter. I hope this will expose a different side of my practice than you might normally see, and I believe it will help me work on some of my habits of overanalysis and sitting on things for too long.


Created in collaboration with Andrei Jay. This video is yet another excerpt from our sessions with no-input feedback through a long chain of video mixers. In this video, multiple MX-50s are used to generate the oblique outlined squares and cubes. Their colors and shapes, and the scape in which they reside, are degraded along their infinite journey through the loop. An additional, unseen inverted channel of feedback guides their semi-random, flickering passage through the environment. Stochastic choreography and noise patterns emerge as they execute frantic procedures according to invisible rules. At times, near the end, as their numbers increase, they merge to form a single textural mass. The sound was recorded in a separate, earlier session, in which I was triggering notes in an arpeggiator and Andrei was performing modulations to the synth and a granular processor. I applied further processing to the recording during the editing process, resulting in a 20-minute sound piece, part of which was excerpted as the soundtrack for this piece.


This video is excerpted from the same session of experiments in collaboration with Andrei Jay. It is especially difficult to explain how this happened, as it was recorded hours into changing and tweaking settings across a number of mixers. The main structure of the video patch at this point was a no-input feedback loop passing through three MX-50s, an AVE-5, two V4s, two Simas, and two Videonics mixers. The red and blue liquid parts are formed by the feedback loop passing through relatively little processing — it is possible to see the instability caused by the extremely long chain at times. I believe that a processed version of the feed, including horizontal and vertical flips, was being faded into the loop by one of the Videonics, to create the tropical coral-reef formations that emerge. There was probably more occurring that I was not aware of; some shapes suggest a wipe pattern. During this part of the recording I was controlling the video and Andrei was playing the sound. I also applied some color processing to the video in Premiere.


This video is a continuation of the last one, Morphogenesis I, in collaboration with Andrei Jay. It uses the same patch described previously. The camcorder has zoomed in, bringing us inside the emergent structures. By manually shifting the clip level of one of the keys, I was able to transform these structures from lines to forms that appear to have volume, then back again. At the very end, the lens zooms back out, bringing us back to a skeletal version of the shape that appears in the last piece.


This video is also the result of my collaborative experiments with Andrei Jay. It is excerpted from the same session with all of my video mixers involved at once. At this point, a camcorder became part of the loop, and the signal path branched off at another point in the loop. Two different-speed “strobe” effects were applied before they re-merged through luma-keying. The patch also made use of the “luma invert” function on the Videonics MX-pro mixer, as well as a horizontal flip, which created the quasi-symmetry. The keying edges, plus the “downstream key” effect from one of the MX-50s, created the shimmering contours around the bright areas. The forms emerging from the top left and right corners were from on-screen text from the camcorder, which I cropped out in Premiere. These “seeds” caused the feedback to take on an unusual shape, and the contoured edges, combined with a defocused lens, caused the shape to chaotically morph and form strange cavities and hollows. The sound was created from another music jam session with Andrei, which I then applied some processing to in Ableton.


This video is an excerpt from some collaborative experiments by me and my friend Andrei Jay during his recent visit to my studio. Our explorations began with a goal to plug in a single no-input feedback loop using all ten of the video mixers I currently own. Because the setup we were playing with quickly grew more complex than either of us could keep track of, it is difficult to give a specific explanation of what is occurring here. The squares and oblique shadows are definitely created using the “mosaic” and “downstream key” effects from two MX-50 video mixers. The greenish background is some sort of no-input feedback pattern being generated by all of the mixers combined (which is why it appears to be moving in multiple directions at once). As the piece progresses, parameters are changed which affect the colors, patterns, and behaviors. This is an excerpt from a much longer recording which passed through many phases and modes. At this point in the recording, I was influencing the video, while Andrei influenced the sound, which was created using a software synth and granular filter. I decided to title this excerpt “metasomatism” after the name of a process in which the composition of rocks is altered by fluids.


This experiment emerged from some thoughts I had about reaction-diffusion systems, particularly a comment from Joost Rekveld which mentioned how Turing patterns can be generated using an iterative blur-sharpen process. I decided to try setting up a feedback loop that passed through two pairs of cameras and LCD screens — one set to blur the image by defocusing the lens, and another with the maximum possible sharpness in the focus and camera settings. The resulting video feed did not look like a Turing pattern, but it definitely formed some sort of intriguingly goopy reaction-diffusion. I recorded the output by re-scanning one of the LCD screens with an HD camcorder. I paired it with a processed field recording as the sound component. The original sound was captured from a set of pentatonic windchimes on Leslie Rollins’s porch in Michigan. I then sent the recording through a series of processes using a set of plugins called GRM Tools.
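For readers curious about the blur-sharpen idea, here is a minimal software sketch of the iterative process, not the analog camera/LCD setup itself. It assumes NumPy; a simple box blur stands in for the defocused lens, an unsharp mask stands in for the over-sharpened camera, and clipping stands in for screen/camera saturation. The function names and the `amount` parameter are my own illustrative choices.

```python
import numpy as np

def box_blur(img):
    # average each pixel with its four neighbors (periodic boundary)
    return (img
            + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

def sharpen(img, amount=2.5):
    # unsharp mask: push the image away from its own blurred copy
    return img + amount * (img - box_blur(img))

rng = np.random.default_rng(0)
field = rng.random((64, 64))  # start from noise

for _ in range(50):
    field = box_blur(field)              # diffusion step (defocused lens)
    field = sharpen(field)               # local amplification (camera sharpening)
    field = np.clip(field, 0.0, 1.0)     # screens and cameras saturate, too
```

After enough iterations the field tends to lock into high-contrast patches at a characteristic scale, which is the Turing-pattern-like behavior the blur-sharpen loop is known for in software, even if the analog version here came out goopier.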


This piece was created using the same patch as the last one, but this time, the internal feedback on the MX-50 was inverted (or “negative”). As the camera feedback undulated, I scrubbed the downstream key back and forth as I recorded. In Premiere, I doubled the speed of the footage with “frame sampling” interpolation, so that it dropped every other frame, making the negative feedback less strobey and seizure-inducing. Sounds created with the same modular synth VST.


This piece was created using a camera feedback loop and the internal feedback and functions of an MX-50 video mixer.

The camera loop ran between the same color security camera as my last experiment, and a 7” LCD monitor designed as a backup screen for cars. The signal passed through a V4 video mixer, which then sent it onwards to the MX-50. The MX-50 was displaying the internal feedback loop as its output, and the downstream key was on, set to the camera feedback as its source. As the camera feedback ebbed and flowed, the downstream key picked up its contours, and the internal feedback caused them to echo and color shift as they faded away into the background. In Premiere, I reversed the speed so the forms are emerging, rather than receding. I created the sound using a modular synth VST in Ableton.


I created this piece using a combination of camera feedback and vector synthesis footage that I recorded previously. I used a black-and-white security camera pointed at one of my computer monitors to re-scan the vector footage and send it into my V4 video mixer. I used an LCD monitor and a color security camera to create an optical feedback loop, which I keyed under the vector footage, sending the combined output back into the loop. This is a very simple setup, but the properties of the cameras I used have very distinct effects on the quality of the image and the feedback. The black-and-white camera, which is somewhat broken, is extremely grainy, which made the keying edge very rough and artifacted (not something I was wild about, but part of the point of this project is to let go of these sorts of details in the greater interest of experimenting). The color security camera is also somewhat broken: when I found it, the cord that powers a focusing element within the lens had been snipped, so the only way to control the focus now is by partially unscrewing the lens. As a result, the focus is very soft, and some of the bokeh seems to create color effects that are amplified by the feedback. It also has an auto-exposure function controlled by a tiny screw-pot, which I exploited to create a self-correcting system that cycles through different colors on its own. The sounds were, again, created by virtual instruments, with automations modulating their parameters over time. I had a lot of fun making this song, and found that a lot more time had passed than I realized, which left me less time to create the video portion. But again, such is the nature of this project!


For this piece, I used two MX-50 video mixers patched together in a “figure-8 feedback loop” — each with one output plugged into the other, and another output plugged into one of its own inputs. (This is a patch we often use in Fugitive Dream Recovery.) The squares are created by “mosaic” mode on the first mixer, with “negative” and “strobe” mode in use as well, and “downstream key” creating the solid white shapes. The second mixer combined the output from the first with its own internal feedback, as well as the “downstream key” effect. While making the recording, I slowly changed the “mosaic” size. The goal was to create a self-propagating, cellular automata-like system. The audio was created using virtual instruments and drum machines. In Premiere, I added edits so that the video channel inverts during a specific part of the song and then reverses its speed.
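For anyone unfamiliar with these mixer functions, here is a rough software analogy, assuming NumPy and treating a frame as a grayscale array. The `mosaic` function emulates the block-pixelation effect, and `downstream_key` emulates a key that paints solid white wherever the key source is brighter than a clip level. The names, block size, and threshold are illustrative choices of mine, not the MX-50's actual parameters.

```python
import numpy as np

def mosaic(img, block=8):
    # pixelate: average each block x block tile, then tile the averages
    # back out to full resolution (assumes dimensions divisible by block)
    h, w = img.shape
    tiles = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.kron(tiles, np.ones((block, block)))

def downstream_key(img, key_source, clip=0.7):
    # paint solid white wherever the key source exceeds the clip level
    out = img.copy()
    out[key_source > clip] = 1.0
    return out

rng = np.random.default_rng(1)
frame = rng.random((64, 64))

frame = mosaic(frame, block=8)
frame = downstream_key(frame, frame, clip=0.7)  # key the frame over itself
```

In the hardware version, feeding this kind of output back through a second mixer (and back into the first) is what lets the keyed squares persist and propagate from frame to frame, cellular-automata-style.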


This piece was created using a camera feedback loop running through two video mixers (see below for diagram). The camera first passed through the MX-50a, which was doing a simple, subtle colorization. The feed then passed into the Edirol V4, which black-luma-keyed the feed over a negative version of itself. This caused the black areas of the image to become light again, creating an increasingly turbulent perturbation of the feedback, depending on the key clip level. (I call this “psychokinesis mode,” because it is the characteristic effect from my Psychokinesis series; this is a chromatic version of the same patch, hence the title “Psychochromatic.”) The output then passed into the LCD screen at which the camera was pointed, completing the loop.
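The black-luma-key-over-negative trick can be approximated in software. This is a hedged NumPy sketch, not the actual analog signal path: a neighbor-averaging blur stands in for the camera/LCD rescan, and the key swaps in the negative of the frame wherever the frame falls below the clip level. The function names and the clip value are my own assumptions.

```python
import numpy as np

def rescan_blur(img):
    # soft camera/LCD rescan: average each pixel with its four neighbors
    return (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

def key_over_negative(frame, clip=0.3):
    # black luma key: wherever the frame is darker than the clip level,
    # let the negative of the frame show through instead
    return np.where(frame < clip, 1.0 - frame, frame)

rng = np.random.default_rng(2)
frame = rng.random((64, 64))

for _ in range(30):
    frame = key_over_negative(rescan_blur(frame), clip=0.3)
```

The key property matches the description above: any region that decays toward black gets flipped bright again, so the loop can never settle into darkness, and raising the clip level lets more of the negative through, making the perturbation more turbulent.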

During the course of this recording, I slowly increased the key clip on the V4, allowing more and more of the negative image to pass through. I also continuously moved the camera very slightly, and changed the focus as well. The sound was created using software VSTs — I built up the sound loops at the same time as I was working on the video patch, letting each one go on its own as I actively worked on the other.


This piece was created using camera/LCD screen feedback, interrupted by a Cokin Multi-Image filter which I held in my hand and rotated. The camera was deliberately out of focus, and I was moving it slightly with my other hand. I changed the focus slightly over the course of the recording, and at about 1:04 you can see it go into much sharper focus, creating a detailed pattern of fine lines. The footage came out very green, so I did color-correction in Premiere using RGB curves. There are three layers of sound; two were created using VST synths, and one was created using a sampler loaded with a recording I made of water running through a storm drain in the sidewalk. I manually triggered “slices” of this recording using a MIDI controller.