The Making of 09/09/2020

Creating something for the Cine Chamber, developed by Recombinant Media Labs, is much like making any other kind of video work: you gather footage, consider the pacing, incorporate the audio, do lots of cataloging, and spend time actually editing. However, there are some very real differences, which I'd like to explain in this description of how I built my piece, 09/09/2020.

The System

Image of the installation of the Cine Chamber at Gray Area. One man is up in a scissor lift 12 feet in the air securing the screen to the ceiling, while 3 people on the floor watch.
Cine Chamber getting installed at Gray Area

What’s different about the Cine Chamber is that it isn’t a proscenium type of presentation with a well-defined frame that has a left, right, top, and bottom. It’s more of an Ouroboros: it wraps around you both visually and aurally, creating an immersive experience that the viewer sits (or stands) inside of. From a conceptual point of view, you as an author/creator have to remember that you are building a world, as opposed to framing a part of one. It doesn’t have to be analogous to the real world, or even to a VR / 3D-graphics type of experience. It sits apart from those kinds of worlds because it is a wrap-around rectangle that you are physically inside of; the screens form a solid boundary between the viewers and the world without completely enveloping their vision. It is, first and foremost, a space that the viewer occupies.

10 projectors make the images on the screens: 3 on each long side of the rectangle, 2 on each short side. The installation is front-projected, with the projectors hung from a grid on the ceiling. The system is driven by TouchDesigner on a very beefy computer, since it needs to be able to play 10 HD movies simultaneously. For the audio portion, there are 8 channels, with the 8 speakers spread around the room behind the screens. Files are delivered separately: 10 video files (one for each screen/projector) and 1 audio file (which can contain up to 8 channels). The modules (as individual pieces are known inside of our TouchDesigner project) are configured with an XML file that TouchDesigner reads and uses to play the desired piece.
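The actual XML schema the TouchDesigner project uses isn't published, but a minimal sketch of what a module definition might look like, and how a player could read it, goes something like this. The element and attribute names here are invented for illustration only.

```python
# Hypothetical module config: one audio file plus one video file per screen.
# The real schema used by the Cine Chamber project may differ.
import xml.etree.ElementTree as ET

MODULE_XML = """
<module name="09-09-2020" duration="180">
  <audio file="09-09-2020_8ch.wav" channels="8"/>
  <screens>
    <screen index="1" file="09-09-2020_screen01.mp4"/>
    <screen index="2" file="09-09-2020_screen02.mp4"/>
  </screens>
</module>
"""

def load_module(xml_text):
    """Parse a module definition into a plain dict a player could use."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "duration": int(root.get("duration")),
        "audio": root.find("audio").get("file"),
        "videos": [s.get("file") for s in root.findall("screens/screen")],
    }

module = load_module(MODULE_XML)
print(module["name"], module["duration"], len(module["videos"]))
```

A real module would list all 10 screen files; the point is just that one small XML file ties the 10 videos and the multi-channel audio together as a single playable piece.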

Visual Design

Image is of part of the Cine Chamber template layout in After Effects. It shows the 10 screens all laid out in a row.
10 screen layout

To build something for this system, you have to be able to design on 10 screens simultaneously. Fortunately, the thing about video is that it is almost always arbitrary: resolutions really don’t matter to computers, aside from how powerful a chip you need to drive the rendering. What we call 720p or 1080p is just an agreed-upon format. This is very different from film, which has a physical property (i.e. 35mm) that drives every aspect of how it is captured and presented. What this means is that whenever you are working with video, you can pick any resolution you want, provided you have an output device that can show it. In the case of the Cine Chamber, your layout is 10 times wider than a normal screen: if you’re making a piece in 1080p (1920×1080), the whole layout is 19,200 × 1080. We have an After Effects template for building in this layout. You can get the template here.
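The layout arithmetic is simple enough to check in a few lines. This little sketch just computes the full canvas size and the horizontal pixel range each of the 10 screens occupies, which is useful when deciding where an element should sit in the composition:

```python
# The Cine Chamber layout: 10 HD screens side by side.
SCREEN_W, SCREEN_H, NUM_SCREENS = 1920, 1080, 10

LAYOUT_W = SCREEN_W * NUM_SCREENS  # full canvas: 19200 x 1080

def screen_region(index):
    """Left/right pixel bounds of screen `index` (0-9) in the full layout."""
    left = index * SCREEN_W
    return (left, left + SCREEN_W)

print(LAYOUT_W)           # 19200
print(screen_region(0))   # (0, 1920)
print(screen_region(9))   # (17280, 19200)
```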

Audio Design

Screen shot of Adobe Audition with 8 track session for 09-09-2020 loaded in.
Screen shot of 8 track session for 09-09-2020

Just as enveloping as the visuals, the sound is all around the viewers. It doesn’t matter how you mix or create it, but you need to deliver it in an 8-channel format; I used WAV. To develop a multi-channel audio piece, I used Audition both to edit and clean up my source material and to stitch my individual pieces together into the final 8-channel file. I conceptualized how the sounds would move around the room by repeating the same pieces at different intervals, giving the feeling that sounds were traveling around the viewer.

The trick with a non-standard, multi-channel audio file is that you have to set up a multichannel session with the 8 channels first and keep it separate from the other files you are working on. The files dropped into each track should be mono. For each of my audio files, I had to make them mono first, make audio adjustments, and then cut/copy/paste individual bits together. I followed this really amazing how-to video to figure out how to do it.
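Audition handles the channel interleaving for you inside the multichannel session, but it can help to see what the delivered file actually contains. This is a minimal standard-library sketch of packing eight mono tracks into one 8-channel WAV; the silent placeholder tracks stand in for the mono source files cut together in the session:

```python
# Interleave eight mono tracks into a single 8-channel, 16-bit PCM WAV.
import struct
import wave

SAMPLE_RATE = 48000
N_CHANNELS = 8
N_FRAMES = SAMPLE_RATE  # one second of audio for the demo

# Eight silent mono tracks (lists of 16-bit samples). In practice these
# would be the edited mono pieces from the session.
tracks = [[0] * N_FRAMES for _ in range(N_CHANNELS)]

with wave.open("surround_8ch.wav", "wb") as w:
    w.setnchannels(N_CHANNELS)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(SAMPLE_RATE)
    # A WAV "frame" holds one sample from every channel, so interleave:
    # frame 0 of all 8 channels, then frame 1, and so on.
    frames = bytearray()
    for i in range(N_FRAMES):
        for track in tracks:
            frames += struct.pack("<h", track[i])
    w.writeframes(bytes(frames))
```

The key point is that "8-channel WAV" is just one file with eight samples per frame, which is why each source track needs to be mono before it goes into the session.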

Using the Template

Screen shot of Adobe After Effects interface with the project for 09-09-2020, showing the 50 layers of the section with the deer running.
The 50 layers of the section with the deer running from the fire.

Because After Effects is really an animation program, and not a non-linear editor like Premiere, every element gets its own track. One trick is that you can duplicate the main layout, build various sections in those duplicates, and then bring them back into the main layout for final arrangement and rendering. For this sequence with the running deer, each element is a very short video, cut to about 2 seconds, that changes position and starts at a different time. In this screenshot, you can see the one section of the piece with the running deer and how it’s composed of 50 different elements (the same 2-second video repeated) that are offset in both time and position across 28 seconds.
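The stagger behind that section is easy to generate programmatically. This sketch isn't the actual After Effects keyframe data, just the kind of offset math involved: 50 copies of a ~2-second clip, each given a start time that spreads them across the 28-second section and a position somewhere in the 19,200-pixel-wide layout:

```python
# Generate (start_time, x_position) offsets for 50 copies of one short clip.
import random

NUM_COPIES = 50
SECTION_LEN = 28.0   # seconds the whole section lasts
CLIP_LEN = 2.0       # each clip runs about 2 seconds
LAYOUT_W = 19200     # full 10-screen layout width in pixels

random.seed(9)  # repeatable positions for the demo

layers = []
for i in range(NUM_COPIES):
    # Spread start times so the last clip still finishes inside the section.
    start = i * (SECTION_LEN - CLIP_LEN) / (NUM_COPIES - 1)
    x = random.uniform(0, LAYOUT_W)  # horizontal position in the layout
    layers.append((round(start, 2), round(x)))

print(layers[0])   # first copy starts at t=0.0
print(layers[-1])  # last copy starts at t=26.0 and ends at t=28.0
```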

For rendering, it’s best to send your compositions to Adobe Media Encoder and have it make the videos, as opposed to rendering from After Effects. Media Encoder has many more options, and you can queue up renders. H.264 in an MP4 container is the format you should use for your renders, as it will be the most reliable for consistent playback across different setups.

There are some more detailed, step-by-step instructions for the template in this Google Doc.

Generating the Content

The inspiration for the piece came from a trip to Groundswell for Beltane (May Day). The festival involves constructing and dancing around a May Pole, and the previous year’s May Pole is taken down and burned. Between the sounds of nature at night and the burning flames of the old May Pole, the inspiration struck to make a surround-video experience of the forest fires. So, with my cameras and audio recording equipment, I set out to capture the sounds and images I wanted to work into the piece.

The visual imagery comes from multiple sources that can be broken down into 3 buckets: my own still photos, my own videos, and photos and videos that I scraped off of the internet. One criterion I used for the internet imagery was that it had to be published or captured on September 9th, 2020, in keeping with the conceptual theme of the experience of that day. The scraped videos and photos all had to be sorted and edited down several times. I go through this process a lot in my work creating visuals for DJs and musicians. To end up with anywhere near the amount of images/videos you can actually use, your pool of content needs to be at least 10-25x what you want to show. The editing process can be very brutal and fast, but you have to have enough material to go through to create what you want. Some things won’t work with one another, and connections start to appear as you re-arrange the pieces over and over again. I also sorted things according to concept, size/format, and quality.

For the audio, I used a Tula Microphone to gather found sounds. My original intent was to record the night sounds of the creek and frogs. I was able to get that, but my friend’s dog, Vinny, is deaf, and when he gets lost or loses track of people, he howls until someone comes and finds him. He interrupted my recordings a few times, and I realized that his howl could actually work as a warning sound for when the fires start. The other sounds are a train crossing warning bell and the horn of a passing train, recorded at the 16th Street Caltrain crossing in San Francisco. I have always been attracted to the concepts of Musique Concrète, specifically how found sounds and the environment around us can be considered a sonic composition. Using environmental sounds to create an environment seemed the best way to go.

I broke the piece down into 4 sections: a peaceful opening with nothing but the sounds of frogs and a babbling stream; the howling dog that warns us of the lightning starting the fires; the build-up of the fires, as the alarm blares and media reports combine with images and videos filling the screens in a pattern moving around the room; and lastly, the sound of the water again, with the image of the frog on a rock in the stream combined with slow-motion boiling water. As a 3-minute piece, the beginning and closing run 30 seconds each, the howling dog is about 30-40 seconds long, and the sirens and train whistles take up the remaining minute and a half. Remember, when working in a time-based art, you always have to decide on the length of the piece fairly early in your work, because it is one of the defining dimensions of the piece.
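A quick back-of-envelope check shows how those sections fill the runtime. Taking the dog section at 30 seconds, the low end of the stated 30-40 second range, the four parts sum exactly to the 3-minute length:

```python
# Section durations in seconds, taken from the breakdown above.
sections = {
    "peaceful opening (frogs, stream)": 30,
    "howling dog / lightning warning":  30,
    "fires, sirens, media reports":     90,
    "closing (water, frog, boiling)":   30,
}

total = sum(sections.values())
print(total)  # 180 seconds = 3 minutes
```

Sketching a timing budget like this early on is one way to commit to the piece's length before the detailed editing starts.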

The End Result

You can’t adequately capture the experience of a Cine Chamber piece with a camera; it’s just not really possible. But this video should help you understand what it’s like.