To cite this contribution:
Rapoport, Robert. ‘Deep Compositing: Performance, Augmentation, and Voids.’ OAR: The Oxford Artistic and Practice Based Research Platform Issue 1 (2017), http://www.oarplatform.com/deep-compositing-performance-augmentation-voids/.
Cinema has been compositing images for over a century. Double exposure, matting, and chroma-keying were all visual techniques that demanded a new embodied understanding from the actor, so that the actor’s performance could be projected into a composited future. With the coming of augmented reality (AR), the logic of compositing deepens in both space and time. Using 3D meshes and volumetric capture, compositing becomes ‘deep’. 2D cinema sets once required what Flusser called ‘a new imagination’ bridging the material and computational worlds.1 Under conditions of deep compositing, the imaginative labor of a performance is increasingly delegated to processors operating in real time. As this technique spreads, productions will increasingly leave strategic voids into which digital assets can be poured. Lev Manovich has argued for the need to explore the ‘substance’ of these voids.2 How does a landscape made up of such dynamic spaces change one’s behavior?
This video takes two compositing techniques – the chroma-key and the 3D mesh – and gives them a presence on a 2D set in the form of blue and magenta netting. The behavior of this netting is subjected to chaotic forces – light, wind, and bodies – which render a convincing composite absurd. The aim is to highlight how the act of inference inherent in real-time compositing is performative on a number of levels.3 What are the poetics of performing with or against these processes? How does behavior under these conditions provide a microcosm of the larger epistemological questions that AR will bring? What is the temporality of a site/landscape made up of such dynamic voids?
1. Vilém Flusser, ‘A New Imagination,’ in Writings, ed. Andreas Ströhl, trans. Erik Eisel (Minneapolis: University of Minnesota Press, 2002), 114.
2. Lev Manovich, ‘The Poetics of Augmented Space,’ Visual Communication 5 (2006): 237.
3. For an example of performative compositing see: Shunsuke Saito, Tianye Li, and Hao Li, ‘Real-Time Facial Segmentation and Performance Capture from RGB Input,’ in Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part VIII, ed. Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (Cham: Springer International Publishing, 2016), 244–61.
About the author:
Robert Rapoport’s work focuses on video production as a lens through which to view larger shifts brought on by automation. He was recently a research fellow at the Digital Cultures Research Lab (DCRL) at the Leuphana University, Lüneburg, Germany. He has taught both theory and practice in a number of contexts including the History of Art Department at Oxford, Sarah Lawrence College, The University of Lüneburg and The Hamburg Media School.