Devon Schiller is a Member of the Academic Staff in the Department of Image Science at Danube University, Austria, where he lectures as a media rhetorician and visual semiotician. Originally from Boston, Massachusetts, he holds a BFA in Art History and Painting from the Kansas City Art Institute and an MA in MediaArtHistories from Danube University. From a theoretical framework of cognitive semiotics, emotions history, and image science, Schiller’s scholarship focuses on the media genealogies and visual rhetoric of physiognomy, the science of facial expression, and digital biometrics. He is certificate-trained in the Facial Action Coding System (FACS) as well as the Neuropsychological Gesture Coding System (NEUROGES), and has conducted grant-supported research on automated facial expression recognition at the Fraunhofer Institute for Integrated Circuits (IIS). He is also an internationally exhibited digital artist.
To observe a river as it meanders is a bit like watching the grass grow. You can know it is happening. But you cannot see it.
Augmented photography can be used in the digital arts to over-code real-world environments with computer-generated data, translating stimuli across sensory modalities and thereby extending our faculties for perceiving spatial and temporal relations. Because of this media-specific affordance, the augmentation of the photographic medium may have especial application for the “physiognomic gaze,” a way of doing “form interpretation” or “nature knowing” based on the physical behaviors and psychological phenomena of the human face, head, and body. The innovativeness of such technological prosthetics becomes manifest in how they generate new ways both to perceive and to know experiences that were previously unseeable or otherwise unsensable. Here, I converse with Cedric Kiefer (co-founder and creative lead) of the onformative studio for digital art and design in Germany about their works Meandering River (2017), Pathfinder (2014), and Google Faces (2013). We explore how onformative uses the augmented photograph in their digital artworks to extend the physiognomic gaze, bringing data not visible to the naked eye into the sensable sphere and offering the audience different perspectives on space and time.
If physiognomic art and science are to bridge further, and not only represent but refine knowledge about the face, then today’s artists and audiences must gaze back into the black box, and cast the light of Diogenes on how media inform how we think about what we feel.
In the algorithmic age of computable emotions, an increasing number of digital artists base the form of their Internet or sculptural installations on Automated Facial Expression Analysis (AFEA), whose functionality is achieved via the photographic documentation in face databases. These contemporary artists make visible a digital habit of thought that objectivates the human face into a plastic grotesque of grimacing extremis, and turns the self inside out into the universal or the utilitarian. Yet most AFEA systems – a term little clarified and much confused with facial recognition or biometrics – are “black box” frameworks. Introduced by the technological industry and scientific experts, such proprietary closed-source algorithms veil the majority of program functionality, from data input to available output, hiding how they work from immediate observation by artist and audience alike. By problematizing Julius von Bismarck’s Public Face (2008–14) and its intermedial genealogies, I probe the extent to which AFEA represents the face and its expression of emotion from a technostalgic view that reduces scientific complexity while informing how we think about what we feel today.