The screen – evolving from painting to the screen of Cinema1, on to television, and extending to the plethora of digital devices available to mass audiences today – has often been conceived of as a window to another world. According to Husserl’s (2005) phenomenological reading of the screen, viewers of the cinematic screen stop perceiving objects in the everyday world and are eventually immersed in a second world, where they suspend common concerns about the existence of the “real world.” The smartphone, with its constantly enlarging, button-less screen, combines immersive viewing experiences with hyper-connectivity through a plethora of social media apps. Engagement generated by these mobile smart devices is pervasive to the extent that most of one’s news about the external world is consumed through them. They are also the preferred mode of conversation amongst the people we know, and a large portion of one’s interaction with the world seems to occur through the medium of this screen. Thus, mobile screens, affording a network of dynamic data to a mobile user, manifest an “augmented screen” in one’s everyday life. The notion of augmentation involves overlaying layers of dynamic information onto physical space through digital media like screens, mobile devices and cameras. Virtual Reality (VR) and Augmented Reality (AR) are often discussed as differing modes of new media, but Lev Manovich (2010) argues that immersion into a virtual space and augmentation of real space could be imagined as a difference of scale. While cinema and television screens draw viewers in to the extent that they are often unmindful of their surroundings, the smaller screens of mobile devices force their users to occupy a kind of dual reality.
Alternatively, AR is often discussed as a specific set of technological processes. This image-processing technology – which allows digital objects and animations to be projected onto, and tracked against, live images from a camera – is now supported by many smartphone processors. Such AR imaging technology allows for a blending of real and virtual images to produce a third, augmented image.
Of the many AR features populating the mobile landscape, the most popular are perhaps the Snapchat and Instagram lenses, typically used to alter or animate the image when taking selfies. These lenses work as augmented filters, overlaying and mapping specific content (a scary mask or kawaii-style make-up) onto the viewer/photographer’s facial features. In the process of aligning one’s face to match the expected coordinates of features on the filter, one often has to stare still at the selfie camera for a few seconds – becoming a viewer of the augmentation as it appears, before capturing the image. Another kind of AR application is the mobile game that uses a combination of geolocations or GPS coordinates along with the camera sensors on the phone to create fictional narratives and game-play scenarios. Pokémon Go, one of the most popular AR games, utilizes the GPS system to help users navigate to locations where a digital model of a fantasy Pokémon creature has been placed by game algorithms. The purpose of the game – to “catch ‘em all” – is accomplished by collecting more and more of these digital objects by walking around one’s surroundings. Immersing the player simultaneously in the fictional narratives of the screen and their physical, built environments, AR imaging has allowed gaming to transcend the long-held stereotype of the mindless, isolated player living within “virtual” realities, perpetuated by popular science fiction.
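The geolocation mechanic described above – surfacing a digital creature only when the player’s GPS position comes within range of an algorithmically placed spawn point – can be sketched in a few lines. This is a minimal, hypothetical illustration: the function names (`haversine_m`, `visible_spawns`), the 70-metre encounter radius and the spawn coordinates are assumptions made for the sketch, not details of Pokémon Go’s actual implementation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    R = 6371000  # mean Earth radius in metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def visible_spawns(player, spawns, radius_m=70):
    """Return only the spawns within the player's (assumed) encounter radius."""
    return [s for s in spawns
            if haversine_m(player[0], player[1], s["lat"], s["lon"]) <= radius_m]

# Hypothetical spawn table; names and coordinates are illustrative only.
spawns = [
    {"name": "Charmeleon", "lat": 12.9716, "lon": 77.5946},  # ~15 m away
    {"name": "Pidgey", "lat": 12.9800, "lon": 77.6000},      # ~1 km away
]
player = (12.9715, 77.5947)
print([s["name"] for s in visible_spawns(player, spawns)])  # → ['Charmeleon']
```

Only the nearby creature becomes visible; walking towards the distant one would eventually bring it inside the radius, which is what makes the game a navigation of real space.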
Immersion has always been part function and part challenge for screens – to make their images, characters and fictional worlds manifest for audiences, and sometimes also to make the audience “forget” their real worlds and inhabit this second world. Augmented reality applications complicate this act of immersion, projecting digital fictions right onto the surface of one’s “paramount reality” (Kassab, 1991), so that the one doesn’t necessarily appear different from the other.
The worldwide popularity of Pokémon Go also led to many news headlines about the potential dangers of AR, along with photos of young people pointing their phone cameras at apparently nothing in the middle of the street. To a player, however, these ordinary spaces were augmented with a new layer of information, accompanied by its own biases and exclusions: the Pokémon Go map flattened the world into 2D boxes and transformed the diversity of spaces (from luxurious high-rises to crowded bus stops) into flat, neutral green land. The act of collecting Pokémon, which are derived (more often than not) from real animals, suggested a forest-like land with an abundance of magical wildlife. Once a Pokémon is spotted, the green land gives way to the image from the camera’s sensor. Finding Charmeleon2 – the fire-breathing dragon – at the local grocery store was a surprise on many levels. This rare Pokémon, as it manifested on the counter of the local shop, was viewed on my phone by other customers around me. Though the app and the technology seemed to fascinate some of them, my excitement, shared by the friend accompanying me, was perceivably alien to others around us. In that moment, the structure of the game that allowed these creatures to be found in unexpected locations blended into the structure of my “real life,” where Charmeleon had really manifested and yet was just another mundane image on the screen. It is this surprise that I hope to articulate with my project, Wild Life.
Wild Life emerged after observing waves of Pokémon Go players at the peak of the game’s popularity in Bangalore in 2016/2017. Running into groups of youngsters at popular PokéStops, where there were enough digital objects for everyone to catch, I began spending more time posing my Pokémon just right in the environment and taking pictures to show to no-one in particular. The fictional reality within Pokémon Go is not far from a colonial endeavour, where “wild” creatures with great potential powers are sought out, captured and trained, and apparently love their masters for it. I found myself getting attached to or empathizing with certain Pokémon, and even imagining their possible lives outside of my inconsequential collection.
Augment and Holo are two mobile AR apps that allow their users to project 3D models onto live images of their surroundings. Accessing the phone’s camera, these apps can track, map and scale models of various things (from elephants to Donald Trump) to the space in the frame of the camera, making it appear as if Trump himself were present there. Walking around suburban Bangalore with these potential projections in my hands, I began replicating my experiments with Pokémon photography as a kind of imaginary wild life documentary photography project.
Augmented wild life photography (through these apps) presents a unique experience, somewhere between that of a zoo, a wildlife documentary and a video game. One observes these digital animals and their coded “natural” movements, waiting for the camera to adjust. Once the camera’s image is mapped to the 3D models, these animals appear to inhabit one’s surroundings, within which they can be moved, rotated and scaled to one’s satisfaction. As you can imagine, this allows for many humorous photo-ops to play around with… the most common being the manifestation of an “elephant in the room.”
As I began writing about these augmented wild animals, I stumbled upon Safari Central – a smartphone app launched by “Internet of Elephants” in 2017, which allows its users to project 3D animated models of six real animals onto surfaces in their surroundings. The organization claims to have built the app to “see a world in which we care about animals on a daily basis.” Internet of Elephants’ larger campaign also included a photo competition that resulted in the world’s first Augmented Reality Wildlife Photographer of the Year Award. A large portion of the images from this competition were amusing selfies taken with these animals in the frame, often with children in the picture. While Safari Central claims that technology has helped them create stronger connections between the animals in conservatories and people across the world, I wonder how, or to what extent, this holds true. While the app does allow people to observe and perhaps be fascinated by the few animals it presents as AR models, the process of engagement, centred around capturing a photograph of or with the animal, usually overshadows any other concerns that might arise within the viewer. Rather than evoking some connection to these faraway animals, the images might enable a kind of alienation from the animals’ actual habitats and lives as they are appropriated for entertaining photography. The AR animals, then, are only as real as animated cartoons (albeit interactive ones), conveniently manifesting and disappearing for the viewers’ entertainment.
My images were made without this reference, but perhaps with similar concerns. Except, instead of attempting to build any connection between the viewer and these digital representations of the animals, the images seek to confuse the real and the representational. Situated in overgrown plots in suburban India, the AR models I shot with range from common “wild” animals like elephants, tigers and rhinos to fantastical and extinct ones like dragons and dinosaurs. These abandoned plots of land often serve as proxy garbage dumps for the locality, eventually becoming unpleasant spaces. Their role in the urban conception of nature is even harder to gauge, as they symbolize the environmental crisis (with their visible mounds of rubble and garbage) while being some of the only lands allowed to transform “naturally” in densely built environments. The animals posed in these shabby landscapes are framed in reference to the tradition of wildlife photography, for which I followed many guides on how to be a better wildlife photographer. For example, many of the images are taken from a distance, so as not to disturb the larger animals in their habitat.
There is a conscious desire for uncanniness in these images: to position the digital animals just right, so their shadows fall realistically on the ground, and to capture them in moments of action like a silent birdwatcher. But there is also a kind of blatant humour, perhaps visible in some of the less convincing captures, with which I approached making these illusions. Once conscious of this negotiation between verisimilitude and parody, I began filtering my images, getting rid of the more fantastical models like that of a Cheshire cat while maintaining or inducing subtler hints of fantasy through changes in scale and placement. Out of this negotiation emerged a giant butterfly resting on a plot of rubble and wet mud, and a lo-fi anglerfish floating in a rain puddle.
However, unlike (professional) real wildlife photography, all of these images are of laughably low resolution. In fact, the photography occurs at the level of screenshots – documenting the augmented images that appear on the screen, with the logos and watermarks of the two apps used in making the images running through all of them. Seen closely, both the tiger and the overgrown patch of land it rests on appear pixelated, while the watermarks remain crystal clear. The mobile camera, accessed through these apps, focuses on the space in front of it, deciding where the floor, walls and ceiling lie, and accordingly places my chosen models within the scene. Roland Barthes, in Camera Lucida, says:
“The photograph is literally an emanation of the referent. From a real body, which was there, proceed radiations which ultimately touch me, who am here; the duration of the transmission is insignificant; the photograph of the missing being, as Sontag says, will touch me like the delayed rays of a star.” (1981, pp. 80–81)
He speaks of a wholly different notion of photography, from a time before the possibilities offered by digital images, computer graphics, virtual worlds and instantaneous network communications. While it might be obvious to say that the referent has ceased to exist, perhaps it is the nature of reality that has changed, to accommodate many parallel narratives. Our cameras, now embedded into our extended bodies, detect many invisible layers of information, perhaps exposing the lack of a fixed referent at all. As for the missing being – my mobile camera has become good at conjuring phantoms, albeit pixelated ones. As such, these Wild Life images are an attempt to capture the reality of my screen, and the fictions I build within it.
- Barthes, R., 1981. Camera Lucida: Reflections on Photography. Translated by R. Howard. New York: Hill and Wang.
- Husserl, E., 2005. Phantasy, Image Consciousness, and Memory (1898−1925). Dordrecht: Springer. https://doi.org/10.1007/1-4020-2642-0
- Kassab, E.S., 1991. “Paramount reality” in Schutz and Gurwitsch. Human Studies, 14(2–3), pp. 181–198. https://doi.org/10.1007/bf02205602
- Liberati, N., 2018. Phenomenology, Pokémon Go, and Other Augmented Reality Games. Human Studies, 41(2), pp. 211–232.
- Manovich, L., 2002. The Language of New Media. Cambridge MA: MIT Press.
- Manovich, L., 2010. The Poetics of Augmented Space. In: Mediatecture. Vienna: Springer.
- Lev Manovich makes this connection between the screen of cinema and painting in The Language of New Media, saying: “Just as painting before it, cinema presents us with familiar images of visible reality – interiors, landscapes, human characters, arranged within a rectangular frame.” (2002, p. 26)
- Charmeleon is quite rare in Pokémon Go but was one of the most popular Pokémon in the anime series. With my interest in Pokémon stemming from a childhood of watching the show on TV and later becoming an amateur card collector, Charmeleon became something of a personal goal in all versions of the game.