ICIDS 2018 Art Expo

‘Non-Human Narratives’

Studio 2, The Science Gallery

Thursday 6 Dec. – Saturday 8 Dec. 2018

The ICIDS 2018 Art Exhibition provides a platform for artists to explore digital media for interactive storytelling from the perspective of a particular curatorial theme. This year the theme is Non-Human Narratives.

The idea of assigning this particular theme of ‘Non-Human Narratives’ to the exhibition emerged from an intention to inspire, not constrain, and has itself undergone a certain amount of evolution, afforded by the set of submissions. We live in a time when computational processes and non-human agents have disrupted and/or reconfigured not just traditional narrative modalities, but also the human subject itself. Artificial intelligence (AI), data analytics, and human–robot/human–animal interactions now create new modes of communication and intersubjectivity. Human and non-human elements also combine with natural, geological, and atmospheric forces (as with climate change) to reveal new cultural stories from radical perspectives beyond pure human understanding and experience. The purpose of conceiving a focused theme is to encourage artists to respond to these new conditions and to consider how contemporary non-human narratives can challenge, inspire, and reveal radical, non-anthropocentric perspectives on our current states of being among world-matters. What comes before, goes beyond, or is otherwise entangled with the human agents who tell our stories in this complex media ecology, and how might we come to know them through our interactions with them?

The concepts of non-human narrative charted in this exhibition are explored across various disciplines, languages, cultures, technologies, and histories. Together they reflect the diversity and deep entanglements of the subject matter, as expressed by the artists through many genres of media and materials.

The exhibition is housed in Studio 2 of the Science Gallery for the duration of the conference.

Opening ceremony: Wed. 5 Dec. @ 6pm (for conference registrants only).

ICIDS 2018 Art Expo is kindly sponsored by:


A Place Called Ormalcy

Artist: Mez Breeze

“A Place Called Ormalcy” is a dystopian fiction comprising seven short text chapters combined with embedded 3D/virtual reality tableaus. The work is designed to be viewed on mobile devices, desktop PCs, and a wide range of virtual reality hardware. The story unfolds through a series of snapshots of the life of Mr Ormal, a law-abiding citizen who resides in the aesthetically cartoonish world of Ormalcy, an alternative universe complete with its own idiosyncratic language patterns. The world initially presents as utopian, full of innocent, “claymationesque”, contented creatures and happy denizens, but as the story creeps along it becomes apparent that this allegorical fiction in fact traces the makings of a dystopian society. It illustrates how fascist principles can arise in the most benevolent of places; the story's emphasis lies with how insidious this process is, and how it affects all (especially non-human agents such as animals, plant life, and both cultural and environment-based ecosystems). This XR/VR experience has social commentary at its very core, including allusions to current totalitarian trajectories (and the contemporary malaise, confusion, and accompanying acclimatization patterns).

A Walk Down Bhūmi 7

Artists: David Teather and David McCulloch

A human is reincarnated as a conscious AI. The AI practices Buddhism and through their practice they remember past incarnations and fragments of memory that span the entire breadth of life, from humans to protozoa. This BodhisattvAI has surfaced their subconscious as collage and interactive text that allow participants to reflect on the interconnectedness of non-human and human perspectives as they traverse a collective anthropomorphic and human subconscious, with choices modifying the structure of this subconscious during its display.

Fragments of text have been created by seven authors made up of artists, academics, musicians and writers, and imbued with meaning using a machine learning algorithm that compares 489,300 potential relationships between each fragment and connects them based on semantic similarity. The result is a story landscape that traces lines between living things and enables an exploration of a world where humans and non-humans collaborate.

The work was authored using a tool called a Semantic Tapestry that was created by David McCulloch as part of a master’s dissertation on emergent narrative authoring at the Glasgow School of Art under supervision by Dr. Sandy Louchart.

Augmented Play, After Samuel Beckett

Artists: Néill O’Dwyer, Nicholas Johnson, Enda Bates, Rafael Pagés, Jan Ondrej, Konstantinos Amplianitis, David Monaghan, and Aljoša Smolic

Augmented Play is the third and final part of a two-year research project that reinterprets Samuel Beckett’s ground-breaking theatrical text Play (1963) for digital culture. It is a six degrees of freedom (6DoF), interactive, augmented reality (AR) re-imagining of this ludic theatrical text.

The user embodies an ‘interrogator’ who must elicit the testimonies of three characters involved in a love triangle, by focusing attention on them and triggering their monological confessions. Play is set in a purgatorial afterlife, and links to the theme of non-human narratives because the text is inherently paranormal; it speculates on post-life possibilities that are beyond the limits of rational human knowledge systems.

Users interact either by donning a Microsoft HoloLens (AR head-mounted display) and gazing at the installed exhibition props, or by pointing a smart mobile device at them. In each case a custom-built interactive app loads a ghost-like AR apparition of the condemned character. The play is also non-human in the sense that it is inhuman; the audience are afforded a position of cruelty, in their perpetuation of the infinitesimal, purgatorial Pavlovian trail, through and by the empowering nature of digital technologies.

The project investigates new opportunities for embodied storytelling in immersive AR.

This work emanated from research supported by grants from Science Foundation Ireland (SFI) under the Grant Number 15/RP/2776, the Trinity Long Room Hub (Interdisciplinary Seed Funding, 2017–18) and the Trinity Provost’s Fund for Visual and Performing Arts. The artists are grateful for the support of Edward Beckett and the Estate of Samuel Beckett.


Autopia

Artist: Nick Montfort

Autopia is a simple text generator that uses a semantic grammar to generate very short narratives, in the form of headline-style sentences. These, presented as if they were endless traffic, are made entirely of the singular and plural names of cars — no other lexemes are used. Nevertheless, the sentences, mostly made of automobile names from the United States, recapitulate encounters between native people and Europeans (GRAND CHEROKEE SHADOWS EXPLORERS), comment on class (NEW YORKER GOLFS), offer mathematical results (OPTIMA FIT MATRIX AXIOM), and even describe immigration (AMIGOS FORD RIO). In Autopia, the narratives of our vehicles run endlessly on their own, driverless.
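A generator of this kind can be sketched as below. This is a hypothetical toy, not Montfort's code: the word lists and the single subject–verb–object rule stand in for Autopia's semantic grammar, using car names that also read as plural nouns or as verbs.

```python
import random

# Illustrative vocabulary: car names that read as plural nouns,
# and car names that read as verbs (SHADOW, DODGE, FORD, ...).
SUBJECTS = ["EXPLORERS", "AMIGOS", "CIVICS", "RAMS", "FIESTAS"]
VERBS = ["SHADOW", "DODGE", "FORD", "GOLF", "PILOT"]
OBJECTS = ["RIOS", "MATRICES", "ECLIPSES", "OPTIMAS"]

def headline(rng=random):
    """Generate one headline-style sentence made only of car names."""
    return f"{rng.choice(SUBJECTS)} {rng.choice(VERBS)} {rng.choice(OBJECTS)}"

for _ in range(5):
    print(headline())
```

Run repeatedly, the sketch produces an endless stream of three-word "traffic" in the spirit of GRAND CHEROKEE SHADOWS EXPLORERS, though with a far cruder grammar than the original.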


News Defender

Artist: Lindsay Grace

How can we turn our everyday play into something productive? How can we make the mundane experiences of computer interaction into something more engaging? How can such work augment artificial intelligence and last-mile data challenges to improve our understanding of the world around us? The News Defender game set combines the concept of human computation games and data filtering with the common mechanics of popular play. Each game uses humans in place of machine algorithms to refine content that relies heavily on heuristic data. Players must identify fake news content by image, domain, or headline. This is done by playing a shooting game, aiming solely for the fake content. Each time they shoot, their selections are logged. When they finish the game, they can see their performance in comparison to others. The resulting data sets can be compared to automated fake-news detection systems or used to refine subsets of hard-to-identify content.
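The logging-and-aggregation idea behind such human computation games, turning many players' shots into crowd labels that can then be compared with automated detectors, might look like the following sketch. The data, function names, and majority-vote threshold are illustrative assumptions, not the game's actual pipeline.

```python
from collections import defaultdict

# Hypothetical shot logs: (player_id, item_id, flagged_as_fake).
shots = [
    ("p1", "headline-a", True), ("p2", "headline-a", True), ("p3", "headline-a", False),
    ("p1", "headline-b", False), ("p2", "headline-b", False),
]

def crowd_labels(shots, threshold=0.5):
    """Aggregate player selections into a fake/real label per item by majority vote."""
    votes = defaultdict(list)
    for _, item, flagged in shots:
        votes[item].append(flagged)
    return {item: sum(v) / len(v) > threshold for item, v in votes.items()}

labels = crowd_labels(shots)
print(labels)
```

Items where the crowd label disagrees with an automated classifier would be exactly the "hard to identify" subset the description mentions.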


Penelope

Artists: Alejandro Albornoz, Roderick Coover, Scott Rettberg

Penelope is a combinatory sonnet generator film based on the Odyssey, addressing themes of longing, mass extinction, and migration. Recombinations of lines of the poem, video clips, and musical arrangements produce a different version of the project on each run. Using a combinatory structure similar to that of Raymond Queneau's Cent mille milliards de poèmes, the computer-code-driven combinatory film can produce millions of variations of a sonnet that weaves and then unweaves itself. The program writes 13 lines of a sonnet and then reverses the rhyme scheme at the center couplet. Each 26-line poem is produced as an audiovisual composition, with lines spoken by voice actress Heather Morgan. The system determines their composition, produces and plays the video and musical composition, and displays the text of the generated poem before composing a new sonnet pair. Penelope is co-produced by Alejandro Albornoz (sound), Roderick Coover (video), and Scott Rettberg (text and code). Oboe solos are by Kristiansand Symphony Orchestra musician Marion Walker. The video was shot on location and the text developed during 2017 residencies at the Ionian Center for Arts and Culture. Actors include Kefalonian residents Helen Amourgi, Kostas Annikas Deftereos, and Sophia Kagadis.
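The mirrored combinatory structure described above can be sketched in a few lines. This is an illustrative reconstruction, not Penelope's actual code: the rhyme scheme and line variants are invented, and each rhyme slot simply picks one variant at random, Queneau-style, before the 13-slot scheme is mirrored at the centre couplet to yield a 26-line poem.

```python
import random

# Illustrative 13-slot rhyme scheme; the real scheme is not specified here.
SCHEME = list("ABABCDCDEFEFG")

# Invented stand-in line variants, two per rhyme, not text from the work.
LINES = {
    "A": ["the loom undoes the day", "gulls wheel above the bay"],
    "B": ["salt white on every stone", "she waits, and waits alone"],
    "C": ["the suitors eat and lie", "ships vanish from the sky"],
    "D": ["a thread pulled from the sea", "what was will cease to be"],
    "E": ["the tide forgets the shore", "no wanderer at the door"],
    "F": ["each night the work undone", "begun again at sun"],
    "G": ["and still the sea is salt", "and memory is the fault"],
}

def compose(rng=random):
    """Weave 13 lines, then unweave: repeat the scheme in reverse order."""
    first_half = [rng.choice(LINES[r]) for r in SCHEME]
    second_half = [rng.choice(LINES[r]) for r in reversed(SCHEME)]
    return first_half + second_half

for line in compose():
    print(line)
```

Because the scheme is mirrored, lines 13 and 14 share a rhyme, forming the centre couplet where the poem turns from weaving to unweaving.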

Salt Immortal Sea

Artists: Mark C. Marino, John Murray, Joellyn Rock, Ken Joseph

Salt Immortal Sea is a story cycle that adapts tales from the Odyssey to the Syrian Civil War. At the same time, the system adapts each story depending on either the teller or the listener. By playing through the timeless dimension of these stories, we represent them as extra-human, standing outside of human history, as archetypal patterns that repeat themselves over the centuries. These stories themselves adapt and reform to fit the specific moment of time and telling.

Sound Spheres: Interactive, Participatory Sound-based Narratives

Artists: John Barber and Greg Philbrook

Sound Spheres is a web-based interface combining computational digital media and storytelling techne with which participants can create interactive, participatory sound-based narratives. The interface visualizes a night-time city skyline. Atop one building is an antenna mast. Periodically, this antenna broadcasts multiple colored spheres. These spheres circulate above the city skyline, rebounding from the monitor's edges. More spheres are broadcast at regular intervals. Each sphere carries a unique non-human audio sample. Participants may construct stories from these sound spheres in three ways. First, by moving the cursor to intersect the trajectories of the sound spheres, participants can hear the audio samples they carry. The cursor is a listening device. Second, participants can position the cursor anywhere on the screen, and wait for sound spheres to pass within its range of hearing. As sound spheres approach the cursor, they glow and their audio contents are heard. Finally, participants can click up to five sound spheres, moving them into an audio player outlined by one of the buildings in the city skyline. In these ways, participants can create serendipitous linear narratives based on interactivity and narrative elements provided by the selected sound spheres.

The State of Darkness

Artists: Pia Tikka, Eeva R Tikka, Victor Pardinho, Tanja Bastamow, Maija Paavola, Can Uzer, Ilkka Kosunen

In the Enactive VR installation State of Darkness you will find yourself in the prison cell of some unrecognized country, locked up with a distressed person. Your feelings provoked during this encounter are tracked by biosensors, connecting your fate to that of your fellow prisoner.

The concept builds on the idea of symbiotic co-presence of a human and a non-human. Here the theme of non-human narrative is associated with the narrative experience of our virtual character Adam B. Trained on a range of human facial expressions, he has learned to control his expressions in encounters with humans. Adam B will experience his own non-human narrative that is influenced by the behavior of the participant as tracked in real time by a set of biosensors. In the end, Adam B’s life story emerges from the complexity of his algorithmically simulated mind, hidden from the participant. In the world of State of Darkness human and non-human narratives coexist, the first experienced and lived by the participant, the latter by the artificial character Adam B, as the two meet face-to-face.

Supported by the Finnish Cultural Foundation, the VCL Lab at Aalto University, and an EU Mobilitas Pluss grant at Tallinn University.


StoryFace

Artists: Serge Bouchardon et al.

Log onto a dating app and find love! Make sure your face shows your true feelings. You’re being watched…

"StoryFace" is a digital creation based on the capture and recognition of facial emotions.

The user logs onto a dating website. He/she is asked to display, in front of the webcam, the emotion that seems to characterize him/her best. The website then proposes profiles of partners. The user can choose one and exchange messages with a fictional partner. The user is now expected to focus on the content of the messages. However, the user's facial expressions continue to be tracked and analyzed…

What is highlighted here is the tendency of emotion recognition devices to normalize emotions. Which emotion does the device expect? We go from the measurement of emotions to the standardization of emotions.

"StoryFace" is a Non-Human Narrative insofar as the emotion recognition device controls how the discussion with the fictional partner evolves. Our romantic relationships are controlled by non-human devices, and we are compelled to adjust our emotions artificially so that the narrative can continue. We are being watched...


Strobophagia

Artists: Kristian Jonsson, Vanja Waller, Ben Clarke, Anna Yyngvesson, Joakim Jonzon, Maria Levander, Johanna Rasmussen, Alexander Holstner, Rickard von Friesendorff, Arvid Frykman, Ludvig Arnås, Fredrik Kaiser, Johannes Vinqvist, Robin Zeijlon, Emma Arltoft

Strobophagia is a digital game about how other people can, through the dehumanising nature of digital interaction, instill in us a sense of fear and paranoia. While it has been stated that the role of the player in digital games could be described as simultaneously actor and spectator, Strobophagia does not guide its players along a pre-written script or plot, such as in a play or film. Instead, Strobophagia’s narrative can be likened to that of an emergent digital choreography, where we guide the actions of two players, in separate ways, in order to elicit within them emotions which mirror the fears previously described. Pertaining to the theme of the exhibition, Strobophagia can be regarded as an exploration of the ways in which digital games are using the inherent non-human elements of the medium in order to blur the lines between storyteller, actor and audience.

Toxi•City: A Climate Change Narrative

Artists: Scott Rettberg, Roderick Coover

Six fictional characters confront life decisions brought about by the toxic floods of rising seas and storms in a computer-driven combinatory film that never plays in the same order twice. Set on the US Eastern Seaboard in 2020, the film follows six fictional characters whose lives have been transformed by sea-level change and flooding in an urban and industrialized region on America's North Atlantic coast. Fictional testimonies are set against nonfictional accounts of actual deaths that occurred during Hurricane Sandy. Toxi•City is a combinatory narrative film that uses computer code to draw fragments from a database in changing configurations every time it is shown. As some stories seem to resolve, others unravel. Just as with the conditions of ocean tides and tidal shores, the stories cycle and change without clear beginning or end; individuals grasp for meaning from the fleeting conditions of a world in flux. As the characters’ paths intersect, story threads come together. Toxi•City is a non-human narrative both from the standpoint of its generative delivery and because it represents a hyperobject, climate change, that humans have contributed to but which is now largely out of human control.

Critical Gameplay: Wait

Artist: Lindsay Grace

Wait is a simple game where the player is encouraged to refrain from acting on the world. As the player moves, the world disappears, but when the player waits, the world reveals itself, becoming more interesting. The blowing grass reveals trees; the trees are followed by nature flitting about, as a small mammal and a few new flowers appear nearby. The majesty is found in the slow, controlled effort. Players are awarded points when the little things in life reveal themselves.

Thematically, Wait reminds players of a mode of operation and interaction we are perpetually discarding: the value of waiting, of slowing down, and of working at the pace of nature. Faster is not always better, especially as our goals to get there faster, to make things bigger, destroy the majestic world around us. Instead of practicing the destruction of racing forward, Wait rewards the ability to take in the moment. But, like all things, these moments are not without their own momentum. Players who wait too long without being a part of the world will also see it fade away. The game encourages being present in ways few games do.

You•Who? Customised Cinema Experience

Artist: Chris Hales

A humorous fiction film installation (10 minutes) that incorporates data (visual, audio, text) obtained from a participant and renders it in real time into pre-existing film segments as the film is being watched. The protagonist becomes haunted by, and then 'possessed' by, the data provided by each participant. In this respect the original 'human' narrative gradually becomes a 'non-human' (digital data) one. The film also has non-human variance resulting from a machine-learning algorithm that attempts to detect facial emotion. Although not interactive in the classic manner, the watched film does not exist until the moment of viewing and differs each time according to participant data, hence the term 'customised' rather than 'interactive'.

The project extends the traditional interactive fiction film genre by:

  • Using pre-filmed segments as templates, rather than finished sequences, which are completed by rendering only once various data have been received from a viewer and integrated (using After Effects) into the sequences. This adds variance and a 'real-time' aspect to the tradition of using pre-filmed material.
  • Exploring the possibility of branching not by user selection but by applying a facial-emotion recognition algorithm.
  • Employing OpenCV python code to generate certain film imagery including face morphing and swapping.