August – November 2021
Instead of creating a single linear narrative, AltArt wanted to build a forking, interactive narrative field, one that users can reconfigure endlessly to create audio storylines.
After we collected the story database from What Connects Us, we realized two things:
1. We were still missing the voice of the most vulnerable community (Rampa, 200 people living virtually on the garbage dump).
2. We needed a simple method to give them back their voices through the simplest possible interface.
Regarding 1, we worked with two facilitators from the Association for Inclusion, Transformation and Social Innovation, who have been working with the Rampa community for more than 12 years. They created a very valuable “semantic” map of community relations.
Regarding 2, we decided to develop an AI-driven augmented reality interface with which users can explore the story database.
The database consists of:
- people nodes (people speaking about something)
- topic nodes (topics people are speaking about)
- links (between people and topics: when somebody speaks about something, we create a link)
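The node-and-link structure above amounts to a small bipartite graph. A minimal sketch in Python follows; the class names, fields, and example data are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """A node in the story database: either a person or a topic."""
    id: str
    kind: str   # "person" or "topic"
    label: str


@dataclass
class StoryGraph:
    """Bipartite graph: links connect people to the topics they speak about."""
    nodes: dict = field(default_factory=dict)   # id -> Node
    links: list = field(default_factory=list)   # (person_id, topic_id, recording)

    def add_node(self, node):
        self.nodes[node.id] = node

    def link(self, person_id, topic_id, recording):
        # "When somebody is speaking about something, we create a link."
        self.links.append((person_id, topic_id, recording))

    def topics_of(self, person_id):
        """All topic ids a given person speaks about."""
        return [t for p, t, _ in self.links if p == person_id]


# Hypothetical example data, for illustration only.
g = StoryGraph()
g.add_node(Node("p1", "person", "Maria"))
g.add_node(Node("t1", "topic", "home"))
g.link("p1", "t1", "maria_home.wav")
```

Traversing such a graph node by node is what lets each user assemble their own audio storyline.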
The interface works with face and hand detection, driven by an AI developed by Google.
When users raise their hands, nodes appear over one hand.
Moving your index finger close to a node activates the corresponding recording.
The interface was coded in Python. The full source code is available for free.
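The activation step can be sketched as a simple proximity test, assuming normalized fingertip coordinates such as those produced by Google's MediaPipe hand tracking (landmark 8 is the index fingertip). The node positions and the activation radius below are illustrative, not the project's actual values.

```python
import math

# Illustrative node positions in normalized screen coordinates (0..1),
# as if laid out around a detected hand.
NODES = {
    "home": (0.30, 0.40),
    "work": (0.55, 0.35),
}

# Illustrative activation threshold, in normalized units.
ACTIVATION_RADIUS = 0.05


def active_node(index_tip, nodes=NODES, radius=ACTIVATION_RADIUS):
    """Return the node the index fingertip is touching, or None.

    index_tip: (x, y) of the index fingertip in normalized coordinates,
    e.g. MediaPipe hand landmark 8.
    """
    x, y = index_tip
    for name, (nx, ny) in nodes.items():
        if math.hypot(x - nx, y - ny) <= radius:
            return name   # this node's recording would start playing
    return None


# Finger far from every node: nothing is activated.
print(active_node((0.90, 0.90)))   # -> None
# Finger on the "home" node: its recording triggers.
print(active_node((0.31, 0.41)))   # -> home
```

Running this test once per video frame against the detected fingertip position is enough to drive the "touch a node, hear a story" interaction described above.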
Use and Display
We bought a TV with a 1.5 m screen diagonal and installed it in the Coastei, Rampa and Dallas communities, as well as in Transylvania College (a private school in Cluj). Despite the bad weather, the installation worked well even outdoors, attracting mostly children and youth, who could create storylines with their bare hands, intuitively, with no other technology needed.