We found all kinds of interesting artifacts that shed light on how AI might be woven into daily life by 2027, from neighborhood food-network fridges to prompt starter packs for civic engagement.
It was also a great opportunity to try out the Trend Pack Workbook as a structure for grounding our imagination in the what-could-be and what-ifs of near-future AI scenarios.
Digging around in the archives, I found this project from 2004 called MobileSCOUT. It was a public art project that collected audio narratives of local surroundings, personal rituals, and public sightings using mobile phones. Participants could leave voice messages describing the flora, fauna, or behaviors they observed in their environment, creating a collective audio tapestry of everyday life.
Developed in collaboration with my PDPal friends Scott Paterson and Marina Zurkow, MobileSCOUT was built on VoiceXML (the Voice Extensible Markup Language, often abbreviated VXML), a W3C standard for voice applications that let callers navigate the system using spoken commands and touch-tone input.
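To give a flavor of what that looked like, here is a minimal, hypothetical VoiceXML sketch of a MobileSCOUT-style prompt. The category names, prompt wording, and the `example.org` submission endpoint are all my illustrative assumptions, not the original dialog files.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Hypothetical sketch of a MobileSCOUT-style dialog; not the original code. -->
  <form id="scout_report">
    <field name="category">
      <prompt>
        Welcome. Say flora, fauna, or behavior, or press 1, 2, or 3.
      </prompt>
      <!-- Spoken input -->
      <grammar mode="voice" root="cat" type="application/srgs+xml">
        <rule id="cat">
          <one-of>
            <item>flora</item>
            <item>fauna</item>
            <item>behavior</item>
          </one-of>
        </rule>
      </grammar>
      <!-- Touch-tone input -->
      <grammar mode="dtmf" root="catdtmf" type="application/srgs+xml">
        <rule id="catdtmf">
          <one-of>
            <item>1</item>
            <item>2</item>
            <item>3</item>
          </one-of>
        </rule>
      </grammar>
    </field>
    <!-- Record the caller's audio narrative -->
    <record name="sighting" beep="true" maxtime="60s">
      <prompt>After the beep, describe what you see around you.</prompt>
    </record>
    <filled>
      <!-- Post the category and audio to the server (hypothetical endpoint) -->
      <submit next="http://example.org/scout/upload" method="post"
              namelist="category sighting" enctype="multipart/form-data"/>
    </filled>
  </form>
</vxml>
```

A VoiceXML browser at the telephony gateway interpreted documents like this, so the "app" lived entirely on the server side while callers needed nothing but an ordinary phone.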
This project was an early exploration of using mobile phones as tools for public art and storytelling, predating the smartphone era and the widespread concept of mobile apps. It was an experiment in stretching the technology of the time, ordinary handsets and voice menus, toward a shared portrait of everyday life.