1) Sean Park shared four awesomely produced and considered Design Fictions he built for a university GenAI strategy retreat: “Adversarial Ethics” (students as red-teamers), “Sovereign Stack” (on-prem LLM infrastructure), “Faraday Classroom” (no-network “clean room” for human thought), and “Department of Romance” (keeping sociology afloat via AI romance). The move was to turn generalized anxiety into a concrete menu of experiments and tensions leaders can actually discuss.

[Image: screenshot of a Zoom call with multiple participants during Office Hours N°298]

2) A practical framing tweak landed: treat each fiction as an H3 “future-normal” end state in the Three Horizons sense, then backcast to what H1 looks like today and which H2 transitional forms would plausibly appear in between. That gets the room talking in moves and tests, not predictions.

3) The “permanent tutor” thread got linked to Neal Stephenson’s The Diamond Age and its “Illustrated Primer” as a concrete pop-culture anchor for what personalized, always-on pedagogy could feel like. It also sharpened the question: tutor for whose values, and to what end?

4) Debate on Team Human vs Team AI for Design Fiction: AI can be a detail-expander and ingredient generator, but people worried it quietly outsources the thinking and makes curation the hard part. “Low-background literature” (after low-background steel, the pre-1945 steel prized because it predates atomic-test fallout) was the metaphor for why provenance and “someone cared enough to make this” still matter.

5) A concrete “make chatbots more critical” reference was shared as a way to push against flattering compliance: Maggie Appleton’s “AI Chatbots Undermining the Enlightenment”. The point wasn’t “less AI,” but better defaults that challenge you and force sharper questions.

6) Sandra demoed a pre-meeting synthesis workflow: everyone submits a ranked list of priorities, then semantic clustering plus Rank-Biased Overlap highlights alignment, factions, and outliers for the facilitator (see the RBO rank-similarity paper; a minimal sketch follows below). The promise: meetings where quieter signals don’t get steamrolled by whoever can talk the longest.
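For the curious, here is a minimal sketch of the RBO piece, following the truncated form in the Webber–Moffat–Zobel paper linked above. The priority lists, the p=0.9 weighting, and the function name are illustrative, not Sandra’s actual pipeline:

```python
def rbo(ranking_a: list[str], ranking_b: list[str], p: float = 0.9) -> float:
    """Truncated Rank-Biased Overlap between two ranked lists.

    Agreement at depth d is |A_d ∩ B_d| / d, the overlap of the two
    top-d prefixes. RBO is the p-weighted average of those agreements,
    so disagreement near the top of the lists costs more than at the tail.
    """
    depth = max(len(ranking_a), len(ranking_b))
    seen_a: set[str] = set()
    seen_b: set[str] = set()
    weighted_sum = 0.0
    for d in range(1, depth + 1):
        if d <= len(ranking_a):
            seen_a.add(ranking_a[d - 1])
        if d <= len(ranking_b):
            seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d
        weighted_sum += (p ** (d - 1)) * agreement
    # Truncated sum: slightly underestimates the paper's infinite-depth RBO.
    return (1 - p) * weighted_sum


# Two members rank the same retreat priorities in a different order:
alice = ["tutor pilots", "data governance", "faculty training", "assessment"]
bob   = ["data governance", "tutor pilots", "assessment", "faculty training"]
print(rbo(alice, bob))    # ≈ 0.217
print(rbo(alice, alice))  # ≈ 0.344, the truncated ceiling at depth 4
```

The p parameter sets how top-weighted the comparison is: closer to 1 spreads credit down the list, closer to 0 cares almost only about the first few ranks, which is why RBO suits priority lists better than plain set overlap.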

7) The caution was immediate: any “alignment surfacing” tool can become coercion, surveillance, or consensus theater if incentives and governance are wrong. That’s why “bring your values into the room” wasn’t treated as fluff, but as an explicit design requirement.

8) A related riff: instead of precomputing consensus, imagine an AI meeting-moderation agent that enforces airtime fairness, de-escalates dynamics, and only “unlocks the door” when the group genuinely converges (“Robot’s Rules of Order”). Fun as a conceit, but it forced the real question: who owns the agent, and who benefits?
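Nobody built this on the call, but as a toy sketch of just the airtime-fairness piece of the conceit (the class name, participants, and tolerance threshold are all invented, and convergence detection is deliberately left as out of scope):

```python
from collections import defaultdict

class AirtimeModerator:
    """Toy meeting moderator: tallies speaking time per participant and
    flags whoever is crowding out the rest. Purely illustrative."""

    def __init__(self, participants: list[str], tolerance: float = 1.5):
        self.participants = participants
        self.tolerance = tolerance  # allowed multiple of a fair share
        self.seconds: defaultdict[str, float] = defaultdict(float)

    def record(self, speaker: str, seconds: float) -> None:
        self.seconds[speaker] += seconds

    def over_talkers(self) -> list[str]:
        total = sum(self.seconds.values())
        if total == 0:
            return []
        fair_share = total / len(self.participants)
        return [p for p in self.participants
                if self.seconds[p] > self.tolerance * fair_share]

mod = AirtimeModerator(["ana", "ben", "chio"])
mod.record("ana", 300); mod.record("ben", 40); mod.record("chio", 50)
print(mod.over_talkers())  # ['ana'] -- 300s against a 195s tolerance ceiling
```

Even this toy version makes the governance question concrete: whoever sets `tolerance` (and decides what counts as convergence) is the real moderator.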

9) “Context of control” was a clean ethics frame: a personal calorie tracker is one thing; a health insurer altering premiums from the same data is a totally different moral object, even if the UI looks identical. This kept the conversation grounded in power, not just model behavior.

10) Org culture and decision norms vary wildly, so any meeting intelligence needs knobs for culture, not a one-size “best practice” (leadership styles around the world; 2×2 decision-making by culture diagram). The subtext: “structurelessness” is still a structure, usually the worst kind (Jo Freeman on the tyranny of structurelessness).

11) Design fiction methods got broadened beyond static artifacts: diegetic prototypes and role-played worlds can make mechanisms felt in the room, not just read on a slide (diegetic prototypes in Her; “When Black Ships Bring the Future”). For prompt scaffolding, the dice-based Design Fiction d666 Engine was pointed to as a way to generate constraints before reaching for an LLM.
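To give a feel for the dice-before-LLM move, here is a minimal sketch of the mechanism; the three tables are placeholder stand-ins made up for illustration, not the d666 Engine’s actual content:

```python
import random

# Placeholder tables -- the real d666 Engine ships its own, far richer sets.
ARCHETYPES = ["receipt", "warning label", "quick-start guide",
              "app-store review", "terms of service", "pack-in leaflet"]
OBJECTS = ["campus tutor bot", "exam proctor drone", "syllabus generator",
           "plagiarism oracle", "office-hours avatar", "grade negotiator"]
ATTRIBUTES = ["subscription-gated", "union-certified", "jailbroken",
              "ad-supported", "heirloom", "recalled"]

def roll_d666() -> tuple[int, int, int]:
    """Roll three d6; each die indexes one constraint table."""
    return tuple(random.randint(1, 6) for _ in range(3))

a, o, t = roll_d666()
constraint = f"a {ATTRIBUTES[t-1]} {OBJECTS[o-1]}, rendered as a {ARCHETYPES[a-1]}"
print(constraint)  # e.g. "a recalled syllabus generator, rendered as a receipt"
# Only now hand the fixed constraint to an LLM to expand the details.
```

The point of the dice is that the constraints get locked in before the model sees anything, so the LLM fills in texture rather than choosing the premise.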

12) Institutional knowledge “as a product” surfaced via 99Ravens (hire experts, license their AI), which kicked off questions about ownership, billing, and whether co-op style models change the incentives. It looped back to governance: who gets to “own” the synthetic voice of an organization?

Join nearly 21,000 members connecting art, product, design, technology, and futures.