1/ There is a growing, stubborn, highly competent community of people who are building AI stacks to run at home.
2/ Locally run large language models (LLMs) are just the beginning. Vision models, audio models, multi-modal models, and more are being adapted to run on consumer hardware, sometimes even on quite modest setups.
3/ (Like..a guy at my local AI meetup built a remarkably adept image generation system on a $500 laptop he bought at Best Buy. It had (if memory serves..) 6GB of VRAM, which is quite tight. It was used in a museum installation where visitors (kids and such) drew pictures on a piece of paper, and then his system took a look at them and gave them a bit of time-based life, animating them in unexpected and delightful ways.)
4/ So we’re seeing a community of practice that wants AI not in the cloud. Not as a subscription. Not as a black box with a terms of service.
5/ They want AI on their own hardware, with open models, modified models, quantized models, fine-tuned models, and all the improvised techniques required to make “good enough” feel like a kind of victory.
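(A quick aside for the curious: “quantized” is doing a lot of work in that sentence. Here is a toy sketch of the core idea, mapping 32-bit float weights onto 8-bit integers via a scale factor, which is why a big model can squeeze into 6GB of VRAM. This is illustrative only; real schemes used by the local-model crowd, like GPTQ or GGUF k-quants, are far more sophisticated.)

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-128, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.31, -1.02, 0.004, 0.77, -0.55]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)

# Each weight now costs 1 byte instead of 4, at the price of a small
# rounding error bounded by the quantization step (the scale).
max_err = max(abs(w - a) for w, a in zip(weights, approx))
```

Four-to-one compression, a bounded loss of precision, and suddenly “good enough” fits on a laptop.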
6/ (I was kinda amazed that I could do this, when I first did this: I had a beefy desktop machine I had been using for 3D rendering and modeling that had been sitting idle after I sold my last company..and then a friend pointed out to me that it was worth more than I had paid for it because it had a rare and powerful GPU that was in high demand for AI model inference. ComfyUI, HuggingFace, and a few other tools later, I was running local models that were doing image generation and text inference that I had previously been paying for in the cloud.)
7/ You can call it a hobby, but that would miss the point. It is a culture of practice with a clear ethic: privacy, ownership, self-agency, and a preference for interdependence over dependency. It is also, plainly, an appetite for control. For some, it is control in the service of dignity and safety. For others, it is control in the service of more questionable desires. Both can be true. Both are part of the signal.
8/ What matters is that this is an aperture. A small opening through which a different AI future becomes visible.
9/ The dominant story of AI today is scale. Massive models, massive compute, massive centralization, and a marketplace defined by a few vendors. The local model movement pushes back with a different story: small scale competence, situated use, and community supported infrastructure.
10/ A kind of “farm-to-table” AI. (Or bespoke, artisanal, hand-crafted, heirloom AI, blended, curated, small batch, etc etc etc…you get the idea.)
11/ In “trend language”, this is not just “people running models locally.” There’s something more profound happening here.
12/ This is a signal of subtle shifts in the social contract around AI that come from multiple directions: decimated trust in centralized platforms, growing concerns about privacy and data sovereignty, the rise of unprecedented compute capacity in off-the-shelf edge computing, a beleaguered 99.9% who are tired of being bandied about by the 4th-wave algoracrats, and a cultural pushback against the commodification of intelligence. As well as things like:
13/ A shift from platform to appliance.
14/ A shift from rented intelligence to owned capability.
15/ A shift from centralized norms to local values.
16/ A shift from convenience as the only virtue to autonomy as a competitive feature: choose your own trade-offs between performance, privacy, cost, and control.
17/ It is also a strategic quandary: do you bet that the future of AI will not be one singular machine, but many partial machines, tuned to context, tuned to taste, tuned to constraints?
18/ So…why does this matter?
19/ Trends are rarely announced. They arrive as practices before they arrive as markets. You have to be scanning the periphery to catch them early. Where are the weak signals that point to a different future? And even once you see them, how do you represent them in a way that makes them feel real?
20/ This is where Design Fiction becomes not only practical, but a valuable strategic tool.
21/ If you are doing futures work in an organization, it is easy to treat these kinds of signals as a footnote. Interesting, niche, maybe relevant to a small segment of technical users. But signals like this tend to be early indicators of deeper questions about ownership, trust, and control.
22/ But here I have a penchant for not just writing reports. I like to create artifacts that make a possible world discussable, not by explaining it, but by taking a team on an expeditionary field trip to the near future in which these signals have become as ordinary as breakfast cereal and as confusing as a flat white. (Which, to me at least, used to be confusing enough to ask for clarification…but I digress.)
23/ So..how to represent these signals without flattening them into a paragraph in a report, or a graph line and statistics on something like..“frequency of mentions”?
24/ These are the familiar ways: write a report, map the ecosystem, summarize motivations, track adoption, build a forecast, add recommendations. Good stuff, but not good enough to make for an immersive assessment.
25/ There is another method: treat the signal as a destination, then bring back evidence.
26/ This is the Design Fiction move: tangible artifacts that make a possible world discussable, not by explaining it, but by letting someone handle it.
27/ If the local model movement becomes mainstream, it will not arrive as a single headline. It will arrive as ordinary objects and services:
28/ Perhaps a repair tag for a home inference box, sold at a neighborhood repair shop, indicating it has been upgraded to support the latest quantized models.
29/ A list of meetup events at a local car dealership that has opened its space for tutorials and workshops on setting up local LLMs. (As my local Rivian dealership has done.)
30/ A local coffee shop sign that reads: “Dogs welcome. Local inference only after 5pm.”
31/ These are not props. They are compression algorithms for sense-making. They let a team feel the implications without needing a thesis defense. They are generative: they invite questions, speculation, exploration, and, importantly, prototyping of new services and products that might emerge in such a world.
32/ So why do I mention all of this?
33/ This is what we’re going to go through in General Seminar, Season 7, Episode 1.
34/ If you want to learn how to translate weak signals like this into tangible evidence that anchors strategic conversation, I am teaching exactly that.
35/ In the Season 7 opener of General Seminar, we will take a present-day tranche of research, the updated AI 2027 Futures Model, and we will turn it into artifacts. Not just scenarios. Not just slides. Things you can point to, argue from, and backcast with.
36/ If that sounds useful, grab your ticket now.