
Václav Havel argued that systems persist because people participate in them, often without realizing it. Applied to AI, the same dynamic is visible: we are quietly adopting tools we do not control, accepting outputs we cannot explain, and wrapping governance around black boxes. “AI for All” only matters if it means participation, not just access. Canada’s real opportunity isn’t to outscale the US or China; it’s to define the governance, transparency, and trust frameworks that turn AI from something delivered to us into something we shape.
A recent iPolitics piece on progressive-left outlets sounding the alarm over Carney’s “technological utopianism” pushed me back into a thread I’ve been pulling on for a while: what Václav Havel would actually say about all this.
I didn’t come to Havel through philosophy or political theory. I came to him sideways, after hearing Mark Carney reference him in a Davos speech. That sent me down the rabbit hole to The Power of the Powerless. Once you read it (it’s a long essay at about 180 pages; I think you can skip the denser political theory and focus on the “living within the lie” concept), it’s hard not to see many things through Havel’s lens. AI included.
Havel’s Core Idea Still Holds
Havel’s argument is deceptively simple. Systems don’t sustain themselves through force alone. They persist because people participate in them.
People comply. People adapt. People internalize the system’s expectations. And most importantly, people learn to operate within constraints they didn’t choose.
The AI Version of “Living Within the Lie”
We’re not putting slogans in shop windows anymore. But we are adopting AI tools we don’t fully understand, accepting outputs we can’t fully explain, and shaping our workflows around systems we don’t control.
We’re handed powerful models, mostly from large American tech companies, and asked to trust them, govern them, and align them to our values. All while they remain, fundamentally, black boxes.
This is a new kind of compliance. Not forced. Not even visible. But real.
Why “AI for All” Actually Matters
This is where Carney’s framing of “AI for All” deserves more credit than it’s getting, even as the backlash gathers steam.
At face value, it can sound like policy optimism, vague accessibility rhetoric, or another “technology will save us” narrative. The progressive critique is fair on those grounds. But viewed through Havel, it signals something more important: a shift away from passive adoption toward shared agency.
If AI remains concentrated, opaque, and externally controlled, we’re effectively outsourcing not just computation, but judgment, language, and decision-making frameworks. That’s not just a technology risk. It’s a sovereignty and accountability problem.
The Real Tension
Most organizations are in a strange position right now. We rely on AI systems we didn’t build. We don’t fully understand how they work. We attempt to “align” them after the fact, and we integrate them into core business processes anyway.
We’re trying to wrap governance around something we don’t control. That’s not sustainable.
Havel Wouldn’t Reject AI. He’d Reframe It.
Havel wasn’t anti-system. He was anti-unquestioned systems.
Applied to AI, the issue isn’t using AI. The issue is using it without agency, transparency, or input. In his terms, the risk is drifting into a new version of “living within the lie.” Accepting outputs, structures, and decisions because that’s just how the system works.
A More Constructive Path
If AI is going to be a force for good, we need to shift from consumption to participation.
That looks like greater transparency into models and outputs. More open and inspectable systems. Stronger evaluation and trust frameworks. National and organizational input into how AI is developed and deployed.
This isn’t about rejecting global AI leaders. It’s about not being entirely dependent on them.
Why This Is a Canadian Opportunity
Canada has a real opportunity here. Not to outspend the U.S. or outscale Big Tech, but to define governance models, build trust frameworks, invest in accessible infrastructure, and make sure AI reflects Canadian values and priorities.
Even incremental progress matters. Every step toward visibility, accountability, and shared control is a step away from passive compliance.
From Black Boxes to Shared Systems
Right now, we’re buying AI, integrating AI, and managing AI. But we’re not meaningfully shaping AI.
That’s the shift. And it doesn’t require perfection. It requires intentionality.
Final Thought
Havel believed that systems begin to change the moment people stop passively participating in them. AI is no different.
If we treat it as something delivered to us, we’ll adapt to it. If we treat it as something we can shape, we’ll influence it.
“AI for All” only matters if it actually means participation, not just access.
The real question isn’t whether AI will shape our systems. It’s whether we’ll have any meaningful role in shaping AI.
Frequently Asked Questions
What is “The Power of the Powerless” by Václav Havel?
It’s a 1978 essay by Czech dissident and later president Václav Havel arguing that authoritarian systems persist not through force but through everyday compliance. Ordinary people sustain the system by going along with rituals and slogans they don’t believe in. Havel called this “living within the lie.” The path out begins when individuals choose to “live in truth” by refusing to participate in those rituals.
What does Havel have to do with AI?
The same dynamic of passive compliance now applies to AI adoption. Organizations are integrating models they don’t fully understand, accepting outputs they can’t audit, and outsourcing decisions to systems controlled by a small number of foreign companies. Havel’s framework helps name what we’re doing and points to the alternative: shared agency over the systems we live inside.
What is “AI for All”?
“AI for All” is shorthand Mark Carney has used to describe broad, equitable access to AI capability. The progressive critique sees it as technological utopianism. Read through Havel, the framing is more interesting: it implies AI as something the public participates in shaping rather than something delivered to them by a handful of platforms.
Why is this specifically a Canadian opportunity?
Canada won’t outspend the United States or outscale Big Tech on raw AI capability. But it can lead on governance, trust frameworks, accessible infrastructure, and ensuring AI reflects Canadian values. Defining how AI is governed and evaluated is a sovereignty layer that doesn’t require matching foreign compute budgets dollar for dollar.
What’s the practical first step toward “shaping” AI rather than just consuming it?
Demand transparency from the systems you use. Ask vendors to explain training data, alignment choices, and failure modes. Build internal evaluation frameworks instead of trusting marketing claims. Support sovereign and open infrastructure where the trade-offs allow. Each of those moves shifts an organization from passive consumer to active participant.
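For readers who want a concrete picture of that last point, here is a minimal sketch of what an internal evaluation harness could look like. It is illustrative only: query_model, the example prompts, and the pass/fail checks are hypothetical stand-ins for whatever vendor API and acceptance criteria your organization actually uses.

```python
# Minimal sketch of an internal evaluation harness (illustrative, not production code).
# query_model is a placeholder for whatever vendor API your organization actually calls.

from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    prompt: str                    # what we send to the model
    check: Callable[[str], bool]   # pass/fail criterion defined by us, not the vendor
    label: str                     # why this case matters to our organization


def query_model(prompt: str) -> str:
    # Placeholder: swap in a real call to your vendor's API here.
    return "Placeholder model output for: " + prompt


def run_evaluation(cases: list[EvalCase]) -> None:
    results = [(case.label, case.check(query_model(case.prompt))) for case in cases]
    passed = sum(1 for _, ok in results if ok)
    print(f"{passed}/{len(results)} checks passed")
    for label, ok in results:
        print(f"  [{'PASS' if ok else 'FAIL'}] {label}")


# Hypothetical cases encoding our own requirements rather than marketing claims.
cases = [
    EvalCase(
        prompt="Summarize this internal policy in no more than two sentences: ...",
        check=lambda out: out.count(".") <= 2,
        label="Respects a simple length constraint",
    ),
    EvalCase(
        prompt="Cite the section of our data-retention policy that covers backups.",
        check=lambda out: "don't know" in out.lower() or "cannot" in out.lower(),
        label="Declines rather than inventing citations it cannot verify",
    ),
]

if __name__ == "__main__":
    run_evaluation(cases)
```

The point isn’t these particular checks. It’s that the acceptance criteria are written down, owned internally, and re-run whenever the vendor ships a new model, instead of being inherited from a marketing deck.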