
AI has become a geopolitical narrative battleground, with dark-money campaigns paying influencers thousands per video to shape public perception of a U.S.-China AI Cold War. The deeper problem isn’t foreign influence in any single model. It’s opacity across every AI system, and the fact that compute access, not rhetoric, is the real power layer. Countries that invest in transparent AI, verifiable behavior, and sovereign infrastructure will define the next era of digital trust. The ones that don’t will inherit somebody else’s story.
Artificial intelligence stopped being just a technology story some time ago. It’s a geopolitical one now, and the narrative around it is increasingly coordinated.
What’s emerging is something a lot of people are calling an AI Cold War. Global power is being shaped less by tanks and GDP and more by compute, models, and influence over how people think about all three.
The two obvious players are the United States and China. But every country is getting pulled into the orbit of this competition, Canada included.
The WIRED Story: Influence, AI, and Dark Money
A recent WIRED investigation shows how far the narrative game has already evolved.
A dark-money nonprofit called Build American AI is funding influencer campaigns. It’s tied to a $100M+ pro-AI super PAC (Leading the Future) backed by figures across the tech industry. Influencers are getting paid roughly $5,000 per video. The messaging follows a deliberate two-step pattern: first promote American AI innovation, then frame China as a threat.
This isn’t traditional lobbying. It’s narrative engineering at scale.
Influencers blur the line between advertising and belief. Audiences usually don’t know the content is paid. The messaging is dressed up as lifestyle content, not politics. As WIRED puts it, consumers don’t know when the information they’re getting has been bought.
The Rise of AI Narrative Warfare
What WIRED uncovered is one piece of a bigger pattern.
AI has become a national security narrative. Leaders across the U.S. tech ecosystem are openly framing it as existential competition. Palantir’s leadership has been arguing that the U.S. needs to absorb a lot of risk to avoid falling behind China. Investors describe platforms like TikTok as potential tools of manipulation. The framing is intentional, and it shifts AI from “technology innovation” to “strategic dominance.”
Influencers are the new geopolitical channel, and the research backs up why. A large share of users now get their news from creators. Influencers are often more persuasive than state media. Pro-China influencer content has been shown to move favorability numbers in measurable ways. The takeaway is uncomfortable: governments don’t really need propaganda anymore. They need creators.
The AI Cold War framing is real, but it’s incomplete. China has made enormous strides in research output and is closing the gap on quality and speed. The U.S. still dominates frontier models and infrastructure. Both countries are pouring money into compute and chips. Neither side is as far ahead, or as far behind, as the headlines suggest.
A Zeever Perspective: The Reality Is More Nuanced
From the work I’ve been doing on Zeever.ca, a few things stand out that don’t fit neatly into the Cold War story.
Chinese models are not obviously injecting political influence into their outputs the way people assume. In practical testing, Chinese-origin models behave a lot like Western ones. There’s no consistent, obvious political bias in most general tasks. The output patterns tend to align because the underlying training data and research are largely shared. Most modern AI systems are derivatives of global research, not isolated national artifacts.
The real problem isn’t influence. It’s verification. We can’t easily verify training data. We can’t fully audit model alignment. That’s true of Chinese models, American models, and open models alike. The issue isn’t foreign influence specifically. It’s opacity across every AI system we use.
And underneath all of that, compute is the actual power layer. Narratives are loud, but the math is simple: compute equals capability. Training frontier models requires massive GPU clusters. Inference at scale requires sustained infrastructure. Access determines who builds, who deploys, and who controls cost and speed. This is exactly where Canada, and a lot of other countries, are falling behind.
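The "compute equals capability" point can be made concrete with the widely used rule of thumb that training a dense transformer costs roughly 6 × parameters × tokens in FLOPs. The sketch below is illustrative only: the model size, token count, per-GPU throughput, and utilization figures are assumptions I've chosen for scale, not data about any real training run.

```python
# Back-of-envelope training-compute estimate using the common
# ~6 * N * D FLOPs approximation for dense transformers.
# All hardware and model numbers here are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def gpu_days(total_flops: float, flops_per_gpu: float, utilization: float) -> float:
    """Wall-clock GPU-days at a given sustained hardware utilization."""
    effective_rate = flops_per_gpu * utilization  # sustained FLOP/s per GPU
    return total_flops / effective_rate / 86_400  # 86,400 seconds per day

# Hypothetical frontier-scale run: 70B parameters trained on 15T tokens,
# on GPUs assumed to peak near 1 PFLOP/s with 40% sustained utilization.
total = training_flops(70e9, 15e12)   # ~6.3e24 FLOPs
days = gpu_days(total, 1e15, 0.40)    # ~180,000 GPU-days

print(f"total FLOPs: {total:.2e}")
print(f"GPU-days:    {days:,.0f}")
```

Even with generous assumptions, the arithmetic lands in the hundreds of thousands of GPU-days, which is why access to large clusters, not narrative, is the binding constraint on who can build at the frontier.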
Canada: The Third Player Nobody Talks About
Canada has strong foundations. World-class research at Vector Institute and Mila. A real talent pipeline. Early leadership in AI theory going back decades.
What we don’t have is scaled sovereign compute, competitive infrastructure access, or a clear national positioning. Recent federal investments of around $890M signal intent, but the landscape is still fragmented and the strategy is still being written in real time. If Canada doesn’t move faster, we’ll spend the next decade renting capacity from the same two countries that are busy framing each other as threats.
The Bigger Shift: From AI Technology to AI Influence
What’s changing isn’t just who builds AI. It’s who shapes the story about AI.
We’re watching AI development, political funding, social media distribution, and national strategy converge into something new. Call it AI narrative infrastructure. It may end up being as important as compute itself.
The Real Risk, and the Opportunity
The risk isn’t that China influences AI, or that the U.S. influences AI. The risk is that all AI narratives become engineered, and the engineered ones become indistinguishable from reality.
The opportunity is just as clear. Countries that invest in transparent AI systems, verifiable model behavior, sovereign compute, and trusted data pipelines are going to define the next era of digital trust. The ones that don’t are going to inherit somebody else’s story.
Final Thought
We’re not just building AI systems anymore. We’re building narratives, beliefs, and perceptions of reality.
The most powerful AI system in five years probably won’t be the one with the best benchmarks. It’ll be the one that controls the story.
Frequently Asked Questions
What is the AI Cold War?
The AI Cold War is shorthand for the strategic competition between the United States and China over artificial intelligence capability, including frontier models, GPU compute, chip manufacturing, and the public narrative around who is winning. Unlike the original Cold War, this competition is fought through compute clusters, research output, and influence operations rather than military hardware.
Are Chinese AI models actually biased toward Chinese interests?
Not in any consistent or obvious way for general tasks. Chinese-origin models behave a lot like Western ones because the underlying research and training data are largely shared globally. The real concern isn’t foreign influence in a specific model. It’s that no AI system, regardless of origin, can be fully verified or audited end to end.
Why does compute matter so much in the AI race?
Training frontier AI models requires massive GPU clusters that few organizations can afford. Inference at scale requires sustained infrastructure access. Whoever controls compute controls who builds, who deploys, and who can afford to participate. Narratives are loud, but compute is the underlying power layer.
How does Canada fit into the global AI race?
Canada has world-class research at Vector Institute and Mila, a strong talent pipeline, and historic leadership in AI theory. What it lacks is scaled sovereign compute, transparent infrastructure access, and a clear national positioning. Recent federal investments of around $890M are a start, but the strategy is still fragmented.
What’s the real risk of AI narrative warfare?
The risk isn’t that one country influences AI more than another. It’s that all AI narratives become engineered, and the engineered ones become indistinguishable from reality. When dark-money campaigns can pay influencers $5,000 per video to shape what audiences believe about AI, the line between belief and advertising disappears.
Sources
- WIRED: A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat
- Axios: U.S. must “absorb a lot of risk” in AI race, says Palantir’s Karp
- Business Insider: An OpenAI investor on TikTok as influence infrastructure
- arXiv: Foreign influencer operations on TikTok and U.S. perceptions of China
- arXiv: Has China caught up to the US in AI research?
- Wikipedia: Artificial Intelligence Cold War