
Major AI companies are now consulting religious and spiritual leaders to help shape AI ethics and governance through initiatives like the Faith-AI Covenant. The shift signals an admission the industry has dodged for years: intelligence is not the same thing as wisdom, no model is truly neutral, and once AI starts shaping human judgment at scale it inherits the obligations societies built religious, educational, and civic institutions to manage. Canada, with its long pluralistic tradition, has a real opening to contribute the human-centred governance frameworks the rest of the field is missing.
A recent Associated Press article highlighted something I did not expect to see this year: major AI companies are sitting down with religious and spiritual leaders to help shape AI ethics and governance. The initiative, called the “Faith-AI Covenant,” brings together organizations like OpenAI and Anthropic with leaders from Christian, Jewish, Sikh, Hindu, Mormon, Baha’i, and other traditions to talk about the moral direction of AI development (see the Associated Press article on the Faith-AI Covenant).
At first glance, this looks strange. Silicon Valley has spent decades branding itself as a rational, engineering-driven culture obsessed with optimization, data, and scale. But as AI systems get more powerful and more deeply embedded in daily life, the industry is starting to admit something it has avoided for a long time:
Intelligence is not the same thing as wisdom.
And more uncomfortably, technology cannot answer moral questions on its own.
The Shift From Capability to Values
For years, the AI conversation focused almost entirely on capability: model benchmarks, scaling laws, inference costs, safety guardrails, and hallucination reduction. Those things still matter a lot.
But the deeper questions are getting harder to dodge. What values should AI systems reflect? Who gets to decide those values? Can there ever be a culturally neutral AI? And what happens when AI becomes part of education, healthcare, legal systems, emotional support, and even spiritual life itself?
These are not engineering questions anymore. They are human questions, and engineers alone are not equipped to answer them.
That is why companies like Anthropic have reportedly hosted theologians and religious thinkers to talk through AI morality, grief counselling, and human dignity. It is also why the Vatican, Islamic scholars, Jewish organizations, and other faith communities have started publishing their own frameworks for AI ethics and human-centred governance.
The industry is waking up to the fact that AI is not just software infrastructure. It is becoming social infrastructure.
Lessons From Working at the YMCA
Part of the reason this development resonates with me is my years working inside the YMCA movement in Canada.
The YMCA operates in communities filled with people from radically different backgrounds. Different religions, different cultures, different politics, different economic realities, different identities, different life stories. And it still has to function as a community.
That experience teaches you something fast: you do not build strong systems by forcing everyone into identical beliefs. You build them around shared human values. Respect. Safety. Inclusion. Dignity. Community. Trust.
The YMCA was never successful because everyone agreed on everything. It worked because the organization understood that communities are strongest when people can hold different views and still work toward common goals.
AI is entering that same phase right now.
The internet connected people. Social media amplified people. AI is doing something different. It interprets, guides, filters, and shapes information itself. That is a much heavier responsibility, and the industry has not fully reckoned with it.
AI Will Reflect Human Values, Whether We Plan For It Or Not
One of the more important shifts in alignment research is the growing recognition that no model is truly neutral.
Every system reflects training data, reinforcement choices, safety tuning, institutional assumptions, cultural norms, and economic incentives. Even what counts as “harm,” “truth,” “safety,” or “acceptable speech” is a value judgment dressed up as a technical decision.
That leads to a conclusion the industry has been slow to embrace: AI alignment will probably never converge into a single universal standard. We are far more likely to end up with national AI models, culturally aligned models, enterprise-aligned systems, faith-informed AI systems, and politically differentiated models.
In some ways, this is already happening. Pretending otherwise is wishful thinking.
AI as a New Layer of Human Infrastructure
What makes this moment unusual is that AI has moved well past being a productivity tool.
AI systems now act as educational tutors, emotional companions, legal assistants, health advisors, coding collaborators, spiritual discussion partners, and decision support systems. That means they are shaping worldview, behaviour, and trust at scale.
Societies have always built institutions to manage those responsibilities: schools, governments, legal systems, community organizations, and religious bodies. AI companies are discovering that once your product influences human judgment at scale, you inherit some of those same obligations whether you want them or not.
Regulation will help, but it will not be enough on its own.
The Canadian Opportunity
Canada has a real opening here, and we should take it.
This country has spent generations operating as a pluralistic society built around coexistence across many cultures, languages, and belief systems. That lived experience is directly relevant to AI governance.
Most of the global AI conversation is dominated by Silicon Valley accelerationism, geopolitical competition, model benchmarks, and the compute race. Those things matter. But Canada is well positioned to contribute something the rest of the field is missing: human-centred implementation frameworks that balance innovation with social cohesion and institutional trust.
That sounds abstract today. It will not stay abstract for long. As AI works its way deeper into daily life, the practical question stops being “how powerful is the model” and becomes “how well does society function around it.”
The Bigger Realization
What strikes me most about these recent developments is that the AI industry may be rediscovering something older civilizations figured out a long time ago.
Technology changes what humans can do.
Values determine what humans should do.
Those are not the same conversation, and treating them as if they were is how we end up with powerful systems pointed in the wrong direction.
Frequently Asked Questions
What is the Faith-AI Covenant?
The Faith-AI Covenant is a recent initiative bringing major AI companies, including OpenAI and Anthropic, together with religious and spiritual leaders from Christian, Jewish, Sikh, Hindu, Mormon, Baha’i, and other traditions to discuss the moral direction of AI development. It is one of the first formal attempts to put theologians and faith communities in the same room as the people building frontier AI systems.
Why are AI companies consulting religious leaders?
Because the hardest questions in AI are no longer engineering questions. What values should AI systems reflect, who decides them, and what happens when AI starts shaping education, healthcare, law, emotional support, and spiritual life are moral questions, not technical ones. Religious traditions have been working on questions of human dignity, ethics, and meaning for centuries, and AI companies are realizing that engineering culture alone cannot answer them.
Is AI alignment ever going to converge on a single standard?
Probably not. No model is truly neutral; every system reflects its training data, safety tuning, institutional assumptions, and cultural norms. The most likely outcome is a world of national AI models, culturally aligned models, enterprise-aligned systems, faith-informed AI, and politically differentiated models. In some ways that is already happening, and pretending otherwise is wishful thinking.
What does the YMCA have to do with AI?
The YMCA operates across radically different religions, cultures, politics, and identities and still has to function as a community. The lesson from that work is that strong systems are not built by forcing everyone into identical beliefs but by anchoring them in shared values like respect, safety, inclusion, dignity, community, and trust. AI is now entering the same phase, where it has to serve diverse human communities without flattening them.
Why is Canada well positioned to contribute to AI governance?
Canada has spent generations operating as a pluralistic society built around coexistence across cultures, languages, and belief systems. Most of the global AI conversation is dominated by Silicon Valley accelerationism, geopolitical competition, and the compute race. Canada can offer something that conversation is missing: human-centred implementation frameworks that balance innovation with social cohesion and institutional trust.
Why isn’t regulation enough on its own?
Regulation can set boundaries but cannot define the values inside them. Once an AI product influences human judgment at scale, it inherits obligations that societies have always managed through schools, governments, legal systems, community organizations, and religious institutions. Those are cultural and institutional layers, not legal ones, and they cannot be replaced by regulation alone.