Choosing an AI model in 2026 is no longer just a technical decision. It’s a governance decision. The ownership structures, safety philosophies, political exposure, and moderation standards of AI providers are now material considerations, especially for organizations in finance, healthcare, education, and public service. This post makes the case that AI model selection deserves board-level scrutiny and offers a practical framework for evaluating AI vendor governance.

Most organizations are choosing AI models the way they once chose cloud providers:
- Who’s fastest?
- Who’s cheapest?
- Who benchmarks highest?
That framing is being challenged by current events.
Choosing an AI model in 2026 is not just a technical decision. It is quickly becoming an ethical one.
What You’re Actually Choosing When You Choose an AI Model
Over the past year, the governance posture of AI companies has moved from background signal to front-page reality.
Anthropic built its brand on Constitutional AI and safety guardrails. Yet its reported internal tensions over U.S. Department of Defense relationships remind us that even safety-first companies must navigate ethical pressure.
OpenAI has signalled increasing willingness to work with defense and national security agencies. For some, that reflects maturity and real-world impact. For others, it raises hard questions about neutrality, mission scope, and long-term alignment.
Meanwhile, xAI’s Grok model has faced scrutiny over controversial image generation and moderation decisions, scrutiny sharpened by its tight coupling to Elon Musk’s ownership and its integration with X.
When governance, platform incentives, and AI infrastructure are intertwined, the product cannot be easily separated from its ecosystem.
None of this is outrage. It is awareness.
AI models are not neutral utilities. They reflect:
- Ownership priorities
- Capital pressure
- Political exposure
- Safety philosophy
- Moderation standards
- Corporate governance
When you choose a model, you are choosing those forces.
Why Benchmarks Aren’t Enough
Performance benchmarks are comforting. They feel objective.
- Model A reasons better.
- Model B is cheaper per million tokens.
- Model C has a larger context window.
But what happens when a safety policy shifts overnight?
When a government contract changes internal priorities?
When ownership changes?
When moderation guidelines evolve?
Most organizations don’t have answers to those questions. What they have is a service they are increasingly dependent on, and switching is expensive.
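One practical hedge is architectural: keep a thin abstraction layer between your application and any single provider, so that a governance-driven exit becomes a configuration change rather than a rewrite. Here is a minimal sketch in Python; ChatModel, the adapter classes, and get_model are illustrative names for the pattern, not any vendor’s actual SDK.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-agnostic interface. Application code depends on this,
    not on a vendor SDK, so a provider swap touches one adapter."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAAdapter(ChatModel):
    # Hypothetical adapter: in practice this would wrap one vendor's SDK.
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] response to: {prompt}"


class VendorBAdapter(ChatModel):
    # Hypothetical adapter for a second provider, kept warm as a fallback.
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] response to: {prompt}"


def get_model(provider: str) -> ChatModel:
    """Single switch point: changing providers is a config change,
    not a code rewrite scattered across the application."""
    adapters = {"vendor-a": VendorAAdapter, "vendor-b": VendorBAdapter}
    return adapters[provider]()


if __name__ == "__main__":
    model = get_model("vendor-a")
    print(model.complete("Summarize our data retention policy."))
```

The design choice is deliberately boring: vendor specifics live in one adapter each, and everything else depends only on the interface, which is what keeps the exit cheap.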
When “Best Performing” Doesn’t Mean “Best Aligned”
If you operate in finance, healthcare, education, or public service, AI outputs influence real lives. Loan approvals. Medical summaries. Policy drafts. Hiring recommendations.
In those contexts, “best performing” may not mean “best aligned.”
- Sometimes predictability matters more than brilliance.
- Sometimes auditability matters more than creativity.
- Sometimes neutrality matters more than speed.
- And sometimes a slightly less powerful model with clearer governance is the wiser choice.
When I was evaluating AI tools for YMCA Canada, a federation of 37 associations serving communities across the country, benchmarks were only part of the conversation. We were asking: what happens to our data? What are the provider’s content moderation standards when our staff use this with vulnerable populations? What’s the governance structure behind the model, and can we defend that choice to our board and our communities? Those questions shaped our initial AI policy and our decision to pilot with Microsoft Copilot and ChatGPT. The technical evaluation was straightforward. The governance evaluation took far longer, and mattered far more.
Technology Selection Is Now Values Selection
For decades, we could separate infrastructure from ideology. A database engine did not have a worldview.
Foundation models do.
Their guardrails, refusals, tone, and training assumptions are designed. When leaders say, “We’re just choosing the best technology,” they are missing the point.
You are selecting:
- A governance structure
- A capital strategy
- A philosophy of safety
- A risk framework
These deserve board-level scrutiny.
How to Evaluate AI Model Governance
If your organization is selecting or reviewing an AI model, here are the governance questions that should sit alongside the technical evaluation:
- What is the provider’s published safety and moderation policy? Is it documented, versioned, and accessible? How often has it changed in the last 12 months?
- How does ownership structure affect model behaviour? Is the provider publicly traded, venture-backed, or controlled by a single individual? Each creates different incentive pressures on content moderation and safety decisions.
- What is the provider’s track record on policy stability? Have there been sudden changes to content policies, safety guardrails, or terms of service? Stability signals maturity.
- Where does your data go and who can access it? Understand the data retention, training, and access policies. For regulated industries, this is non-negotiable.
- Can you defend this choice to your board and your stakeholders? If a journalist or regulator asked why you chose this specific AI provider, would your answer hold up beyond “it scored highest on benchmarks”?
No vendor will score perfectly on all of these. The point isn’t to find a flawless provider. It’s to make the governance decision consciously rather than by default.
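One way to make that conscious decision auditable is to turn the questions above into a simple weighted scorecard that a review committee fills in per vendor. A minimal sketch follows; the criterion names, weights, and 1-to-5 ratings are all placeholders your own governance review would define, not a published standard.

```python
# Illustrative governance scorecard: the criteria mirror the questions
# above; the weights and ratings below are placeholder assumptions.
GOVERNANCE_CRITERIA = {
    "published_safety_policy": 0.25,
    "ownership_incentives":    0.20,
    "policy_stability":        0.20,
    "data_handling":           0.25,
    "defensibility":           0.10,
}


def governance_score(ratings: dict[str, int]) -> float:
    """Weighted average of 1-to-5 ratings across all criteria."""
    assert set(ratings) == set(GOVERNANCE_CRITERIA), "rate every criterion"
    return sum(GOVERNANCE_CRITERIA[c] * ratings[c] for c in ratings)


# Example: one vendor as rated by a hypothetical review committee.
vendor = {
    "published_safety_policy": 4,
    "ownership_incentives": 3,
    "policy_stability": 2,
    "data_handling": 5,
    "defensibility": 4,
}
print(f"Governance score: {governance_score(vendor):.2f} / 5")
```

The number itself matters less than the paper trail: a scorecard forces the committee to record why a vendor was chosen, which is exactly what a board or regulator will ask for later.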
Being Intentional
At the same time, we are seeing the emergence of “AI for Good” organizations: companies explicitly building AI to support social impact, climate action, public service, and responsible development. Initiatives like Change Agent AI and similar mission-driven ventures demonstrate that AI can be aligned not only around profit or power, but around measurable societal benefit.
Selecting an AI model, in the end, is about being intentional.
So ask yourself: are you prepared to defend your AI choice?