AI Governance & Ethics
Colin Smillie developed and implemented an AI governance policy at YMCA Canada, a federated nonprofit with 37 associations and 24,000 employees, during the first wave of enterprise generative AI adoption. His framework focused on enabling access to AI as a disruptive technology while building on existing data governance policies, protecting confidential information, and reviewing internal access controls across platforms like the national intranet and learning management systems before AI tools could be deployed.
An AI governance framework is the set of policies, access controls, oversight structures, and review processes that allow organizations to adopt AI tools responsibly. It defines what AI can access, how staff can use it, what vendors are acceptable, and who is accountable for AI-related decisions. Effective governance enables adoption. It doesn’t block it.
AI governance isn’t about saying no to AI. It’s about saying yes with the right guardrails in place.
Most organizations either block AI entirely or adopt it with no policy at all. Both approaches fail. The first forfeits the competitive advantage of adoption; the second exposes the organization to risks it hasn’t even mapped yet. Effective AI governance sits in the middle, enabling adoption while protecting data, people, and institutional trust.
Published March 2026 | Last reviewed March 2026
Why Do Organizations Need an AI Governance Framework?
When ChatGPT and Microsoft Copilot started appearing inside organizations in 2023, most leadership teams faced the same question: do we allow this or block it?
The organizations that blocked it entirely watched their staff use personal accounts on personal devices anyway, with zero oversight and zero data protection. The organizations that allowed it without policy found staff pasting confidential data into public AI tools within days.
Neither outcome is acceptable. What’s needed is a governance framework that treats AI the same way mature organizations treat any disruptive technology: enable it deliberately, with clear policies, defined boundaries, and ongoing review.
AI governance is not a new discipline. It builds directly on the data governance, privacy, and information security policies that organizations already have. The challenge is extending those policies to cover a new category of tool that processes, summarizes, and generates content in ways that traditional software does not.
How YMCA Canada Built Its AI Governance Framework
At YMCA Canada, I led the development of the organization’s initial AI policy during the rollout of Microsoft Copilot and ChatGPT across the federation. The context made this more complex than a typical enterprise deployment: 37 autonomous associations, each with their own technology infrastructure, risk tolerance, and community context, serving some of Canada’s most vulnerable populations.
Our approach was built on a core principle: enable access to AI as a disruptive technology, but build on existing policies rather than starting from scratch. YMCA Canada already had data governance frameworks, privacy policies, and information classification standards. The AI policy extended these to cover generative AI specifically, rather than creating an entirely separate governance structure.
Protecting Confidential Data
The first priority was ensuring that AI tools could not access confidential data: member information, employee records, financial data, and information about vulnerable populations the YMCA serves. This required more than a usage policy. It required a systematic review of where confidential data lived across the organization’s internal platforms.
Before deploying Microsoft Copilot, we conducted a thorough review of internal access policies across the national intranet (intranet.ymca.ca) and learning management systems. The question wasn’t just “should staff use AI?” It was “if we enable an AI tool that can read internal documents, does our current access control model actually protect what it needs to protect?”
In many cases, internal platforms had access permissions that were sufficient for human users but would become problematic when an AI agent could search, summarize, and surface content across permission boundaries. We had to tighten access controls on several systems before AI tools could be safely deployed, not because the AI was doing anything wrong, but because existing access models weren’t designed for a tool that could aggregate information at that speed and scale.
Enabling YMCA Staff Effectively
The second priority was equally important: YMCA associations needed to enable AI support tools to help their staff work more effectively. In a federation of 24,000 employees, many of them frontline community workers, AI had genuine potential to reduce administrative burden, improve program delivery, and free up time for the human work that defines the YMCA’s mission.
Blocking AI entirely wasn’t just a competitive risk. It was a disservice to the staff who could benefit most. The governance framework had to balance protection with enablement, giving associations the confidence to adopt AI tools while maintaining clear boundaries around data sensitivity and acceptable use.
Preparing for the Future
We also prepared the National Data Portal to serve as a foundation for future AI projects. The principle was straightforward: before you can trust AI with your data, you need to know where your data is, how it’s classified, and who has access to it. The Data Portal gave the federation a centralized view of data assets, a prerequisite for any responsible AI deployment at scale.
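As an illustrative sketch only (not the Data Portal’s actual schema), the minimum a per-asset inventory record needs to capture before AI can be layered on top might look like this; every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry in a federation-wide data inventory. Illustrative fields only."""
    name: str
    system: str          # where the data lives, e.g. an LMS or the intranet
    classification: str  # existing label, e.g. "public" or "restricted"
    steward: str         # who is accountable for access decisions
    access_groups: list[str] = field(default_factory=list)

inventory = [
    DataAsset("Member records", "CRM", "restricted",
              "Director, Membership", ["Membership Services"]),
    DataAsset("Program guides", "Intranet", "public",
              "Communications", ["All Staff"]),
]

# Assets holding data about the people the organization serves get reviewed first.
review_first = [a.name for a in inventory if a.classification == "restricted"]
print(review_first)
```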
The AI policy itself was designed to evolve. Version one covered the immediate risks and enablement decisions. But we built it knowing that the technology, the regulatory landscape, and the organization’s comfort level would all change rapidly. The policy included a review cadence and clear ownership, so it wouldn’t become stale the way many technology policies do.
Why AI Governance Requires Board-Level Engagement
AI governance is not a technology team decision. It’s a board-level conversation, and it needs to be, because the risks and opportunities move faster than any annual review cycle can accommodate.
At YMCA Canada, board consultation wasn’t a formality at the end of the process. It was built into the process from the start. I brought AI governance to the National Board early, framing it not as a technology request but as a strategic risk and opportunity discussion. The board needed to understand what AI could do for the federation, what the risks were to the populations we served, and what governance structures would give them confidence that adoption was happening responsibly.
What made those conversations productive was that YMCA Canada’s board members brought perspectives from their own organizations. Many of them sat on boards or held leadership roles at other large Canadian organizations, including financial institutions, healthcare systems, and educational bodies, that were wrestling with the same questions. Those cross-sector insights were invaluable. A board member who had seen how their bank was approaching AI data governance could challenge our assumptions in ways that the internal technology team never would. Another who led a healthcare organization brought a lens on AI and vulnerable populations that directly shaped our policy.
Peer Consultation Across the Nonprofit Sector
Board engagement was one channel. Peer consultation was another. AI was moving so quickly in 2023-2024 that no single organization had all the answers. I actively consulted with technology leaders at other national nonprofits and federated organizations facing similar challenges, organizations with distributed governance, diverse populations, and the same tension between enabling innovation and protecting trust.
Those conversations shaped our approach in concrete ways. We learned from peers who had moved faster than us and hit unexpected problems: access control gaps, staff pushback, vendor lock-in concerns. We also shared what was working for us, particularly around building AI governance on top of existing data governance frameworks rather than starting from scratch. The nonprofit sector doesn’t compete on technology the way the private sector does, and that openness meant we could learn collectively at a pace none of us could have managed alone.
The Board’s Role Going Forward
The pace of AI development means that board oversight of AI governance can’t be a one-time approval. Boards need to establish an ongoing cadence for AI governance review, not micromanaging implementation, but ensuring that the organization’s AI posture evolves with the technology and regulatory landscape. The questions boards should be asking today are different from the ones they asked a year ago, and they’ll be different again in six months.
Organizations that treat AI governance as a standing board agenda item, rather than a policy they approved once and filed, will be the ones that maintain public trust as AI capability accelerates.
What Does a Practical AI Governance Framework Look Like?
Based on the YMCA Canada experience and ongoing advisory work, here is the framework I use when helping organizations build AI governance:
1. Build on What You Have
Don’t create AI governance from scratch. Extend your existing data governance, privacy, and information security policies. AI is a new tool category, not a new discipline. Your data classification, acceptable use, and privacy frameworks already cover most of the territory. They just need to be updated for how AI accesses and processes information.
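As a minimal sketch of what that extension can look like, assuming a four-level classification scheme (the labels and rules below are illustrative, not YMCA Canada’s actual policy), each existing level simply gains an AI handling rule:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"  # e.g. member records, vulnerable-population data

# Extend each existing classification level with an AI handling rule,
# rather than inventing a parallel AI-specific scheme.
AI_HANDLING_RULES = {
    Classification.PUBLIC: "Allowed in any sanctioned AI tool.",
    Classification.INTERNAL: "Enterprise AI tools with no-training guarantees only.",
    Classification.CONFIDENTIAL: "Never pasted into prompts; AI access only via audited connectors.",
    Classification.RESTRICTED: "Prohibited in all AI tools pending explicit review.",
}

def ai_rule_for(level: Classification) -> str:
    """Look up the AI handling rule attached to an existing classification level."""
    return AI_HANDLING_RULES[level]

print(ai_rule_for(Classification.CONFIDENTIAL))
```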
2. Audit Access Before Deployment
Before enabling any AI tool that can read internal data, review your access control model. Permissions that work fine for human users may not hold when an AI can search, aggregate, and summarize across your entire document base in seconds. Tighten access controls first, deploy AI second.
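A minimal sketch of that audit, assuming a CSV export of per-item permission grants (the column names and broad-access group names are hypothetical placeholders for whatever your platform exports):

```python
import csv

# Groups whose membership is effectively "everyone" -- hypothetical names;
# substitute the broad-access principals from your own directory.
BROAD_GROUPS = {"Everyone", "All Staff", "Authenticated Users"}

def flag_overbroad_items(permissions_csv: str) -> list[dict]:
    """Flag sensitive items an org-wide AI tool could read and surface.

    Expects one row per (item, principal) grant, with columns:
    item_path, classification, principal.
    """
    flagged = []
    with open(permissions_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            sensitive = row["classification"].lower() in {"confidential", "restricted"}
            broad = row["principal"] in BROAD_GROUPS
            if sensitive and broad:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_overbroad_items("permissions_export.csv"):
        print(f'{row["item_path"]}: {row["classification"]} readable by {row["principal"]}')
```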
3. Enable, Don’t Block
Staff will use AI regardless of your policy. If you block corporate access, they’ll use personal accounts with zero oversight. A governance framework should create a safe, sanctioned path for AI use, with clear boundaries around data sensitivity, acceptable use cases, and prohibited activities.
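One way the sanctioned path can enforce those boundaries is a lightweight pre-submission check in whatever gateway sits between staff and the AI tool. The sketch below is illustrative only; the patterns are placeholders that catch obvious identifiers, not a substitute for a real data loss prevention product:

```python
import re

# Illustrative patterns for obviously sensitive content. The member-ID
# format is hypothetical; a real deployment would use a proper DLP service.
SENSITIVE_PATTERNS = {
    "canadian_sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "member_id": re.compile(r"\bMBR-\d{6,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt, if any."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize the complaint from member MBR-123456."
violations = check_prompt(prompt)
if violations:
    print(f"Blocked: prompt appears to contain {', '.join(violations)}.")
else:
    print("Prompt passed the pre-submission check.")
```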
4. Evaluate Vendor Governance
AI model selection is now a governance decision. Evaluate providers on their safety policies, ownership structure, data handling practices, and policy stability, not just benchmarks and pricing. For a deeper dive on how to approach this evaluation, see Which AI? Where do Ethics fit?, which covers the ownership, capital, and safety philosophy dimensions of model selection.
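One way to keep that evaluation explicit rather than impressionistic is a simple weighted rubric. The dimensions below mirror the five discussed later in the FAQ; the weights and scores are placeholders to calibrate to your own risk profile:

```python
# Governance dimensions and weights are illustrative, not prescriptive.
# Scores run from 1 (weak) to 5 (strong).
DIMENSIONS = {
    "safety_policy": 0.25,
    "ownership_structure": 0.20,
    "data_handling": 0.25,
    "policy_stability": 0.15,
    "defensibility_to_board": 0.15,
}

def governance_score(scores: dict[str, int]) -> float:
    """Weighted average across governance dimensions, on a 1-5 scale."""
    return sum(DIMENSIONS[dim] * scores[dim] for dim in DIMENSIONS)

vendor_a = {"safety_policy": 4, "ownership_structure": 3, "data_handling": 5,
            "policy_stability": 2, "defensibility_to_board": 4}
print(f"Vendor A: {governance_score(vendor_a):.2f} / 5")
```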
5. Build Leadership AI Literacy
Boards and executives need to understand AI well enough to ask the right questions, not just approve budgets. Governance fails when decision-makers don’t understand what they’re governing. Invest in AI literacy at the leadership level alongside staff enablement.
6. Design for Evolution
Your first AI policy will be wrong about something. Build in a review cadence, clear ownership, and version control. The technology, regulatory landscape, and your organization’s maturity will all change faster than any static document can accommodate.
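A small sketch of what review cadence, ownership, and versioning can mean concretely: keep the policy’s metadata machine-readable alongside the document and check it programmatically. The field names and cadence below are assumptions, not a standard:

```python
from datetime import date, timedelta

# Hypothetical metadata block kept alongside the policy document itself.
POLICY_META = {
    "version": "1.2",
    "owner": "VP, Technology",
    "approved_by": "National Board",
    "last_reviewed": date(2026, 3, 1),
    "review_cadence_days": 180,  # semi-annual, matching the pace of AI change
}

def review_is_overdue(meta: dict, today: date | None = None) -> bool:
    """True if the policy has gone past its scheduled review date."""
    today = today or date.today()
    due = meta["last_reviewed"] + timedelta(days=meta["review_cadence_days"])
    return today > due

if review_is_overdue(POLICY_META):
    print(f"AI policy v{POLICY_META['version']} is overdue for review "
          f"by {POLICY_META['owner']}.")
```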
Who Needs AI Governance?
Any organization where AI outputs influence real decisions about people, money, services, or public trust requires a governance framework. In practice, that includes:
- Nonprofits and charities serving vulnerable populations, where AI processing of member or client data carries heightened ethical obligations
- Healthcare organizations where AI-generated summaries, recommendations, or triage decisions have direct patient impact
- Educational institutions deploying AI for student assessment, content generation, or administrative decisions
- Public sector and government agencies where AI-assisted policy drafting, service delivery, or decision-making must withstand public scrutiny
- Federated organizations where governance must accommodate autonomous units with different capacities, risk tolerances, and community contexts
If your board is asking questions about AI risk and you don’t have a governance framework to point to, that’s the gap this work addresses.
Frequently Asked Questions
What is the difference between AI governance and AI strategy?
AI strategy defines where and how an organization uses AI to create value. AI governance defines the policies, controls, and oversight structures that ensure AI is adopted responsibly. Strategy answers “what should we do with AI?” Governance answers “how do we do it safely?” You need both. Strategy without governance creates risk, and governance without strategy creates bureaucracy. For more on the strategy side, see our AI Strategy page.
How long does it take to build an AI governance framework?
A functional initial framework can be built in 4-8 weeks if the organization already has data governance and privacy policies to build on. The first version won’t be perfect, and it shouldn’t be. The goal is to establish clear boundaries and review processes quickly enough that staff have sanctioned access to AI tools, rather than working around official channels with personal accounts. Plan for iterative improvement, not a perfect launch.
Does AI governance apply to small nonprofits?
Yes, but the scale is different. A small nonprofit doesn’t need a 50-page policy document. It needs a clear acceptable use policy, a decision on which AI tools are sanctioned, guidance on what data can and cannot be shared with AI, and someone accountable for reviewing that guidance as the technology evolves. The principles are the same. The implementation is lighter.
What should boards ask about AI governance?
Five questions every board should be asking: Do we have a published AI usage policy? What data can AI tools access, and have we audited that access? How are we evaluating AI vendors beyond technical performance? What’s our review cadence for AI governance? And can we defend our AI posture to regulators, media, and the communities we serve?
How do you evaluate an AI vendor’s governance posture?
Look at five dimensions: their published safety and moderation policy (and how often it changes), ownership structure and how it affects moderation decisions, data retention and access policies, track record on policy stability, and whether you can defend the choice to your board and stakeholders. For a detailed framework on this, see Which AI? Where do Ethics fit?
