B.C.'s Budget 2026 tells the story plainly: the province's deficit stands at $9.6 billion for 2025-26, projected to climb to a record $13.3 billion the following year. The federal government is navigating its own reckoning – Ottawa's 2025 budget projects a $78.3 billion deficit for 2025-26, alongside plans to reduce the federal public service by roughly 40,000 positions and achieve $60 billion in savings over five years. Two orders of government, two very different fiscal scales, one shared message: the era of expansive public sector growth is over.
The conversation in government executive offices has shifted. It's no longer "you have transformation money, now go spend it." It's "you have the same mandate to serve citizens, with fewer resources to do it." That pressure changes everything – including how we should think about artificial intelligence in government services.
At the same time, a regulatory clock is ticking. The Treasury Board of Canada Secretariat's Directive on Automated Decision-Making (ADM) sets June 24, 2026 as the compliance deadline for existing automated decision systems developed or procured before June 24, 2025.
The directive is clear: federal systems must use artificial intelligence in a manner compatible with core principles of administrative law – transparency, accountability, legality, and procedural fairness – to ensure that decision-making processes are fair and unbiased. Originally introduced in 2019 and significantly updated since, the ADM Directive is a mandatory policy instrument for federal government institutions, requiring those standards whenever automated systems influence decisions about citizens – from benefit eligibility recommendations to application triage tools to proactive outreach algorithms. And while the Directive does not formally bind provincial or municipal governments, it offers lessons for every level of government on how to use AI responsibly. Its framework is directly relevant to any team building or procuring automated systems today.
The June 2026 deadline isn't a formality. Before deploying any new automated system, federal departments must complete and publish an Algorithmic Impact Assessment – a structured process that examines data quality, bias, human rights implications, and how much human oversight the system warrants. Notably, the impact level isn't handed down from above: each department scores its own system through the AIA questionnaire, and that score determines whether it lands at Level I (low impact) through Level IV (very high impact). The stakes of that self-assessment are real. Deputy Heads must personally authorize Level II and III systems before they go live; Level IV requires approval from the Treasury Board itself.
The practical difference between levels is significant. A Level I system triaging routine documents operates under far lighter obligations than a Level III or IV system recommending benefit decisions for vulnerable populations – and for good reason: direct human intervention only becomes mandatory at Level III and above. That underlying logic – greater potential harm demands greater scrutiny – isn't unique to Ottawa. It's the same principle embedded in the EU AI Act, the NIST AI Risk Management Framework, and ISO 42001. Any government team, at any level, designing responsible AI governance is likely to arrive at a structurally similar approach.
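For teams that like to see the logic, here is a minimal sketch of what score-driven, risk-tiered oversight looks like in practice. The percentage bands and obligation summaries below are our own illustrative assumptions, not the Directive's actual scoring tables – the authoritative version lives in the published AIA tool:

```python
# Illustrative sketch of risk-tiered oversight in the spirit of the AIA.
# The percentage bands and obligation summaries are ASSUMPTIONS for
# illustration; the authoritative scoring lives in the published AIA tool.

from dataclasses import dataclass


@dataclass
class Assessment:
    raw_score: int   # points accumulated across the AIA questionnaire
    max_score: int   # maximum possible points for the questionnaire

    @property
    def impact_level(self) -> int:
        pct = self.raw_score / self.max_score
        if pct <= 0.25:
            return 1  # Level I: little to no impact
        if pct <= 0.50:
            return 2  # Level II: moderate impact
        if pct <= 0.75:
            return 3  # Level III: high impact
        return 4      # Level IV: very high impact


OVERSIGHT = {
    1: "lighter obligations; direct human intervention not mandatory",
    2: "Deputy Head authorization before launch",
    3: "Deputy Head authorization; direct human intervention in decisions",
    4: "Treasury Board approval; direct human intervention in decisions",
}

assessment = Assessment(raw_score=38, max_score=100)
level = assessment.impact_level
print(f"Impact level {level}: {OVERSIGHT[level]}")
# Impact level 2: Deputy Head authorization before launch
```

The point of the sketch is the shape, not the numbers: the same self-assessed score drives both the level and the oversight that attaches to it, which is why the honesty of the assessment matters so much.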
No policy instrument is perfect. The Directive's own periodic reviews have flagged real limitations: vague standards around what constitutes a "meaningful explanation" to citizens, the absence of independent enforcement, and the fact that impact levels are self-assessed by departments rather than independently verified. The Canada Revenue Agency – one of the largest automated decision-making bodies in the country – is excluded from the Directive's scope entirely. These are genuine gaps, and a sophisticated public sector team should go in with eyes open. But the underlying framework the Directive establishes – assess before you deploy, match scrutiny to risk, keep a human accountable – reflects sound governance principles that hold regardless of the policy's imperfections.
If your team has been quietly piloting a generative AI tool to help process grant applications or screen documents, the time to formalize that process is now.
The AI conversation in government too often gets pulled toward the shiny, headline-grabbing end of the spectrum: chatbots, large language models, and digital assistants. These tools have their place, but they are not where government AI delivers its most reliable early wins. The lowest-risk, highest-value starting point is high-volume, low-complexity task automation – the kind that reduces administrative burden without ever touching a high-stakes decision.
Consider two examples we have seen in existing government applications:
1) Grant application triage

Program offices regularly receive hundreds of applications during intake windows. Most of the first-pass review involves checking for completeness – are required fields filled in, are mandatory documents attached, does the application meet basic eligibility criteria? An automated triage system can flag incomplete submissions for follow-up before staff invest hours in a full review (a minimal sketch of this logic follows below).
This is measurable, explainable, and low-risk. It saves time, and citizens benefit from faster processing. A system like this sits comfortably at a lower AIA impact level, and with the right documentation and human oversight, it can be compliant and operational well before the deadline.
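To make that concrete, here is a minimal sketch of the first-pass completeness check. The field names, document list, and funding cap are hypothetical – a real intake system would pull its rules from program documentation rather than hard-coding them – but the shape of the logic really is this simple:

```python
# Minimal sketch of rules-based, first-pass triage for grant intake.
# Field names, document names, and the funding cap are hypothetical.
# Every flag carries a plain-language reason so staff can explain the
# outcome to an applicant; nothing here decides, it only sorts.

REQUIRED_FIELDS = ["applicant_name", "organization", "requested_amount"]
REQUIRED_DOCUMENTS = ["budget", "project_plan"]
PROGRAM_MAXIMUM = 250_000  # illustrative eligibility ceiling


def triage(application: dict) -> list[str]:
    """Return reasons this application needs follow-up before full review.

    An empty list means the application is complete enough to route
    to a human reviewer for substantive assessment.
    """
    reasons = []
    for field_name in REQUIRED_FIELDS:
        if not application.get(field_name):
            reasons.append(f"Missing required field: {field_name}")
    attached = set(application.get("documents", []))
    for doc in REQUIRED_DOCUMENTS:
        if doc not in attached:
            reasons.append(f"Missing mandatory document: {doc}")
    amount = application.get("requested_amount") or 0
    if amount > PROGRAM_MAXIMUM:
        reasons.append(f"Requested amount exceeds program maximum of {PROGRAM_MAXIMUM}")
    return reasons


flags = triage({"applicant_name": "Example Org", "documents": ["budget"]})
for reason in flags:
    print(reason)
```

Every flag is a sentence a reviewer can read back to an applicant – which is what a "meaningful explanation" looks like at this impact level.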
2) Proactive service outreach

Thousands of eligible citizens never access programs because they don't know they qualify. A rules-based system that identifies potential eligibility based on existing data and triggers an outreach communication is not a "black box" – it's applied logic, documented and auditable (a short sketch follows below). It also happens to be deeply aligned with the public service value of serving all Canadians equitably. Service Canada already does a version of this at scale: since 2013, it has used CRA tax data to automatically identify seniors eligible for OAS and GIS, sending a notification letter the month after they turn 64 – no application required. That program is precisely the kind of responsible, rules-based outreach automation that demonstrates what good looks like.
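For illustration, here is a minimal sketch of that pattern, loosely modeled on the OAS notification flow described above. The record fields are hypothetical, and a print statement stands in for the real correspondence system:

```python
# Minimal sketch of rules-based proactive outreach, loosely modeled on
# the OAS notification pattern described above. Record fields and the
# print() standing in for the correspondence system are hypothetical.

from datetime import date


def in_month_after_64th_birthday(birth_date: date, today: date) -> bool:
    """True during the calendar month after the citizen's 64th birthday month."""
    prev_month = today.month - 1 or 12
    prev_year = today.year - (1 if today.month == 1 else 0)
    return (birth_date.year + 64, birth_date.month) == (prev_year, prev_month)


records = [
    {"name": "A. Tremblay", "birth_date": date(1962, 2, 14), "enrolled": False},
    {"name": "J. Singh",    "birth_date": date(1970, 6, 1),  "enrolled": False},
]

today = date(2026, 3, 15)
for person in records:
    if not person["enrolled"] and in_month_after_64th_birthday(person["birth_date"], today):
        # Auditable, rules-based trigger: the condition above is the whole "model".
        print(f"Queue eligibility notification letter for {person['name']}")
```

There is no statistical model here at all – just a documented rule applied to existing data, which is exactly why it is so easy to audit and explain.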
These are not moonshots, but the kinds of projects that demonstrate immediate productivity gains and build the internal confidence your organization needs before tackling more complex AI implementations.
There's a temptation, especially under today's fiscal pressure, to view compliance requirements as obstacles. The Algorithmic Impact Assessment is worth reframing. It is not a bureaucratic tax on innovation, but a structured thinking exercise that forces your team to ask the questions you should be asking anyway.
Does the data we're using reflect the population we're serving? What happens when the system gets it wrong? Who is the accountable human when a citizen challenges a decision? What does "explainability" look like for the person on the other side of this recommendation?
The human-in-the-loop principle is central to the ADM Directive at higher impact levels, and it reflects a broader value of Canadian administrative law: decisions affecting citizens require transparency, accountability, and the ability to challenge an outcome. Embraced in the right way, that principle can become a cornerstone of public trust. In our experience, the teams that integrate human review checkpoints into their AI workflows early in the design process end up with better systems – not because they are slower, but because they are designed to catch edge cases, surface anomalies, and maintain accountability through the full decision lifecycle.
A senior program officer who understands why the system flagged an application is far more effective than one who is handed an output they don't understand and can't explain to an applicant.
This is what Button means when we talk about AI for explainability rather than AI as a black box. The goal is not to remove humans from the loop – it is to give them better information, faster, so that their judgment carries more weight.
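One way to build that in from the start is structural: the system is only ever allowed to produce a recommendation with plain-language reasons attached, and a named human records the final decision. The sketch below is illustrative – the confidence floor, field names, and roles are our assumptions, not a pattern prescribed by the Directive:

```python
# Minimal sketch of a human-in-the-loop checkpoint. The system can only
# RECOMMEND; a named human records the final decision. The confidence
# floor, field names, and roles are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Recommendation:
    case_id: str
    proposed_outcome: str   # e.g. "follow_up", "route_to_full_review"
    reasons: list[str]      # plain-language reasons shown to the reviewer
    confidence: float       # rules-engine or model confidence, 0.0 to 1.0


@dataclass
class Decision:
    recommendation: Recommendation
    reviewer: str           # the accountable human, always recorded
    final_outcome: str
    notes: str = ""


CONFIDENCE_FLOOR = 0.8  # illustrative: below this, flag for closer scrutiny


def present_for_review(rec: Recommendation) -> str:
    """Format a recommendation so the reviewer sees the 'why', not just the 'what'."""
    flag = "" if rec.confidence >= CONFIDENCE_FLOOR else " [LOW CONFIDENCE]"
    return f"{rec.case_id}{flag}: {rec.proposed_outcome} because " + "; ".join(rec.reasons)


rec = Recommendation(
    case_id="GA-2026-0142",
    proposed_outcome="follow_up",
    reasons=["Missing mandatory document: project_plan"],
    confidence=0.97,
)
print(present_for_review(rec))

# The system stops here. Only a person closes the loop:
decision = Decision(rec, reviewer="senior_program_officer",
                    final_outcome="follow_up",
                    notes="Confirmed; requested the document from the applicant.")
```

The design choice worth noticing is that the reviewer and their reasoning are captured in the same record as the recommendation – accountability is part of the data model, not an afterthought.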
Every public sector leader we speak with has a version of the same underlying worry: that a poorly implemented AI system will erode the public trust that legitimate digital services depend on. It's a valid concern, grounded in years of high-profile failures in public sector technology.
The answer is not to avoid AI, but to build it carefully, document it thoroughly, and be transparent about how it works. Good policy around AI, such as the ADM Directive, is in many ways a trust infrastructure. When citizens know that a system has been assessed for fairness, that a human is accountable for the outcome, and that they have a right to an explanation, they are more likely to engage with digital services rather than resist them. That trust translates directly into program uptake, service efficiency, and the kinds of citizen outcomes that justify public investment even in a tight fiscal environment.
The compliance deadline gives teams a reason to pause, document their current automated systems, assess their risk levels, and make intentional choices about what to build next – and what not to build yet. In a fiscal environment where every dollar needs to demonstrate value, that kind of disciplined focus is an asset.
At Button, we help government teams work through Algorithmic Impact Assessments, design human-in-the-loop workflows, and identify the high-volume, low-risk automation opportunities that deliver real productivity gains without requiring a leap of faith. We have done this work on systems that process thousands of citizen interactions – and the lesson each time is the same: responsible design and smart design are the same thing.
The June 2026 deadline is not a finish line. It is a starting point for doing AI in a way that Canadian public servants and the citizens they serve can actually trust.

