The laws and regulations we rely on are often complex, written to be understood and applied by specialists rather than by the general public or by machines. To automate any of these processes, the rules need to be translated into a machine-consumable form, and sometimes that translation distorts the original intent.

But increasingly, governments are looking to develop the machine implementation of rules in conjunction with the original legal or regulatory writing. This puts coders and service designers together with policy people to create Rules As Code (RAC): rules that can be interpreted accurately by computers. RAC has a host of advantages. Laws can be implemented widely, transparently, and swiftly, and they can be tested against simulations.

In many ways this is a benefit for the public—not only are the systems more efficient, but rules designed for machines to follow often have a clarity and simplicity that make them easier for the average person to follow, too. 

But does RAC always serve the public interest? There are some thorny ethical issues in some areas of its application, and it’s crucial for legislators and service designers alike to think through how to navigate those in the most responsible and fair way.

How much human subjectivity do we want?

One core concern is that although subjective interpretations are varied and at times harmful (e.g. racial bias in hiring, the justice system, and law enforcement), they also allow for responses to a wide variety of cases. One person can encounter another, gain an understanding of their circumstances, and implement the rules in a way that best serves the spirit of the rule, is most fitting to the person, or satisfies both. For RAC to have this same flexibility, it must be able to respond not only to a wide variety of situations but also to never-before-seen situations. While the public will react when an authority stretches the rules vindictively, people also cry out when rigid rules seem to get in the way of compassion (or even just practicality): from mandatory minimums for minor transgressions to store or airline policies that end up restricting mobility aids. There is a concern that while RAC can gain trust by reducing the former, it could lose trust by increasing the latter. And if RAC is equipped to respond to new situations, at what point is a computer process being given too much authority to make decisions for people?

As The Conversation (a publication covering academic and social issues) discussed in 2020, there is the question of who is accountable to the public. Is it a key part of people’s rights that a human (a judge, jury, case worker, or other official figure) can hear them out and interpret how to apply the rules to them, even if interpretations can differ wildly from one judge to another, or from the intent of the lawmaker? And if RAC is followed in a way that does make a mistake, or that seems to depart from the intent of the law, who is accountable to the people harmed by that?

There are of course ways to address these concerns about impersonal and inflexible processes by building in “off-ramps” whereby certain situations trigger a case being escalated to human decision-makers. But determining which cases those should be is complicated, and just as there will be people who feel a RAC-based outcome was unfair to them, there will be people who feel they were better served by an initial “objective” RAC outcome than by a human decision that overrode it.

RAC is a tool and an approach—not some sort of AI takeover

Something like RAC determining guilt in a trial is clearly fraught, and not likely any time soon. But the dystopian image of a machine deciding someone’s fate perhaps obscures how much the intent of RAC is simply to carry out the if-this-action-then-that-response logic that laws already set out.

A team working with Australia’s New South Wales government on an RAC project wrote about some of these perceptions, saying that RAC is mostly suitable for questions that are already fairly yes-or-no or if-then. The team also clarified that RAC is not a computer going off on its own interpreting legislation, but an elaborate process of many people working together to think through the legislation and anticipate its many applications.
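To make the “yes-or-no or if-then” point concrete, here is a minimal sketch of what a single rule can look like once it has been translated into code. It is purely illustrative: the rebate rule, the income cap, and the field names are invented for the example, not drawn from any real legislation.

```python
# Illustrative only: a hypothetical rebate rule, not any real statute.
from dataclasses import dataclass


@dataclass
class Household:
    annual_income: float
    is_primary_residence: bool


def rebate_applies(household: Household, income_cap: float = 90_000) -> bool:
    """A yes-or-no rule: eligible if income is at or under the cap
    and the property is the applicant's primary residence."""
    return household.annual_income <= income_cap and household.is_primary_residence


print(rebate_applies(Household(annual_income=72_000, is_primary_residence=True)))   # True
print(rebate_applies(Household(annual_income=120_000, is_primary_residence=True)))  # False
```

The work of RAC is less in writing a function like this than in the collective effort to agree on what the legislation means before it is written down this precisely.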

In many ways RAC is a reflection of more human thought being put into the rules, not less. It is mostly a particular way for policy developers to work, and to then have their work carried out.

Many of the concerns about RAC are tied mostly to the justice and regulatory systems, where particular decisions are being made that greatly affect people’s lives and liberty. But in its more common applications RAC is being used not to impose the rules on people but to streamline people’s engagement with rules and services. 

Where is RAC most applicable and beneficial?

In practice, RAC can be used simply to improve processes and make life easier for people applying for things such as funding, tax, and grant programs. Those are the areas where RAC is most effective: areas in which dense and extensive policy around eligibility and categorization can be turned into an interface that people can easily understand and navigate without combing through a ton of confusing and often irrelevant information.

In everyday life, there are many cases where a lack of clarity frustrates and disadvantages people, turning them away from digital services and getting in the way of them accessing things they need and deserve. Ethical application of RAC considers not only how to apply RAC most fairly and beneficially, but also where RAC can be most fair and beneficial.

The New South Wales team explains how their work illustrates the value of RAC. They were tasked with making a complex system of incentives for energy reduction navigable for the people and businesses it applies to. By mapping out the vast web of which user behaviours lead to which incentives, they were able to create an interface where people can plug in their own situation, or different options they’re considering, and get a quick and clear readout of what they’ll gain. The policy and design team itself can also test out different applications before and during their rollout, and can analyze the information they eventually get from users.
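As a rough illustration of that kind of “plug in your situation, get a readout” interface, here is a toy sketch. The upgrade names and dollar amounts are invented for the example; they are not the actual New South Wales incentives.

```python
# Illustrative only: invented upgrade names and dollar amounts,
# not the actual NSW incentive scheme.
INCENTIVES = {
    "led_lighting_upgrade": 250,
    "heat_pump_hot_water": 1_000,
    "rooftop_solar": 1_800,
}


def estimate_incentives(planned_upgrades: list[str]) -> dict[str, int]:
    """Return a quick readout of what each planned upgrade earns, plus a total."""
    breakdown = {u: INCENTIVES[u] for u in planned_upgrades if u in INCENTIVES}
    breakdown["total"] = sum(breakdown.values())
    return breakdown


# A user comparing two options they are considering:
print(estimate_incentives(["led_lighting_upgrade", "rooftop_solar"]))
print(estimate_incentives(["heat_pump_hot_water"]))
```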

Again, RAC is mostly a way for policy teams to work. And just as input-output simulations can help the average person calculate how much of a tax break they’ll get for an energy-efficient reno, these simulations can help policy writers game out all the different things they need to consider before establishing the policy. As well, RAC results in a living and adaptable version of the policies. That means that when a loophole or flaw is found, or there is simply an update, it is also easy to make adjustments that take effect widely and immediately.
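For instance, once a rule exists as code, a policy team can run it across thousands of synthetic cases and see immediately how moving a single parameter shifts outcomes. The sketch below uses an invented income-cap rule and made-up case data purely to show the mechanic.

```python
# Illustrative only: an invented income-cap rule run over made-up cases,
# to show how coded rules support quick what-if testing by policy teams.
import random

random.seed(0)


def rebate_applies(annual_income: float, income_cap: float) -> bool:
    return annual_income <= income_cap


# Synthetic household incomes standing in for real case data.
synthetic_incomes = [random.uniform(20_000, 150_000) for _ in range(10_000)]

# Re-run the same coded rule under three candidate income caps.
for income_cap in (80_000, 90_000, 100_000):
    eligible = sum(rebate_applies(income, income_cap) for income in synthetic_incomes)
    print(f"cap {income_cap:,}: {eligible / len(synthetic_incomes):.0%} of cases eligible")
```

The same mechanism that lets policy writers test candidate parameters before rollout is what makes post-rollout adjustments quick to apply everywhere at once.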

RAC still demands serious attention to ethics

Although RAC as an approach is, in its most frequent applications, not as ethically fraught as it might seem from the outside, that does not mean we can relax our ethical scrutiny. Rather, it is a process that demands cross-functional teams whose members all bring their best and are willing to engage rigorously with the issues of how to apply policy and what to do when the simulation and testing process flags loopholes and problems in new, or long-standing, laws.

There are also still all the ethical considerations that come with designing generally: Is this accessible and offered in enough languages? Are users being consulted regularly? Are the most affected communities being considered and listened to? Is trust being built with them so that new tools are not only easier to use in theory but welcoming to use in practice? What considerations are being given to who will help implement these policies and programs day-to-day? Are they being informed and trained in RAC so they can understand and work with the new tools?

There also remain real RAC-specific ethical issues to work through. When Western law and policy interact with Indigenous law and protocol, how does RAC respectfully account for that different system? And while RAC can be less complicated to incorporate into service delivery, we still have to cross the bridge of applying it to regulation and compliance. (The presentation “Machines Are Users Too” by Pia Andrews, a prominent RAC advocate who works in Australian digital government, has a useful breakdown of what RAC for services requires versus RAC for regulation and compliance.)

But what might be most productive about RAC is that its iterative approach of defining, testing, and refining often forces us to confront these key considerations and flaws early on, and together, rather than years down the line in different officials’ subjective responses to particular cases.

Image Credit: Alfonso Estevez / Midjourney
