Rules-as-Code: Encoding Rules in a Government Context — Part 3

I am a developer fellow for Code for Canada, a non-profit organization that works with government agencies to deliver better digital services to the public. I am part of a team of three that is working with Employment and Social Development Canada (ESDC) for a 9-month fellowship. We have been working on a prototype for a tool called a Policy Difference Engine (PDE). The purpose of this tool is to measure how a change to a rule may impact a population. The key component of a PDE is a reliable encoding of the “rule”, which here refers mainly to a piece of government policy/regulation/legislation. In this series, we’ll introduce the concept of Rules-as-Code (RaC), discuss the value behind it, and show you what it looks like in practice using an example from our fellowship.

The first post in the series presented a working definition of RaC. The second post discussed the value of applying RaC to a government context. In this final post we will present a high-level walkthrough of a RaC project that our team participated in during the fellowship.

The RaC Process

Many of the techniques and concepts related to RaC align with well-known best practices in software development, such as flexible, reusable design and test-driven development. One area with extra complexity compared to a typical software development flow is the collaborative process of converting the rules, written in natural human language, into a machine-friendly version that can be executed as code. Our team was integrated into a large collaborative group for a 3-week sprint to build a RaC prototype system for the Motor Vehicle Operators Hours of Work Regulations (MVOHWR), introduced in the previous post. This final section summarizes some of the processes from that sprint that we found effective for building the RaC system.

There were two primary goals for the RaC sprint:

  • Build a reusable RaC system that captures the entire MVOHWR, and expose it as an API

Three weeks is a fairly short timespan for a small team of developers to build two applications, which speaks to how crucial the collaborative aspect was: it is what makes it feasible to break written rules down into machine-friendly requirements. And while the sprint itself was three weeks, a lot of prior work went into preparation, including aligning all of the different groups involved. During these earlier preparation meetings to discuss the scope of the sprint, we found some technical knowledge gaps: many of the collaborators on the legal and policy side were unfamiliar with software development concepts that would be prevalent throughout the sprint. That unfamiliarity can understandably create apprehension and a lack of clarity around the exact nature and scope of the project. To address this, one of our first activities of the sprint was to create and deliver a short knowledge translation presentation that broadly touched on these topics, giving everyone enough information to speak comfortably about them and contributing to a common language for us all to communicate in. Specific areas addressed in the presentation included security, source control, cloud development, and APIs.

Once we were all aligned on the technical terminology, we began the process of converting the regulations to Rules-as-Code. We were largely building on previous work done in this space, while also adding some of our own practices. Below are some activities that were crucial to this process. Most of the collaborative work was done on a shared virtual whiteboard, which proved to be a very valuable tool for the sprint. We will not go into the details of the attached screenshots; they are just meant to give a taste of what these processes look like.

Concept Modelling

We went through the regulation line by line and tried to get a sense of the basic concepts involved, without going too far into the finer details. This involved some informal diagrams on our whiteboard. The main result was an increased understanding for everyone involved, and we started to highlight questions about the rules for further exploration. This helped identify some of the concrete questions that are answered by the regulations and would therefore need to be encoded in the system.

Cropped screenshot of the decision tree modelling exercise created during the Labour sprint.
Decision Tree Modeling Example
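As a sketch of what a concept model can look like once it moves toward code, the core nouns from the whiteboard diagrams can be captured as plain data types. The names and fields below (operator types, shifts, weekly schedules) are illustrative assumptions, not the actual model from the sprint:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Hypothetical concepts drawn from a concept-modelling pass over a
# schedule-based regulation; names and fields are illustrative only.
class OperatorType(Enum):
    CITY = "city"
    HIGHWAY = "highway"

@dataclass
class Shift:
    day: date            # calendar day the shift falls on
    hours_worked: float  # hours actually worked that day

@dataclass
class WeeklySchedule:
    operator_type: OperatorType
    shifts: list[Shift]

    @property
    def total_hours(self) -> float:
        # total hours worked across the week
        return sum(s.hours_worked for s in self.shifts)
```

Naming the concepts explicitly like this gives everyone, including the non-developers, a shared vocabulary before any rule logic is written.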

Example Walkthroughs

Once we identified the questions being answered (e.g. "How much overtime am I entitled to?"), we started to walk through some concrete examples. We came up with sample motor vehicle operator schedules, trying to test typical values as well as edge cases, and walked through them very carefully with the experts. This is where we began to ask very specific questions, with the goal of figuring out how the language could reliably be translated into code. These examples also served as test cases that were later programmed into the system.

Three individual scenario tables filled out with sample motor vehicle operator schedules based on regulatory calculations
Schedule Scenario Table Examples

Flowcharts

After walking through some examples and getting a better understanding of the concrete flows, we started to capture, in flowcharts, the precise logic being followed in the examples. This was an opportunity to verify that the logic was applied consistently across all examples; where it wasn't, we wanted to ensure the difference was explicitly captured in the regulation. We found that there are multiple ways to represent the flows for these decision-making algorithms. It's also worth pointing out that these processes were not strictly sequential: we went back and forth between examples and flowcharts as needed. If a conflict surfaced in the flowchart process, we would return to our examples to see what thinking process was used. This might mean refining the examples with new information, or coming up with other examples to further stress-test the flowcharts.

Cropped screenshot of the flowcharts generated by applying the codified calculations to the Motor Vehicle Operators regulations.
Calculation and Decision-making flowcharts
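A flowchart like these maps almost mechanically onto code: each decision node becomes a branch. The operator categories and weekly standards in this sketch are hypothetical stand-ins, not the actual MVOHWR logic:

```python
# Each decision node in the flowchart becomes a branch; the operator
# categories and weekly standards here are hypothetical placeholders.
CITY_WEEKLY_STANDARD = 45.0
HIGHWAY_WEEKLY_STANDARD = 60.0

def weekly_standard_hours(operator_type: str) -> float:
    # Decision node: which class of operator is this?
    if operator_type == "city":
        return CITY_WEEKLY_STANDARD
    if operator_type == "highway":
        return HIGHWAY_WEEKLY_STANDARD
    # Falling off the flowchart means the rule doesn't cover this case
    raise ValueError(f"unknown operator type: {operator_type!r}")

def weekly_overtime(operator_type: str, weekly_hours: float) -> float:
    # Decision node: did the operator exceed their weekly standard?
    return max(0.0, weekly_hours - weekly_standard_hours(operator_type))
```

A nice property of this one-branch-per-node translation is that a case the flowchart doesn't cover raises an error rather than silently producing an answer, which surfaces gaps back to the policy experts.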

Blackhat Session

This was a one-time exercise we did with everyone involved: a brainstorming session to come up with scenarios in which the regulations could potentially be exploited, whether intentionally or unintentionally. We came up with a variety of personas, which prompted lots of discussion with the policy experts around whether and how these actions are mitigated. We certainly can't solve every potential case of misuse, but it's important to keep conversations like this part of the ongoing process.

Brainstorming board consisting of three categories: persona brainstorm, persona groups, and results. Each category has digital sticky notes organized by overall theme, which was the result of a cross-collaborative effort.
Blackhat brainstorming session

Coding and Testing

Once the requirements had some level of refinement, the developers were able to iterate on a coded version of them. We could also write tests for that code, which proved to be a great way to verify completeness with the policy experts. Once again, this was not a strictly linear process. We took a more agile approach: we would go through some of the above activities to refine the requirements, then code those requirements in. Throughout the coding and verification process we would inevitably discover more edge cases that required further insight, and we could go back and iterate on the flowcharts and examples to ensure a shared understanding and refine the requirements. We iterated fairly quickly, since it was only a three-week sprint, but the process could absolutely be applied to a project with a longer timeline.
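As a minimal sketch of that loop, each expert-verified example becomes a regression test that guards the coded rule as it is refined. The function, rate, and threshold here are illustrative assumptions, not the values from the sprint:

```python
# Sketch of the test-first loop: every expert-verified example from the
# walkthroughs becomes a regression test. All values are illustrative.
def overtime_pay(weekly_hours: float, hourly_rate: float,
                 weekly_standard: float = 45.0, multiplier: float = 1.5) -> float:
    """Pay owed at the overtime rate for one week."""
    extra_hours = max(0.0, weekly_hours - weekly_standard)
    return round(extra_hours * hourly_rate * multiplier, 2)

def test_no_overtime():
    assert overtime_pay(40.0, 20.0) == 0.0

def test_edge_case_exactly_at_standard():
    assert overtime_pay(45.0, 20.0) == 0.0

def test_ten_hours_over():
    assert overtime_pay(55.0, 20.0) == 300.0  # 10 h x $20 x 1.5

# run directly here; a test runner such as pytest would collect these
test_no_overtime()
test_edge_case_exactly_at_standard()
test_ten_hours_over()
```

When a new edge case surfaced during coding, the corresponding move in this sketch would be to add one more test, take it back to the experts for verification, and then adjust the rule until the whole suite passes again.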

Those are some of the valuable processes involved in our collaborative RaC sprint. Here are some of the key results:

  • A Rules-as-Code engine, exposed as an API, which answered the questions specifically addressed by the MVOHWR
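To give a feel for what exposing the engine as an API can look like, here is a framework-agnostic sketch: JSON in, JSON out, with the rules engine doing the actual work. The endpoint shape, field names, and threshold are assumptions for illustration only:

```python
import json

WEEKLY_STANDARD = 45.0  # illustrative placeholder, not the real value

def handle_overtime_request(body: str) -> str:
    """Handle a JSON request to a hypothetical overtime endpoint."""
    payload = json.loads(body)
    weekly_hours = float(payload["weekly_hours"])
    # Delegate the actual rule to the encoded logic
    overtime = max(0.0, weekly_hours - WEEKLY_STANDARD)
    return json.dumps({"overtime_hours": overtime})
```

In practice a handler like this would sit behind a web framework (Flask, FastAPI, and similar tools are common choices) so that any application can query the same definitive encoding of the rule.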

Conclusion

The idea of coding government rules is nothing new; coded rules appear in any digital system that implements them. "Rules-as-Code" extends that idea with properties such as transparency, precision, reusability, collaboration, and testability. Not every written rule can be immediately converted into a reliable machine-friendly encoding, but the process of subjecting a written rule to a technical lens can at least surface the ambiguities that would block that conversion.

Much of the value of RaC comes from the fact that regulations and legislation are mired in legalese, and it is very difficult to form an accurate interpretation without input from policy experts. If multiple user-facing applications rely on different interpretations of the rules, those rules will be applied inconsistently. A single RaC engine solves this by acting as the underlying system for any application that needs it. Since it is the definitive source for the rule, it must be accurate, which requires collaborating with the policy and legal experts on the requirements.

The process for creating the RaC engine may include some processes such as concept modelling, the creation of flowcharts, example walkthroughs, blackhat sessions, and agile iteration. A very important aspect of this process is the testability of the system. The tests to verify the accuracy of the coded system can be done collaboratively and verified by the policy experts. While a single, definitive rules engine is very ambitious and somewhat speculative at this point, valuable work is already being done not just in Canada, but across the world, in countries such as New Zealand and France.

Developer working in Civic Tech