How To Build Responsible AI, Step 4: Transparency
Artificial intelligence is now woven into the fabric of our everyday lives. AI contributes to decisions we make as individuals, organizations, systems and economies—and if we wield it irresponsibly, we can cause serious harm.
I believe that all AI practitioners must integrate responsible AI principles into systems and processes to eliminate bias, reduce risk and align our values with our actions. I have developed a six-point responsible AI framework that any organization can adopt to manage data, algorithms, technology and operations:
1. Accountability
2. Impartiality
3. Resilience
4. Transparency
5. Security
6. Governance
In a series of articles, I am doing a deep dive into each one of these components. In this article, I’ll focus on the fourth element: transparency.
What Is Transparency?
Transparency, also referred to as “explainability,” means that users have a clear view of how data, output and decisions are used and rendered. Transparency aims to answer these important questions:
- Can a human understand what decisions the system is affecting?
- What do you need to do to ensure that you are either A) responsible for these decisions or B) able to act responsibly on the information provided?
Transparent AI is not a black box or a locked box; it is an unlocked, crystal-clear glass box that plainly reveals its contents. If you are doing the right thing, and your math has been validated, verified and accredited, you won’t have any trouble exposing both the strengths and the weaknesses of your AI.
Three Key Features Of Transparency
Transparent AI is defined by three crucial characteristics:
1. Documentation
Documentation captures the traceability—as well as the clarity, consistency and accuracy—of data, algorithms and human factors within AI.
I like to put my own spin on the old saying, “Lies, damned lies and statistics,” adapting it to become, “Lies, damned lies, statistics and data transformations.” You can contort data or numbers in many different ways, but transparency requires you to ask challenging questions about the algorithms and human-computer interaction that contribute to decisions:
- How did the data come to the system?
- Where did it come from?
- How was it transformed?
- What was the intent?
- How were the algorithms used to provide a piece of decision-making information?
- How were human factors used to frame the context of a decision?
- Was this information fair, impartial and free of bias?
Documentation is often recorded in reports, such as Verification & Validation (V&V), Verification, Validation and Accreditation (VV&A) or Testing & Evaluation (T&E) reports. How is the system performing on this data in a real-world environment? Are the results consistent with expectations? Are they validated by internal, external or independent parties? This type of documentation is required in some organizations, like the Department of Defense, but missing in many AI-first companies.
When I was in the Marine Corps supporting the Pentagon, the North Star I set for every report in the department I led was to include a citation at the bottom of each page identifying the model the data came from. And in the back of the report, there was always an appendix referencing the models that were cited and whether or not they were accredited. As we work to mature AI in government and in corporate America, decision makers who receive this type of information need to push for these answers: Where did this data come from? Where did these courses of action originate? What models are they based upon? Who has tested these models? Are they accredited?
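To make those answers concrete, here is a minimal sketch in Python, with hypothetical field names, of the kind of provenance citation that could accompany each figure in a report:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ProvenanceCitation:
    """Hypothetical page-level citation: where the data came from and which model produced it."""
    data_source: str                      # where the data came from
    transformations: List[str]            # how it was transformed on its way into the system
    model_name: str                       # the model the figure is based upon
    model_version: str
    accredited: bool                      # has the model passed V&V / VV&A?
    accreditation_date: Optional[date]    # when it was last accredited, if ever
    tested_by: Optional[str] = None       # internal, external or independent reviewer

# Example citation for a single chart, listed again in the report's appendix
citation = ProvenanceCitation(
    data_source="logistics readiness feed",
    transformations=["deduplicated", "aggregated to monthly totals"],
    model_name="readiness_forecast",
    model_version="2.3",
    accredited=True,
    accreditation_date=date(2023, 5, 1),
    tested_by="independent V&V team",
)
```

Even a record this small forces the questions above to be answered before a number ever reaches a decision maker.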
2. Expiry
Data and algorithms, much like the milk or eggs in your refrigerator, need to have set expiration dates. They won’t exactly spoil, but they will lose freshness. You need to establish a mechanism for verifying that the inputs and outputs are still operating as expected.
A best practice I used when accrediting models was to use a model for a maximum of two years. After that, every business unit was required to revisit the model and see if it needed to be refreshed or replaced. How is this model operating? Are there new or better mathematics available that we can use? Does the model have a flaw or bias in it? Is there a cyberthreat associated with this system? We cannot keep assuming a model is correct right up until the day we discover it has been catastrophically wrong; there is no “fix it and forget it.”
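A minimal sketch of such a verification mechanism, assuming the two-year rule described above and using hypothetical model names, might look like this:

```python
from datetime import date, timedelta
from typing import Dict, List, Optional

# Assumed policy from the practice above: models expire two years after
# accreditation and must be revisited before they can keep being used.
MODEL_SHELF_LIFE = timedelta(days=2 * 365)

def is_expired(accredited_on: date, today: Optional[date] = None) -> bool:
    """Return True if a model has passed its expiration date."""
    today = today or date.today()
    return today - accredited_on > MODEL_SHELF_LIFE

def models_due_for_review(registry: Dict[str, date]) -> List[str]:
    """Flag every model in a {name: accreditation_date} registry that needs a refresh."""
    return [name for name, accredited_on in registry.items() if is_expired(accredited_on)]

# Example: one model past its shelf life, one still fresh
registry = {"demand_forecast_v2": date(2021, 3, 1), "churn_model_v5": date(2024, 9, 15)}
print(models_due_for_review(registry))  # ['demand_forecast_v2'] (as of this writing)
```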
3. Analysis Of Alternatives & Actions
An Analysis of Alternatives (AoA) evaluates the performance, operational effectiveness, suitability and estimated costs of possible alternative systems. I adjusted this model to become an Analysis of Alternatives & Actions (AoA&A), which lets you evaluate different courses of action against the full picture: upstream, downstream and sidestream effects, with explainability and feedback.
When and why was that action taken? Which risks or trade-offs were accepted with it? What is the impact on your business units, partners, clients or competitors? What alternative models could you be using? What future benefits or drawbacks do you need to consider?
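One way to operationalize those questions, sketched here in Python with hypothetical field names, is a simple decision record logged whenever a course of action is chosen:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class DecisionRecord:
    """Hypothetical AoA&A entry capturing an action, its context and its trade-offs."""
    action: str                          # what was done
    taken_at: datetime                   # when the action was taken
    rationale: str                       # why it was taken
    accepted_risks: List[str]            # risks or trade-offs accepted with it
    impacted_parties: List[str]          # business units, partners, clients, competitors
    alternatives_considered: List[str]   # other models or actions that were evaluated
    future_considerations: List[str] = field(default_factory=list)  # benefits or drawbacks to revisit

record = DecisionRecord(
    action="Reroute inventory to regional warehouses",
    taken_at=datetime(2024, 1, 15, 9, 30),
    rationale="Forecast model projected a three-week supply gap",
    accepted_risks=["higher short-term shipping costs"],
    impacted_parties=["logistics", "retail partners"],
    alternatives_considered=["expedite supplier orders", "take no action"],
)
```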
I recommend using a champion/challenger approach, which is like A/B testing, to experiment with different models and monitor results. Instead of selecting one model and throwing the others away, you keep all models that are potentially valuable and update them with data. At certain times, it's important to put the challenger model into production and take the champion model out. By doing that, you are monitoring the system and analyzing alternatives all the time.
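A minimal sketch of that idea, with toy models and hypothetical names, might route decisions like this:

```python
import random
from typing import Callable, Dict, Tuple

Model = Callable[[float], float]  # toy stand-in for a real scoring model

def route(models: Dict[str, Model], features: float,
          challenger_share: float = 0.1) -> Tuple[str, float, Dict[str, float]]:
    """Score the input with every model so all of them can be monitored and
    compared, but act on the champion's answer except for a small share of
    traffic handed to a randomly chosen challenger."""
    scores = {name: model(features) for name, model in models.items()}  # monitor everything
    challengers = [name for name in models if name != "champion"]
    if challengers and random.random() < challenger_share:
        acting = random.choice(challengers)   # occasionally let a challenger act
    else:
        acting = "champion"
    return acting, scores[acting], scores

# Example with toy models standing in for the champion and one challenger
models = {"champion": lambda x: 0.80 * x, "challenger_a": lambda x: 0.75 * x}
acting, decision, all_scores = route(models, features=100.0)
print(acting, decision, all_scores)
```

Promoting a challenger to champion then becomes a deliberate, evidence-based decision rather than a one-time model selection.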
To establish transparency in AI, you have to be disciplined in your documentation, informed about the life cycle of your data and systematic about running real-time AoA&A. Some AI practitioners might scoff at this approach to transparency; after all, who has the time for it in the fast-moving world of AI? But I believe that to go slow is to go fast. Just like in a NASCAR race, the crews who invest the time to test their engines and tires in the pit are the ones able to outpace the competition when they finally hit the track.
Article originally published on Forbes.com.