How To Build Responsible AI, Step 2: Impartiality

By Aaron Burciaga, Co-founder, Chairman & CEO
June 19, 2024

As the influence of artificial intelligence grows, it is increasingly vital to design processes and systems that harness AI while counterbalancing risk. Our charge is to eliminate bias, codify objectives and represent values. Responsible AI ensures alignment with our standards spanning data, algorithms, operations, technology and human-computer interaction.

The focus of this article, impartiality, is one of six foundational elements of a responsible AI model that I recently defined for organizations to use:

1. Accountability

2. Impartiality

3. Resilience

4. Transparency

5. Security

6. Governance

I am examining the importance of each of these elements in a series of articles. This second component of responsible AI, impartiality, has deep historical roots and myriad future implications that we’ll explore in more detail.

What Is Impartiality?

A common mistake made by data scientists and neighboring technologists is creating frameworks from scratch. We do better with a thorough “literature review” that, if nothing else, informs us about adjacent thinking and the etymology of the words we ought to use. Building on millennia of human experience, we leverage and enrich the meaning of impartiality. In the responsible AI model presented here, and specifically the critical element of impartiality, there are intentional echoes and references to history, reaching back to Socrates and Plato. That’s good company as we turn more of our daily operations over to AI, which will increasingly affect everything from how people are advertised to and considered for employment to how they are represented (or even prosecuted) by the legal system.

One of the sources I found in my research, a paper on the dimensions of judicial impartiality published in the Florida Law Review, helped me refine my definition for AI impartiality and the audiences it serves. The paper outlined three audiences for judicial impartiality, leading to three dimensions of impartiality: “(1) parties and the procedural dimension, (2) the public and the political dimension, and (3) judges and the ethical dimension.”

We can adapt this idea to think about three distinct audiences of AI impartiality:

1. Parties to the process

2. The public and political dimension

3. Checks and balances and the ethical dimension

AI impartiality means that all audiences, data, inputs, outputs, features and objectives are treated with equality, fairness and justice. AI doesn’t operate independently from humans, so we have to ask ourselves: at what point do we introduce people and their inherent biases to processes or systems? What measures can we take to counteract their influence?
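
To make the point concrete, here is a minimal sketch in Python of one such measure: screening model features for proxies of a protected attribute before they reach an algorithm. The column names, data and threshold are hypothetical, and correlation is only a heuristic first pass, not a guarantee of impartiality.

```python
import pandas as pd

def find_proxy_features(df: pd.DataFrame, protected: str,
                        threshold: float = 0.4) -> list[str]:
    """Flag numeric features whose absolute correlation with a binary
    protected attribute exceeds `threshold` (a heuristic screen only)."""
    corrs = df.corr(numeric_only=True)[protected].drop(protected)
    return corrs[corrs.abs() > threshold].index.tolist()

# Illustrative, made-up data: `zip_income` acts as a proxy for the
# protected attribute, while `tenure_years` does not.
df = pd.DataFrame({
    "protected":    [0, 0, 1, 1, 0, 1],
    "zip_income":   [90, 85, 40, 45, 88, 42],
    "tenure_years": [4, 6, 6, 4, 5, 5],
})
print(find_proxy_features(df, "protected"))  # ['zip_income']
```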

I propose that we break impartiality in AI into three key features — independence, fairness and context.

1. Independence

Impartiality in AI requires that the objective function and the constraints of the operation be defined independently of the parties involved, meaning that the algorithms or technology handling the data must take an unprejudiced perspective and have no personal interest in the outcome.

In Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies, he gives an example of how AI could go awry without independence. If we gave AI robots a specific goal, like producing as many paper clips as possible, they could over-optimize for that goal, to the detriment of humans. If robots advanced to develop their own technologies and factories, they could decide to eliminate humans because they were limiting production, or they might use all available resources to make more paper clips — including the atoms of our human bodies. It’s an extreme scenario, but it does illustrate why independence is necessary.

Early in my career, I was tasked with optimizing the safety stocks of the military, managing how many items, from tanks to medical equipment, we kept in readily available supply. I had to make an algorithm that considered all the functions we wanted to optimize, like readiness and affordability, that operated independently from the interests of any people involved. What was good for individual stakeholders, or programs of record, wasn’t necessarily good for optimizing for national security, global prosperity or even the readiness of the Marine Corps.
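
As an illustration of that principle (and emphatically not the actual military model), the sketch below writes the objective and constraints down explicitly, so nothing influences the result except the stated readiness weights and the budget; no individual stakeholder’s preference appears anywhere. All item values, costs and the budget are made up.

```python
from scipy.optimize import linprog

readiness_weight = [0.9, 0.7, 0.5]  # readiness value per unit stocked
unit_cost = [40.0, 15.0, 5.0]       # cost per unit of each item
budget = 1000.0
max_useful = [20, 40, 100]          # stocking beyond this adds no readiness

# linprog minimizes, so negate the weights to maximize total readiness
# subject to the single budget constraint and per-item bounds.
result = linprog(
    c=[-w for w in readiness_weight],
    A_ub=[unit_cost],
    b_ub=[budget],
    bounds=list(zip([0, 0, 0], max_useful)),
)
print(result.x)  # safety-stock levels implied by the stated objective alone
```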

2. Fairness

Just as fair and impartial courts are essential to accessible democracy, fair and impartial AI is essential to successful systems. Fairness requires that everyone is treated equally, without bias. And since optics matter, the appearance of fairness is almost as important as the reality. When AI is deployed, is it treating all parties fairly? Is there an audit trail to see where data originated, how it was aggregated and how it is being used?
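
One way to start answering those questions is with simple, auditable metrics. The sketch below computes a demographic parity gap, the difference in favorable-outcome rates across groups; the group labels and decisions are illustrative, and a real audit would combine several metrics with the data-provenance trail described above.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str,
                           outcome_col: str) -> float:
    """Difference between the highest and lowest favorable-outcome
    rates across groups; 0.0 means every group is treated alike."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Illustrative decisions: group "a" is approved twice as often as "b".
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(demographic_parity_gap(decisions, "group", "approved"))  # ~0.33
```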

Personal information about consumers can be used in ways that gradually contribute to inequality. Is it fair for a social network to use your information to forecast your income and target ads on the platform based on what they think you can afford? Is it fair for an African American man living in D.C. and a white woman living in suburban Kansas to receive different ads and recommendations based on assumptions about race, location or income level? Certainly so in one context, yet perhaps certainly not in another. It may seem silly to care about whether one person gets a Mercedes ad or a Chevy ad, but these small nudges add up over time. They can change buying habits and behavior, perhaps widening economic or social disparities.

3. Context

The final feature of impartiality is the aforementioned context, which can be summed up in two words: it depends. It’s not always clear on the surface whether AI is impartial; it depends on the context. By establishing checks and balances and having models and algorithms validated, verified and accredited, we can evaluate the context and ask: Why are we doing it that way? Should we be doing it that way?

It might be inappropriate to target social media ads to people based on location in certain circumstances — a get-out-the-vote campaign that only targeted people in more conservative or liberal districts of Virginia, for example — but perfectly reasonable in others — an insurance company soliciting claims from a recent windstorm in Virginia. Context makes a big difference.
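
One practical pattern for encoding that judgment is a context gate: the same targeting capability is allowed or blocked depending on the declared purpose of the campaign, and every decision is logged for later review. The purpose categories below are hypothetical, and a real system would route unknown cases to a human rather than decide silently.

```python
import logging

logging.basicConfig(level=logging.INFO)

ALLOWED_LOCATION_PURPOSES = {"disaster_response", "local_services"}
BLOCKED_LOCATION_PURPOSES = {"voter_turnout", "political_persuasion"}

def may_target_by_location(purpose: str) -> bool:
    """Allow location targeting only for appropriate declared purposes;
    unknown purposes default to deny, pending human review."""
    if purpose in ALLOWED_LOCATION_PURPOSES:
        decision = True
    elif purpose in BLOCKED_LOCATION_PURPOSES:
        decision = False
    else:
        decision = False  # unknown purpose: default-deny, flag for review
    logging.info("location targeting purpose=%s allowed=%s", purpose, decision)
    return decision

print(may_target_by_location("disaster_response"))  # True: storm-claim outreach
print(may_target_by_location("voter_turnout"))      # False: selective GOTV ads
```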

Impartiality is a complex element of responsible AI, but it may be the most important. The good news is that we don’t need to define and develop impartiality based on technological whims. We can build on a strong judicial and philosophical precedent that has existed for millennia, bringing the same wisdom and nuance to practical and important applications of AI.

Article originally published on Forbes.com.
