AI — The Most Transparent Decision Making
by Tom White, July 20, 2020

Your doctor calls you into their room after your recent blood test and simply tells you, “Everything looks fine.” Most people would let their heart rate settle, thank the doctor and be on their way. But consider what would have happened if you had asked:

“Why am I fine?”

The doctor would most likely go on to explain that your blood markers are within the levels they usually like to see. Imagine you now follow up with a second question.

“Why are those levels important?”

The doctor would then explain how values outside these levels indicate the presence of something malign in people of your age. Once again, imagine you go back for more.

“Why?”

The doctor then cites a study from the early ’90s showing that, across 10,000 people, blood markers outside these levels indicated a higher chance of something bad.
Now you’re really insolent and go for a fourth.

“Why was that study so conclusive?”

At this point, your doctor is probably slightly annoyed at having their decision making questioned, and your appointment is surely not getting bulk billed.

Decision making in humans is a black box. We can reference guidelines, we can reference experts and mentors, we can refer back to what we learnt in school. However, many of our decisions are shrouded in intuition and accumulated experience, which makes it difficult to pinpoint precisely why we made a particular decision.

As part of their Read Out Loud series, Andreessen Horowitz revisited a popular 2018 article entitled Why We Shouldn’t Fear the ‘Black Box’ of AI (in Healthcare and Everywhere) [1]. In it, Professor Vijay Pande argues that the black box of Artificial Intelligence (AI) is less of a black box than human decision making, primarily because human thinking cannot be probed to the same level of detail or accuracy about why a decision was made. AI, he argues, is the more transparent of the two because “Unlike the human mind, A.I. can — and should — be interrogated and interpreted”.

However, what happens when AI exceeds our capability to comprehend it; when our own intelligence becomes the limiting factor? How can we see inside a black box that we cannot comprehend? Consider the Deep Patient research at Mount Sinai Hospital in New York:

“It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible.” (The Dark Secret at the Heart of AI [2])

It is still possible to disaggregate decision making through certain techniques. In one example, Google researchers studying image recognition reversed the algorithm so that it “modified” the image rather than simply spotting the object the AI was looking for, which let them see in human terms what the AI was attending to. Other techniques ask the AI to highlight the specific pieces of information that drove its answer. A rough sketch of the first idea follows.
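The general technique behind that example is usually called activation maximization, or feature visualization: start from random noise and nudge the image by gradient ascent until a chosen class score grows, so that the pixels come to show what the network associates with that class. The sketch below is a minimal illustration of that idea in PyTorch; the model, learning rate, step count and target class are illustrative assumptions, not the setup the Google researchers actually used.

```python
# A minimal sketch of activation maximization (feature visualization).
# Starting from random noise, we adjust the *input image* by gradient
# ascent so that one class score grows; the resulting image shows, in
# human-viewable pixels, what the network associates with that class.
# Model, learning rate, step count and target class are illustrative
# assumptions, not the exact setup of the Google research described above.

import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
for p in model.parameters():      # freeze the network; only the image changes
    p.requires_grad_(False)

target_class = 130                # example ImageNet class index ("flamingo")
image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]  # how strongly the net "sees" the class
    (-score).backward()                    # minimise the negative = gradient ascent
    optimizer.step()

# `image` can now be de-normalised and saved to inspect the patterns the
# network looks for when it claims to see the target class.
```

In published versions of this technique, extra regularisation (blurring, jitter, rescaling) is usually added so the optimised image stays recognisable to a human eye rather than dissolving into adversarial noise.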

Professor Pande sees this ability to interrogate hidden decision making as a huge benefit of AI, one that will allow us as humans to challenge our own ways of making decisions:
“…a future where A.I. not only augments human intelligence and intuition but also perhaps even sheds light on and redefines what it means to be human in the first place.”

Ultimately, AI has the potential not only to be the most transparent decision maker we have, but also to teach us the limitations and flaws in our own decision making.

References

[1] Vijay Pande 2018, Why We Shouldn’t Fear the ‘Black Box’ of AI (in Healthcare and Everywhere), Andreessen Horowitz, https://a16z.com/2018/02/28/black-box-problem-ai-healthcare/

[2] Will Knight 2017, The Dark Secret at the Heart of AI, MIT Technology Review, https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/