
Addressing the reputational risks of Artificial Intelligence

Alexander Buhmann, Christian Fieseler

In a new paper we lay out the reputational risks related to algorithms and propose principles for addressing them.

Organizations increasingly delegate actions, assessments, and decisions traditionally carried out by human beings to computerized algorithms. This creates reputational risks because it is often unclear how these autonomous systems carry out their tasks. Reputational concerns fall into three categories.

Evidence concerns: Algorithms may entail inconclusive, inscrutable, and/or misguided evidence.

Outcome concerns: Algorithms may generate unfair, biased, or factually incorrect outcomes.

Opacity concerns: Complex decision-making systems based on algorithms pose fundamental challenges to transparency.

Evidence concerns

Decision-making algorithms are often not transparent about the evidence they use to reach conclusions. By design, they work with probabilities, making “best guesses” based on data that can be biased or flawed.

As an example, consider a system developed by Aspire Health (with funding from Google), which is used in medical care to project the success of treatments and the likelihood of patients’ deaths. Such systems draw on information about medical treatments, diagnoses of particular patients, and comparative patterns of common therapies. But they can easily overlook important factors that are less readily quantifiable, such as a patient’s will to survive, which has proven highly important to treatment success.

Outcome concerns

Algorithms can also produce unfair, biased, or factually incorrect results. They have, for example, been found to discriminate against certain groups of people. Imagine the consequences of systematic discrimination in an automated welfare system such as the one being developed in the UK. Similarly, many news agencies use news robots to produce, for example, financial news: stock market data are automatically translated into text without any human being monitoring the process. Any error in such outputs would obviously cause serious problems for related trades and for the responsible news agency.

Opacity concerns

As algorithmic decision-making systems grow more complex, simple calls for transparency achieve disappointingly little and are ultimately doomed to fail. In part, this is because algorithms are the property of corporations that want to protect their competitive edge and prevent users from manipulating their algorithms. More importantly, however, it is because simply seeing mathematical operations does not make them meaningful or comprehensible.

To understand an algorithm is to understand the problem that it helps to solve, not just to study a mechanism and its hardware. This is especially true for machine learning algorithms, which are in large part shaped by the data they use.

Opacity, therefore, does not only result from technical complexity, but also from the fact that these technologies are more than the sum of their parts. The last decade has brought deep-learning algorithms, which pose even stronger opacity challenges: their rules are defined not by human programmers but by a fluid, machine-produced learning process. In practice, such algorithms can only be assessed experimentally, that is, by testing them in action, not by merely inspecting their code.
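
To make the idea of “testing in action” concrete, the following minimal sketch (in Python, using the open-source scikit-learn library; the model and data are purely hypothetical) trains a small neural network and then assesses it solely through its outputs, by observing how its decisions shift when one input feature is perturbed:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for an opaque decision system: a small neural network.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# "Testing in action": treat the model as a black box and observe how its
# decisions respond to a controlled change in one input feature.
feature = 3                                          # arbitrary feature to probe
baseline = model.predict(X_test)
perturbed = X_test.copy()
perturbed[:, feature] += X_train[:, feature].std()   # shift by one standard deviation
shifted = model.predict(perturbed)

flip_rate = np.mean(baseline != shifted)
print(f"Share of decisions that flip when feature {feature} shifts: {flip_rate:.1%}")
```

Nothing in this kind of assessment requires reading the model’s internal parameters; the system is judged only by its observable behaviour.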

Three principles for addressing the reputational risks of Artificial Intelligence

  1. Include all stakeholders.
    It is essential to include even people who may not be aware that they are suffering negative consequences of algorithmic systems. Creating awareness of the opacity of algorithms is even more important than simply making information about them available. If not all affected stakeholders are able to join the conversation about the harmful consequences of an algorithmic system, it will be difficult to safeguard the accountability of these systems and of the organizations that employ them.
  2. Empower stakeholders to understand the issues at stake.
    As noted above, having information about an algorithm does not necessarily make its workings understandable. There are, however, tools that can make the decisions of otherwise inscrutable algorithms more comprehensible. Experiment databases can enable comparisons between algorithms, and simplified models (such as flow charts) can translate algorithmic behaviour for human audiences while at the same time revealing potential ‘unknowns’ (a brief sketch of such a simplified model follows after this list).
  3. Make sure the debate is continuous.
    Because of the dynamic nature of complex algorithmic systems, debate must be continuous rather than take place only at selected points in time. A number of news outlets, including BuzzFeed and The New York Times, already share at least part of the data and code behind their data-driven articles. Rigid certification processes, by contrast, would not be able to keep up with the speed of change in these systems. Forums for algorithmic accountability are therefore likely to become ever more important points of contact between organizations and their environments.
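
As a rough illustration of the “simplified models” mentioned in principle 2, the sketch below (again in Python with scikit-learn; the opaque model and all names are hypothetical) fits a shallow decision tree as a global surrogate of a black-box model and prints it as human-readable rules, together with a fidelity score that shows how much of the original behaviour the summary leaves out:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical opaque system whose inner workings we do not inspect directly.
X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# Global surrogate: a shallow decision tree trained to mimic the black box's
# outputs. The tree is the "simplified model" that can be shown to stakeholders.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simplified summary agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate agrees with the black box on {fidelity:.1%} of cases")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The fidelity score also flags the ‘unknowns’: the share of decisions that the simplified model cannot account for and that therefore remain open for discussion.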


Source:
Buhmann, A., Paßmann, J., & Fieseler, C. (2019). Journal of Business Ethics. https://doi.org/10.1007/s10551-019-04226-4

Published 25 November 2019
