
The Benefits and Dangers of Decision-making by Algorithm

It’s crystal-clear that decision-making by algorithm can be imperfect. It’s also crystal-clear that decision-making by humans can be pretty imperfect, too. However, the imperfections across these two types of decisions are probably not the same. How and when should society make use of decision-making by algorithm? Cass R. Sunstein provides a thoughtful overview in “The use of algorithms in society” (Review of Austrian Economics, December 2024, 37: 399–420).

Sunstein emphasizes two considerable advantages of decision-making by algorithm: it can avoid bias and noisiness. Bias arises when a certain judge or doctor is overly influenced by certain factors, like whether they saw what seemed like a similar case recently. Noisiness is just the human attribute of inconsistency, where the same decision rules might be applied in different ways depending on whether the weather is good or bad, or whether it is before or after lunch. (For an overview of decision-making in bail cases, with reference to bias, noisiness, and a possible role for algorithms, a useful starting point is “The US Pretrial System: Balancing Individual Rights and Public Interests,” by Crystal S. Yang and Will Dobbie, in the Fall 2021 issue of the Journal of Economic Perspectives, where I work as Managing Editor.)

Here, I want to focus on some of the reservations that Sunstein summarizes about algorithmic decision-making, some of which might be overcome with better systems over time, some perhaps not.

1) Even if the algorithm does a better job than most humans, some humans will do better than the algorithm. Sunstein writes:

Some important work suggests that while algorithms outperform 90% of human judges in the context of bail decisions, the top 10% of judges outperform algorithms. The reason appears to be that the best judges have and use private information to make better decisions. They consider factors that algorithms do not. They appear to have something like local knowledge – an understanding of the defendant or the circumstances that algorithms lack. We could easily imagine a similar finding for doctors. It is possible that the best doctors know whom to test for heart disease, because they see something, or intuit something, that algorithms do not consider.

2) The gain from using algorithms in many contexts is relatively small in percentage terms, although a small percentage gain applied to a large number of people can certainly be meaningful. Sunstein writes:

As I have noted, algorithms do better than people do, but they do not do spectacularly better. The impressive aggregate figures, in terms of welfare gains, come from the fact that very large populations are involved. If algorithms show a modest percentage increase in accuracy as compared with human beings, we might find seemingly major improvements. If an algorithm can produce a slight increase in the accuracy of screening for heart disease, we might see a significant reduction in deaths. But a slight increase in accuracy remains slight. I have said, for example, that formulas do better than clinicians. But in the median study, formulas are right 73% of the time, while clinicians are right 68% of the time.
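The arithmetic behind that point is worth making concrete. The 73% and 68% figures come from the quote above; the one-million population is a hypothetical number chosen only for illustration:

```python
# Figures from Sunstein's median-study comparison:
# formulas are right 73% of the time, clinicians 68%.
formula_accuracy = 0.73
clinician_accuracy = 0.68

# A "slight" 5-percentage-point edge, applied to a hypothetical
# population of one million screening decisions:
population = 1_000_000
extra_correct = (formula_accuracy - clinician_accuracy) * population
print(f"Additional correct decisions: {extra_correct:,.0f}")  # 50,000
```

The gain per decision is small, but spread over a large enough population it translates into tens of thousands of additional correct calls, which is exactly Sunstein's point about aggregate welfare gains.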

3) People are more unforgiving of algorithmic error than of human error. Sunstein:

In short, people are less forgiving of algorithms than they are
of human beings. … Is that rational? If people want to make the correct decision, it is not. If their goal is to make money or to improve their health, they should rely on the better decider. But one more time: if people enjoy making decisions, a preference for making one’s own decisions might be perfectly rational. Perhaps people find the relevant decisions fun to make. Perhaps they like learning. Perhaps decision-making is a kind of game. Perhaps they like the feeling of responsibility. Perhaps they like the actuality of responsibility. If so, algorithm aversion is no mistake at all.

4) Greater complexity will limit the benefit of an algorithm. For example, the algorithm behind a dating app might offer recommendations that are slightly more likely to be successful than unrecommended matches, but certainly no guarantee of true love. Sunstein tells an interesting story of a prediction competition called the Fragile Families Challenge, which teaches humility about the predictions of algorithms.

There’s a dataset called the Fragile Families and Child Wellbeing Study. It collected data on thousands of families with unmarried parents, where the mother gave birth to a child around the year 2000. The study collected a LOT of data about these families at the time of birth, and then followed up at ages 1, 3, 5, and 9. The challenge was whether this earlier data could be used to predict certain outcomes about the child or the family when the child turned 15. Hundreds of analysts applied to this contest, and 160 teams were selected. As it turned out, many of the predictions were similar to each other, but literally none of them were very accurate. Sunstein writes:

The central question was simple: Which of the 160 teams would make good predictions? The answer is: None of them. True, the machine-learning algorithms were better than random; they were not horrible. But they were not a lot better than random, and for single-event outcomes – such as whether the primary caregiver had been laid off or had been in job training – they were only slightly better than random. The researchers conclude that “low predictive accuracy cannot easily be attributed to the limitations of any particular researcher or approach; hundreds of researchers attempted the task, and none could predict accurately.” Notwithstanding their diverse methods, the 160 teams produced predictions that were pretty close to one another – and not so good. As the researchers put it, “the submissions were much better at predicting each other than at predicting the truth.” A reasonable lesson is that we really do not understand the relationship between where families are in one year and where they will be a few years hence. … You can learn a great deal about where someone now is in life, and still, you might not be able to say very much at all about specific outcomes in the future.
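The “better at predicting each other than at predicting the truth” pattern has a simple mechanical explanation: all the teams are fitting the same observable data, while the true outcome depends heavily on factors none of them observe. Here is a minimal synthetic sketch of that mechanism, with all numbers (signal strengths, team count) invented purely for illustration; it does not use the actual Fragile Families data:

```python
import random

random.seed(0)

# Each family has an observable "shared signal" that every team sees.
# The true outcome depends only weakly on that signal; the rest is
# unobserved noise. Teams' predictions, by contrast, track the signal closely.
n_families, n_teams = 500, 5
shared_signal = [random.gauss(0, 1) for _ in range(n_families)]
truth = [0.3 * s + random.gauss(0, 1) for s in shared_signal]   # weak link to signal
teams = [[s + random.gauss(0, 0.3) for s in shared_signal]      # strong link to signal
         for _ in range(n_teams)]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

team_vs_truth = sum(corr(t, truth) for t in teams) / n_teams
team_vs_team = corr(teams[0], teams[1])
print(f"avg prediction-vs-truth correlation: {team_vs_truth:.2f}")
print(f"prediction-vs-prediction correlation: {team_vs_team:.2f}")
```

In this setup the teams' predictions correlate strongly with one another but only weakly with the truth, mirroring the challenge's finding: shared inputs produce shared predictions, and shared predictions can still miss outcomes driven by what nobody measured.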

5) Algorithms are not good, and perhaps cannot be good, at what are sometimes called path-dependent events. The question of when a certain political movement will rise or fall, or whether a certain musical act or movie will become popular, will rely on the unfolding of a chain of events. Algorithms rely on patterns from past events, and may not be good at predicting the timing or probability of future chains of events.

The challenge is to consider both the benefits and tradeoffs of algorithms in different settings. If someone I love is facing a bail decision, and I don’t know who the judge will be, I personally would prefer that an algorithm make the decision. When it comes to the algorithms that govern self-driving cars, many people are clearly much less forgiving of errors and accidents caused by an algorithm than they are of errors and accidents of human drivers. In many contexts, from love to future possibilities, the guidance from following an algorithm may be positive, but quite small.

To me, algorithms are always interesting because they specify reasons for an underlying decision. Sometimes the reasons will expose bias and noise in human decision-making; sometimes the algorithm itself can display bias. But when you know the reasons, you can evaluate the decision more clearly.
