People or programs?
04 January 24
By Your Life Choices - Do you trust the algorithm?
Computers continue to defy expectations about their capabilities, and they are doing so far faster than most people predicted.
Not so long ago, people were saying it would be impossible for a computer to beat a grandmaster at chess; then IBM's Deep Blue defeated Garry Kasparov in 1997.
Since then, computers and artificial intelligence have been tasked with detecting diseases, choosing partners and even writing news stories and opinion pieces (this story has been written by a human).
Computers are now taking a further step forward, with new research finding that people are more likely to rely on algorithms than to take advice from fellow humans.
From picking the next song on your playlist to choosing the right size of pants, people are relying more on the advice of algorithms to help make everyday decisions and streamline their lives.
Data scientists at the University of Georgia found that as tasks become more challenging, people are more likely to rely on computer algorithms to help guide their decisions.
‘Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day,’ said the University of Georgia’s Eric Bogert.
‘It seems like there is a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people.’
The study involved 1500 individuals evaluating photographs and trying to count the number of people in a crowd, aided by suggestions generated either by a group of other people or by an algorithm.
Professor Aaron Schecter, who worked on the study, noted that as the number of people in the photograph increased, counting became more difficult and people were more likely to follow the suggestion generated by an algorithm rather than count themselves or follow the 'wisdom of the crowd'.
Prof. Schecter explained that counting was an important choice of trial task because it becomes objectively harder as the number of people in the photo increases.
It is also the type of task that laypeople expect computers to be good at.
‘This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,’ Prof. Schecter said.
‘One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there – like income and credit score – so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that are not considered.’
Facial recognition and hiring algorithms have also come under scrutiny in recent years because their use has revealed cultural biases in the way they were built, biases that can cause inaccuracies when matching faces to identities or when screening for qualified job candidates, Prof. Schecter said.
Those biases may not be present in a simple task such as counting, but their presence in other trusted algorithms is a reason why it’s important to understand how people rely on algorithms when making decisions, he added.