A Lesson from Uber? Maybe

Mike Metz
June 20, 2018

A March 20, 2018 article in the New York Times by David Leonhardt raises (tangentially, I admit) an interesting issue.[1]  His piece concerns the recent tragedy of a pedestrian being struck by an Uber car in Arizona.  The Uber was operating in “automatic” mode with a back-up driver at the wheel.  Uber halted testing of self-driving vehicles until the cause could be determined.  Many people will probably harden their opinion that self-driving cars are not a good thing, despite expert opinions and mounting data.

Leonhardt brings up an interesting study about human decisions versus algorithmic decisions and the faith people place in other people rather than in the algorithm or the machine.  The study was done by three University of Pennsylvania business school professors.[2]  Leonhardt thinks it may end up applying to Uber; it may also apply to hearing devices.  He states:

When a machine makes an error, human beings are reluctant to use it again, as research by Massey and others has shown. When people make a mistake, they often persuade themselves that they know how to avoid repeating it — even when there is abundant evidence that they don’t, and they will go on repeating it. Sometimes, machines are more reliable than people, but people still insist on being in control.[3]

From the psychology journal abstract:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Get the idea?  When people face a decision involving trust, deciding which source is correct or most “believable,” device users may tend to place blame for failure on the device rather than on a person (clinician or patient).  How often do patients say: “The hearing aids worked just fine; the clinician failed to do his or her part.  That’s why I don’t use hearing instruments”?  Almost never.

How many of the failures to use hearing devices can be blamed on the devices not meeting the expectations of the user?  Perhaps that exact question has not been asked of failed users, but one might expect that the answer would fit the findings of Dietvorst, Simmons, and Massey.  That is, most failed users would likely blame the instruments and not the clinician.  “Hearing aids don’t work!”

The lesson here for clinicians is that some patient conclusions about the failure of hearing devices may be due to the clinician rather than to the device itself, particularly when insufficient data have been obtained to define the hearing loss and to verify aided benefit.  In an effort to “befriend” the patient, to appear more knowledgeable, to empathize, or otherwise to place the entire burden of the fitting on the instrument, we may inadvertently contribute to some of these failures.

Not that all failures can be blamed on any one factor.  But, as seems to be the contemporary consensus, there may be a need occasionally to identify data and therapy (or the lack of both) as the reasons for failure rather than the technology.

Slick salespersons strive to become trusted so that the failure of their products cannot be blamed on them, an age-old sales formula of dubious benefit to the consumer.  Professionals are quick to criticize sales techniques and “pitches,” but it might be suspected that some of these professionals are as guilty of slickness as those who pitch more obviously.  After all, one can always blame the device.

[1] David Leonhardt, The New York Times, accessed 20 March 2018.

[2] Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey.  Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err.  Journal of Experimental Psychology: General, 2014.

[3] Leonhardt, ibid.
