Automation bias

Automation bias – sometimes referred to by other terms such as automation-induced complacency or over-reliance on automation[1] – is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information obtained without automation, even when that information is correct. The bias takes the form of errors of omission and commission: an omission error occurs when humans fail to respond to a problem because an automated system does not flag it, while a commission error occurs when humans act on an incorrect suggestion relayed by an automated system.[2] Automation bias has been examined across many research fields.[1]
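
The distinction between the two error types can be made concrete with a small illustration. The following Python sketch classifies a single decision outcome under this taxonomy; the function and parameter names are hypothetical, invented here for illustration rather than taken from the cited sources:

    from enum import Enum

    class ErrorType(Enum):
        NONE = "no error"
        OMISSION = "omission error"      # a missed problem the aid never flagged
        COMMISSION = "commission error"  # action taken on a wrong suggestion

    def classify_outcome(problem_exists: bool, aid_flagged_problem: bool,
                         human_acted: bool) -> ErrorType:
        # Omission: a real problem existed, the aid stayed silent,
        # and the human, relying on the aid, also failed to act.
        if problem_exists and not aid_flagged_problem and not human_acted:
            return ErrorType.OMISSION
        # Commission: no real problem existed, but the aid raised a
        # false suggestion and the human acted on it anyway.
        if not problem_exists and aid_flagged_problem and human_acted:
            return ErrorType.COMMISSION
        return ErrorType.NONE

    # An operator who follows a false alarm commits a commission error.
    print(classify_outcome(False, True, True))  # ErrorType.COMMISSION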

Factors leading to over-reliance on automation include inexperience in a task (though inexperienced users tend to benefit most from automated decision support systems), lack of confidence in one's own abilities, reflexive trust in the automated system, a lack of readily available alternative information, and the desire to save time and effort on complex tasks or under high workloads.[1][3][4]

Automation bias can be mitigated by the design of automated systems, for example by reducing the prominence of the display, decreasing the detail or complexity of the information displayed, or framing automated assistance as supportive information rather than as directives or commands.[1] Training on an automated system that includes deliberately introduced errors has been shown to be significantly more effective at reducing automation bias than merely informing users that errors can occur.[5] However, excessive checking and questioning of automated assistance increases time pressure and task complexity, thereby reducing the benefits of the assistance; the design of an automated decision support system should therefore balance positive and negative effects rather than attempt to eliminate negative effects entirely.[3]
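
The error-exposure training idea can be illustrated with a short sketch. The following Python code generates training trials in which the automated aid's suggestion is deliberately wrong on a configurable fraction of trials; the names and the 20% default error rate are hypothetical choices for illustration, not parameters from the cited study:

    import random

    def training_trials(n_trials: int, error_rate: float = 0.2, seed: int = 1):
        # Yield (true_state, aid_suggestion, error_injected) tuples in which
        # the aid's suggestion is deliberately wrong on a fraction of trials,
        # so trainees experience automation failures firsthand.
        rng = random.Random(seed)
        for _ in range(n_trials):
            true_state = rng.choice(["fault", "nominal"])
            inject = rng.random() < error_rate
            suggestion = true_state
            if inject:
                # Deliberate error: report the opposite of the true state.
                suggestion = "nominal" if true_state == "fault" else "fault"
            yield true_state, suggestion, inject

    for truth, aid, injected in training_trials(10):
        print(f"truth={truth:7} aid={aid:7} injected_error={injected}")

Trials of this kind give trainees firsthand experience of the aid failing, which the training study cited above found to be more effective at reducing automation bias than a verbal warning alone.[5]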

References

  1. Goddard, K.; Roudsari, A.; Wyatt, J. C. (2012). "Automation bias: a systematic review of frequency, effect mediators, and mitigators". Journal of the American Medical Informatics Association. 19 (1): 121–127. doi:10.1136/amiajnl-2011-000089. PMC 3240751. PMID 21685142.
  2. Cummings, Mary (2004). "Automation Bias in Intelligent Time Critical Decision Support Systems" (PDF). AIAA 1st Intelligent Systems Technical Conference. doi:10.2514/6.2004-6313. ISBN 978-1-62410-080-2.
  3. Alberdi, Eugenio; Strigini, Lorenzo; Povyakalo, Andrey A.; Ayton, Peter (2009). "Why Are People's Decisions Sometimes Worse with Computer Support?". Computer Safety, Reliability, and Security. Lecture Notes in Computer Science. 5775. Springer Berlin Heidelberg. pp. 18–31. doi:10.1007/978-3-642-04468-7_3. ISBN 978-3-642-04467-0.
  4. Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C. (2014). "Automation bias: Empirical results assessing influencing factors". International Journal of Medical Informatics. 83 (5): 368–375. doi:10.1016/j.ijmedinf.2014.01.001. PMID 24581700.
  5. Bahner, J. Elin; Hüper, Anke-Dorothea; Manzey, Dietrich (2008). "Misuse of automated decision aids: Complacency, automation bias and the impact of training experience". International Journal of Human-Computer Studies. 66 (9): 688–699. doi:10.1016/j.ijhcs.2008.06.001.
