IT Professional | 2019

To Err is Human, to Forgive, AI


Abstract


Trustworthiness is an elusive quality. We may completely or partially trust relatives, friends, colleagues, or strangers. We also place a great deal of trust in the operators of airplanes, cars, medical prognoses, invasive medical devices, and other complex systems, potentially risking our lives in doing so. Similarly, we trust that the designers, builders, testers, operators, and maintainers of these complex systems took great care in ensuring safety and reliability. But no matter what measures are taken to ensure error-free operation, we acknowledge a certain level of risk of failure, even catastrophic failure, in these systems, because they are built and operated by humans. What about those systems that employ artificial intelligence (AI), such as driverless cars, autopilots, invasive medical devices, and certain types of systems in the Internet of Things? Do we expect these AI-enabled systems to operate in such a way that they can be trusted more than those that are operated only by humans? It seems to be headline news when an AI-capable system fails, particularly when the blame can be placed directly on the underlying "intelligence." But we should not be surprised when AI-enabled systems fail, and we can prove it to you. In fact, while we strongly advocate for such systems, we think we should insist on an even higher level of professionalism and rigor when developing and deploying AI systems.

Volume 21
Pages 4-7
DOI 10.1109/MITP.2019.2913265
Language English
Journal IT Professional
