December 4, 2016
EARLY DAYS:
What It Will Take for Us to Trust AI (Guru Banavar, NOVEMBER 29, 2016, Harvard Business Review)
The early days of artificial intelligence have been met with some very public hand-wringing. Well-respected technologists and business leaders have voiced their concerns over the (responsible) development of AI. And Hollywood's appetite for dystopian AI narratives appears to be bottomless.

This is not unusual, nor is it unreasonable. Change, technological or otherwise, always excites the imagination. And it often makes us a little uncomfortable.

But in my opinion, we have never known a technology with more potential to benefit society than artificial intelligence. We now have AI systems that learn from vast amounts of complex, unstructured information and turn it into actionable insight. It is not unreasonable to expect that within this growing body of digital data -- 2.5 exabytes every day -- lie the secrets to defeating cancer, reversing climate change, or managing the complexity of the global economy.

We also expect AI systems to pervasively support the decisions we make in our professional and personal lives in just a few years. In fact, this is already happening in many industries and governments. However, if we are ever to reap the full spectrum of societal and industrial benefits from artificial intelligence, we will first need to trust it.

Trust in AI systems will be earned over time, just as in any personal relationship. Put simply, we trust things that behave as we expect them to.
Posted by Orrin Judd at December 4, 2016 6:21 AM
