This cartoon about artificial intelligence is based on something I often hear: that it is impossible to understand how certain algorithms reach certain decisions. I wonder whether this problem stems from too little effort being devoted to ethical issues such as the transparency of the decision-making process. It's easy to say in the abstract that ethics is important. But to truly matter, ethics must be part of the primary design process, not a secondary consideration. The amount of innovation going into new technology is staggering. Although time and effort are being spent on ethics, far less innovation is going into the ethical side of technological design.
In recent years, there have been tremendous advances in artificial intelligence (AI). These rapid technological advances are raising myriad ethical issues, and much work remains to be done in thinking them through.
I am delighted to be interviewing Kurt Long about the topic of AI. Long is the founder and CEO of FairWarning, a cloud-based security company that provides data protection and governance for electronic health records, Salesforce, Office 365, and many other cloud applications. Long has extensive experience with AI and has thought a lot about its ethical ramifications.
Recently published by Cambridge University Press, Re-Engineering Humanity explores how artificial intelligence, automated decision-making, and the increasing use of Big Data are shaping the future of humanity. This excellent interdisciplinary book, co-authored by Professors Evan Selinger and Brett Frischmann, critically examines three interrelated questions. Under what circumstances can using technology make us more like simple machines than actualized human beings? Why does the diminution of our human potential matter? What will it take to build a high-tech future in which human beings can flourish? This is a book that will make you think about technology in a new and provocative way.