This cartoon is about algorithmic transparency. More and more decisions are being made by algorithms, and the logic and functioning of these algorithms is increasingly complex and opaque to people. Today's buzzwords are "artificial intelligence" and "machine learning." AI and machine learning encompass a number of different but related things, but what they generally share is a reliance on algorithms. As algorithms grow more complex and depend on being fed massive quantities of data, it becomes harder and harder to explain their reasoning. This is a big problem, because algorithms play a significant role in our lives and make some very important decisions about us.
In recent years, there have been tremendous advances in artificial intelligence (AI). These rapid technological advances are raising a myriad of ethical issues, and much work remains to be done in thinking them through.
I am delighted to be interviewing Kurt Long about the topic of AI. Long is the founder and CEO of FairWarning, a cloud-based security company that provides data protection and governance for electronic health records, Salesforce, Office 365, and many other cloud applications. Long has extensive experience with AI and has thought deeply about its ethical ramifications.
Recently published by Cambridge University Press, Re-Engineering Humanity explores how artificial intelligence, automated decision-making, and the increasing use of Big Data are shaping the future of humanity. This excellent interdisciplinary book, co-authored by Professors Evan Selinger and Brett Frischmann, critically examines three interrelated questions. Under what circumstances can using technology make us more like simple machines than actualized human beings? Why does the diminution of our human potential matter? What will it take to build a high-tech future in which human beings can flourish? This is a book that will make you think about technology in a new and provocative way.