
The Ethics of Artificial Intelligence: An Interview with Kurt Long

In recent years, there have been tremendous advances in artificial intelligence (AI). These rapid advances are raising a myriad of ethical issues, and much work remains to be done in thinking them through.

I am delighted to be interviewing Kurt Long about the topic of AI. Long is the creator and CEO of FairWarning, a cloud-based security company that provides data protection and governance for electronic health records, Salesforce, Office 365, and many other cloud applications. Long has extensive experience with AI and has thought a lot about its ethical ramifications.

SOLOVE: There is some confusion and disagreement about the definitions of artificial intelligence (AI) as well as machine learning. Please explain how you understand these concepts.

LONG: AI is essentially the science, and machine learning is what makes it possible. AI is the broad concept, and machine learning is the application of AI that’s currently most widely in use. Machine learning is one way to “create” artificial intelligence.

Stanford defines machine learning as “the science of getting computers to act without being explicitly programmed.”

Here is an interpretation from Intel’s head of machine learning, Nidhi Chappell: “AI is basically the intelligence – how we make machines intelligent, while machine learning is the implementation of the compute methods that support it. The way I think of it is: AI is the science and machine learning is the algorithms that make the machines smarter. So the enabler for AI is machine learning.”
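
As a rough sketch of that “not explicitly programmed” idea, consider the toy example below. Nothing in it comes from the interview: the spam-word counts, labels, and threshold rule are invented purely for illustration. The point is that the program is never handed the rule; it derives one from labeled examples.

```python
# Toy illustration of "learning without being explicitly programmed":
# rather than hard-coding a spam rule, the program infers one from examples.

# Hypothetical training data: (count of suspicious words, is_spam label).
examples = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]

def learn_threshold(data):
    """Choose the cutoff that misclassifies the fewest training examples."""
    def errors(t):
        return sum((count >= t) != label for count, label in data)
    return min(range(11), key=errors)

threshold = learn_threshold(examples)  # the "rule" is learned, not written
print(f"Learned rule: flag as spam if suspicious words >= {threshold}")
print("Prediction for a message with 6 suspicious words:", 6 >= threshold)
```

Swap in different training examples and the learned rule changes with them, which is exactly why the quality of that data matters so much in what follows.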

SOLOVE: What are some of the things that have gone wrong with AI and that could potentially go wrong? 

LONG: While AI has the potential to solve big global challenges, there have been numerous cases where AI has produced troubling results, even if the technology was well intended.

  • Microsoft released an AI chatbot onto Twitter. Engineers programmed the bot to learn from and speak like other users on the social network, but after only 16 hours, the bot was shut down for posting racist tweets (link).
  • Uber’s AI-powered self-driving car failed to recognize six red lights and ran through one of them where pedestrians were present (link).
  • The Houston Independent School District used the EVAAS (Educational Value Added Assessment System) program to rate teachers by comparing student test scores against state averages. The district’s goal was to fire 85% of “low-performing” teachers. The teachers took the case to court (and won) on the basis that the software developer would not disclose how its algorithm worked (link).

Poor data quality poses big risks when it comes to AI. Even if you have a robust amount of data, you can still run into problems if bias is inherent in the training sets or if the data is inaccurate. It is up to humans to train machines with ethics and human alignment in mind.
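
To make that concrete, here is a toy sketch of how a skewed training set flows straight through into skewed predictions. The groups, labels, and “model” are hypothetical, invented for demonstration rather than drawn from any real system.

```python
# Toy illustration of training-set bias: hypothetical historical decisions
# in which applicants from group B were systematically under-approved.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(data):
    """A naive 'model' that learns each group's historical approval rate."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [approved for g, approved in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

# The model faithfully reproduces the skew: A -> 0.75, B -> 0.25.
# Plenty of data, no bugs -- and still an ethically troubling system.
for group, rate in sorted(train(history).items()):
    print(f"group {group}: predicted approval rate {rate:.2f}")
```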

SOLOVE: Why are ethics necessary for AI?

LONG: AI is a powerful tool that stands to greatly benefit society and our quality of life, but the intended use of the technology, and the ethics governing it, need to be established before implementing it.

It’s everyone’s obligation to make sure that these technologies are being used to further an ethical goal or objective. The Institute of Electrical and Electronics Engineers (IEEE) addresses the importance of ethics in AI in section 7.8 of its policies, highlighting the need to avoid bias and bribery and to ensure health and safety.

SOLOVE: What types of ethical rules would you recommend?

LONG: In addition to the ethical rules highlighted in IEEE’s policies, there are three principles that can be generally applied when considering the ethical and legal use of AI.

The first one is transparency. There should be no “black box algorithms” whose decisions are understood only by the machine. There needs to be a level of transparency associated with machine learning systems. This applies to consent, the intended use of data, the data used to train machines, and how the machine makes decisions (recall the EVAAS case).
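
One way to picture the alternative to a black box is a model that can report why it produced each score, not just the score itself. The sketch below is hypothetical: the features and weights are made up, not taken from EVAAS or any real scoring system.

```python
# A transparent scorer: every decision decomposes into per-feature
# contributions that a person can inspect, question, and contest.
weights = {"attendance_rate": 2.0, "test_score_gain": 1.5}  # hypothetical

def score(record):
    contributions = {k: weights[k] * record[k] for k in weights}
    return sum(contributions.values()), contributions

total, why = score({"attendance_rate": 0.9, "test_score_gain": 1.2})
print(f"score = {total:.2f}")
for feature, contribution in why.items():
    print(f"  {feature} contributed {contribution:+.2f}")
```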

The second principle is alignment with values at scale. AI must be aligned with the values of the technology’s recipients and participants, the technology’s vendor and users, and the law. This reduces the risk of unexpected outcomes or inadvertent results.

Lastly, there should be a human in the loop. These learning systems need to be supervised to ensure that we understand how they draw conclusions, and the final determination of any action taken should be made by a human.
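
A minimal sketch of that supervision pattern might look like the following. The confidence threshold and reviewer stub are hypothetical assumptions of this illustration, not any particular vendor’s design: the system acts on its own only when its confidence is high, and routes everything else to a person for the final call.

```python
# Human-in-the-loop gate: the model proposes, and a person makes the
# final determination whenever confidence falls below a policy threshold.
CONFIDENCE_THRESHOLD = 0.90  # hypothetical policy choice

def reviewer(prediction, confidence):
    """Stand-in for a real review queue; a person would decide here."""
    print(f"escalated: model suggests {prediction} at {confidence:.0%} confidence")
    return False  # in this example, the human overrides the model

def decide(prediction, confidence):
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction, "automated (high confidence)"
    return reviewer(prediction, confidence), "final determination by a human"

print(decide(True, 0.97))  # acted on automatically
print(decide(True, 0.62))  # routed to the reviewer
```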

SOLOVE: Are there disagreements about the ethical rules that are needed? Or are you seeing consensus?

LONG: With regulations like the GDPR and incidents like Facebook’s potential privacy violations in the news, people are becoming more aware of what can be done with their personal data, whether directly or inadvertently.

As we move toward a society that’s more conscious of how these technologies are being used, I think there is a general consensus that these technologies should be used for the good of society, and therefore we need ethical rules for using AI.

That’s not to say that the researchers, think tanks, and thought leaders associated with this movement all agree on exactly what those ethical rules are and how they should be framed, but we are definitely seeing a strong movement toward standards and principles that can be applied globally.

SOLOVE: One of the challenges with ethical rules is that they are often voluntary. Some creators of AI technology might not follow them. Should there be laws rather than voluntary ethical rules? Are there any dangers with using laws as opposed to ethical rules to govern AI?

LONG: When it comes to law and the ethics of AI, the two should be commingled. It’s complex, since an organization may not intend to use AI maliciously but may still carelessly cause harm through flawed data sets, a lack of transparency, or insufficient human involvement.

So there is still work to be done in distinguishing bad intentions from willful neglect, but I do believe that laws can, and should, be used to enforce ethical rules.

Either way, vendors and consumers alike should be educated on the considerations of using AI, and they should be held accountable for the outcomes of machine learning. Its use should be transparent and legally defensible in court.

SOLOVE: How do we keep AI and machine learning under control when they are continually evolving in ways that are unexpected? 

LONG: This goes back to maintaining transparency and understanding the technologies you are using while keeping a human in the loop. AI is not something that should be unleashed to derive outcomes from whatever it evolves into. AI should be an extension of human work and used to empower this work – not to undermine it or replace humans.

Thanks, Kurt, for discussing this topic with me.

If you liked this interview, you might be interested in Kurt’s essay, Aligning Your Healthcare Organization with AMA’s AI Policy Recommendations.

* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training. He also blogs at LinkedIn, where he has more than 1 million followers.

Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum (Oct. 3-5, 2018 in Washington, DC), an annual event designed for seasoned professionals. 

This post was originally published on LinkedIn.
