PRIVACY + SECURITY BLOG

News, Developments, and Insights


Cartoon: Facial Recognition


Facial recognition technology involves using algorithms to identify people based on their faces. Distinctive details about people’s faces are compiled into “face templates,” which are then stored in a database and used to find facial matches.

Facial recognition is quickly being deployed by many companies for various purposes, such as authenticating identity (unlocking smartphones) and identifying people in photos. Other uses include tracking people’s location and behavior. Facial recognition technology can also detect people’s emotions – an ability that could be used to manipulate people.
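
To make the template-and-match idea above concrete, here is a minimal sketch in Python. It is only an illustration of the general approach: the “templates” are stand-in NumPy vectors (a real system would derive them from a face-embedding model), and the names, vector size, and similarity threshold are hypothetical.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrolled database: identity -> stored face template.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def identify(probe: np.ndarray, threshold: float = 0.6):
    """Return the best-matching enrolled identity, or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A probe resembling Alice's stored template (her template plus a little noise).
probe = database["alice"] + rng.normal(scale=0.1, size=128)
print(identify(probe))  # likely "alice"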

Continue Reading

Cartoon: The Travails of CCPA Compliance

[Cartoon: CCPA compliance as a Sisyphean task]

This cartoon depicts the travails of complying with the CCPA as it rapidly evolves. The CCPA originated when a referendum on consumer privacy rights was scheduled for the November 2018 ballot. Alastair Mactaggart, the referendum’s sponsor, offered to withdraw it if California passed a privacy law. So, in the summer of 2018, the California legislature passed the CCPA in an all-out dash to beat the deadline for the referendum’s withdrawal.

Businesses scrambled to get ready to comply by the CCPA’s effective date – January 2020. Being ready to comply with the CCPA requires quite a lot of work. Further complicating compliance, the CCPA is riddled with ambiguities and difficult tradeoffs between privacy and data security.

Continue Reading

Cartoon: Social Media


It is hard to imagine a world without social media. People are increasingly relying on social media to maintain friendships, share photos and happenings with family, and keep current with the news.  But there’s a dark side – more superficial relationships, cyberbullying, harassment, hate speech, and manipulation. Social media has become a cesspool of lies and misinformation campaigns, a place where radicalized hate groups can spread their venom, recruit more members, and rally their followers to attack.

Several prominent social media sites are struggling to figure out what to do. In the early days of the commercial Internet (mid 1990s through early 2000s), idealists pushed a vision of the Internet as a free speech zone. Bad speech would be countered and beaten by good speech, lies would be defeated by truth, and freedom and happiness would reign.  Platforms could just remain neutral and rarely intervene.  They could mainly let the battles be fought, with the faith that eventually the forces of good would win out over the forces of evil.

But this view is naive. Over the past 10-15 years, we have seen lies, hate, harassment, defamation, invasion of privacy, and many other social ills fester online. Social media platforms must wake up and realize that reality has not followed the earlier idealism. A position of neutrality isn’t appropriate. Platforms must intervene more; they must govern.

Social media platforms currently lack much experience and skill with governance. They don’t have enough personnel with the background to formulate wise rules, procedures, and due process. But the call for platforms to govern is increasing in volume, and they can’t keep avoiding it. This is why social media companies should start hiring more people trained in the humanities, who often have a background in thinking through complicated moral and philosophical issues.

Continue Reading

Cartoon: Data Use and Transparency


Wouldn’t it be nice if companies were completely transparent in their privacy notices? Typically, privacy notices are filled with long, clunky prose that manages to say hardly anything meaningful to consumers. These notices are written by lawyers who carefully craft every sentence so that nothing pins the company down. The drafters of privacy notices do this because it is difficult to anticipate all the uses of personal data that might prove fruitful in the future. Companies want to avoid making promises that too narrowly limit how they might use personal data. Such promises could tie their hands, making them less nimble in the dynamic and fast-paced world of business in the digital age.

From a business standpoint, having greater room to use personal data in different ways is a great benefit. From a consumer standpoint, however, the result is that people are not adequately informed about how their data is being used.

Additionally, companies often have many different things going on with personal data, and there frequently isn’t a strong enough central command structure to oversee everything that’s happening.  Companies aren’t evil in all of this, but the interests of companies and those of consumers are often not fully aligned.

Continue Reading

Cartoon: Algorithmic Transparency


This cartoon is about algorithmic transparency. Today, more and more decisions are being made by algorithms, and the logic and functioning of these algorithms are increasingly complex and opaque to people. The buzzwords of the moment are “artificial intelligence” and “machine learning.” AI and machine learning refer to a number of different but related things, but what they generally share is a reliance on algorithms. As algorithms become more complex and rely on being fed massive quantities of data, it becomes harder and harder to explain their reasoning. This is a big problem because algorithms play a significant role in our lives, making some very important decisions.
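
As a rough illustration of the opacity problem (not any real system’s algorithm), the Python sketch below makes a yes/no decision from hundreds of learned weights and then tries a crude “perturb and observe” probe, one common way practitioners attempt to explain an opaque model’s output. All names and numbers here are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# An opaque scoring model: hundreds of learned weights over engineered features.
weights = rng.normal(size=300)

def black_box_decision(features: np.ndarray) -> bool:
    """Approve (True) or deny (False); the 'reasoning' is just a weighted sum."""
    return float(features @ weights) > 0.0

applicant = rng.normal(size=300)
baseline = black_box_decision(applicant)
print("decision:", baseline)

# Crude probe: zero out each feature and see whether the decision flips.
# Often no single feature flips it, which is part of why such decisions
# are hard to explain in human terms.
influential = []
for i in range(applicant.size):
    perturbed = applicant.copy()
    perturbed[i] = 0.0
    if black_box_decision(perturbed) != baseline:
        influential.append(i)
print("features that individually flip the decision:", influential)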

Continue Reading