PRIVACY + SECURITY BLOG

News, Developments, and Insights


Re-Engineering Humanity

Recently published by Cambridge University Press, Re-Engineering Humanity explores how artificial intelligence, automated decision-making, and the increasing use of Big Data are shaping the future of humanity. This excellent interdisciplinary book is co-authored by Professors Evan Selinger and Brett Frischmann, and it critically examines three interrelated questions. Under what circumstances can using technology make us more like simple machines than actualized human beings? Why does the diminution of our human potential matter? What will it take to build a high-tech future in which human beings can flourish? This is a book that will make you think about technology in a new and provocative way.


Below is a short interview with Professor Evan Selinger, who teaches philosophy at Rochester Institute of Technology and has written extensively on technology and ethics, privacy, and expertise. Earlier this month, a book that Prof. Selinger co-edited with Jules Polonetsky and Omer Tene, The Cambridge Companion to Consumer Privacy, was also published by Cambridge University Press.

Evan Selinger

SOLOVE: Why does your book focus on human beings?

SELINGER: Words matter. When you pick a good concept, it provides a helpful perspective for describing, explaining, assessing, and predicting. No concept is perfect. Depending on the context and goal, some concepts are better than others. When analyzing the impact of technology, it’s sometimes useful to focus on “consumers,” “data subjects,” “end-users,” and “citizens.” Since Prof. Frischmann and I want to connect the social, ethical, and legal dots between topics that have more in common than meets the eye—including GPS navigation, smartphone addiction, one-click shopping, fitness trackers, electronic contracts, social media platforms, targeted advertising, robotic companions, fake news, and autonomous cars—we turned to the most common denominator: human beings. We argue that the technological environments people live, work, and play in impact how they think, perceive, and act, and we call this impactful process the techno-social engineering of humanity. What’s at stake is nothing less than the targeting of our senses, cognition, emotions, preferences, dispositions, character, identities, and willpower.

Ultimately, technological habits shape our understanding of what it means to be human: not just who we are, but also who we believe we should strive to become and what world we deem fit for future generations. Prof. Frischmann and I develop our account of why being human matters from insights conveyed in the humanities, social sciences, legal studies, computer studies, and natural sciences, like evolutionary biology. Crucially, we’re greatly inspired by the privacy literature that emphasizes the value of human personhood, and, with it, the importance of developing institutions to defend against infringements upon autonomy, dignity, intimacy, trust, socialization, and intellectual development.


SOLOVE: How does your analysis of humanity relate to concerns about privacy?

SELINGER: The big technology companies are doing their best to seduce the public with promises of easy, frictionless living, all the while throwing the full force of artificial intelligence, machine learning, big data, ubiquitous computing, and weaponizable scientific insights into trying to transform society into a collection of stimulus-response cogs who become ever-more predictable by becoming ever-more programmable. Relatedly, the workplace has people afraid of being replaced by automation or despairing about laboring in jobs where they’re treated like automatons. I don’t know if you saw the recent coverage on how bad it is to work in an Amazon warehouse, but the latest reporting on the heavily micro-managed positions—where algorithms call many of the shots—states that harried fulfillment employees are so afraid of being rebuked for taking breaks that they’re urinating into bottles to avoid the longer time it would take to walk to a bathroom. And, finally, policy-makers are clamoring for a world with “smart” technology embedded in everything from smart homes to smart transportation systems. Such a world is poised to be optimized for a distinctive type of people: folks who are tethered to networked computational technologies that are always-observing, always-analyzing, and always-acting.

One danger that Prof. Frischmann and I emphasize is that there might be diminishing opportunities for people to escape from surveillance by going offline or to environments that are designed with gaps and inconsistencies between techno-social systems. As these opportunities shrink, so does the human capacity to experience what privacy scholar and law professor Julie Cohen calls “breathing room.” Breathing room is a form of liberation that, in many ways, overlaps with outcomes that privacy professionals champion when they fight against the detrimental consequences of chilled speech, chilled behavior, and manipulation. Breathing room typically requires special environments that minimize our exposure to being disconcertingly observed, negatively judged, and nudged to behave as others want us to. Once nourished by breathing room, people can feel freed up to question how they’re being programmed—the externally imposed and externally valorized procedures, protocols, and plans that guide their lives—and feel emboldened to experiment with more self-directed decisions.

The spaces where people can look for breathing room today are already being aggressively colonized thanks to smartphone dependency. People often feel pressured to bring these devices around the house, on trips from here to there, into the classroom, and even to places once deemed prime locations for solitude, like hiking trails. Things might get worse—much worse—when the Internet of Things matures.

SOLOVE: Privacy professionals are looking for ways to improve upon the notice and consent model that is satisfied by standard terms of service contracts. You and your co-author have a new take on the situation. Can you summarize it?

SELINGER: By now, it’s well-understood that the electronic contracting environment isn’t constructed for a true meeting of the minds to take place. Instead, our legal regime allows for the typical digital contracting environment to be designed in ways that make it perfectly reasonable for people to not even bother to try to read the wording found in terms of service. After all, the wording is filled with boilerplate legal jargon that most people find impenetrable; and even if you somehow knew what the terms mean and had time to consider them, you’d still be unable to negotiate the take-it-or-leave-it offers. Consequently, in a spirit of resignation or apathy, people habitually go with the flow of TOS user interfaces. See a button on a webpage or app screen that says “I agree.” Click it without giving the matter further thought. Move on with your life until some scandal occurs, like Cambridge Analytica.

Our analysis moves beyond identifying the harms that can occur when consumers are placed in situations where they can’t possibly understand the full costs of entering into an agreement, such as the flow of third-party benefits. Indeed, we focus on something that’s outside the purview of legal scrutiny—the possibility that electronic contracting environments function as tools for conditioning the human mind: the more we enter into these environments, the more accustomed we become to behaving like simple machines that exhibit automatic behavior when presented with triggering stimuli. From this perspective, the fundamental human problem at stake is that our autonomy might be diminishing over time as a result of being constantly exposed to situations where we should want to think through important things but find meaningful deliberation to be impractical.

We’re not pointing fingers and saying that anyone has maliciously designed the click-to-contract environment to be oppressive. Instead, it seems likely that Taylorist ideas about efficiency-oriented systems have had such a profound impact on web design principles (which include studying eye tracking, click rates, and related time and motion activities) that consumer transaction costs have been thoroughly reduced: extracting maximum contractual enforceability only requires interactions that take minimal effort. Again, see, scroll, then click. The same pattern is exhibited in smartphones and smart TVs. Who knows where the spread of this form of contract creep will end?

Fixing this problem is a Herculean endeavor, to say the least. In the book, Prof. Frischmann and I offer three proposals that introduce friction into electronic contracting practice. I’ll mention two here. The first has to do with courts supplementing the contract principles of voluntariness and absence of coercion and duress with a deliberation principle that would rule out automatic contracting. The second has to do with reducing the scope of contract law so that it’s concerned with meaningful relationships and less prone to expanding through proliferating contract creep.

SOLOVE: You and your co-author discuss the problem of “techno-social engineering creep.” What is it and how does it relate to the issue of privacy creep?

SELINGER: To understand “techno-social engineering creep,” you need to understand the more basic concept of “function creep.” One way that function creep occurs is when tools that were designed for one purpose take on new functions over time. For example, consider a U.S. driver’s license. It went from being proof that a person could legally drive a car to a valid credential for buying alcohol and getting into age-restricted venues, like nightclubs. After the post-9/11 Real ID Act, Real IDs have become counter-terrorism tools for protecting commercial airlines and federal buildings.

Instead of renewing my standard NY State driver’s license, I just filled out the paperwork for a Real ID so that I could continue to fly after October 1, 2020. I told my father-in-law that he should consider doing this, too, and let him know that he’ll need to show additional proof at the DMV to qualify for one. He thought about the situation and suggested that the time has come to put microchips in citizens so that they could stop worrying about carrying proof of identity documents. He thought the proposal wouldn’t be too hard to implement because people are already used to being tracked through their mobile phones. While there are good reasons to expect that many in fact would resist such a political proposal, my father-in-law said something important. The underlying logic of this view is that once people become accustomed to new normals, proposals for things that once seemed outlandish become easier to implement. The concept of “techno-social engineering creep” helps explain this phenomenon. It’s the idea that technological habits can influence expectations of what the future should look like.

In the book, Prof. Frischmann and I try to explain how people become complacent about surveillance. One of our theses is that seemingly mundane technologies, like fitness trackers, help normalize tolerance for surveillance in an increasing number of situations. For example, when schools introduce fitness trackers into physical education classes to create efficient mechanisms for students to report how much walking or running they’re doing, they also run the risk of reinforcing the message that 24/7 surveillance isn’t just helpful for combatting obesity but is also a process that trusted authorities are comfortable recommending. While such indoctrination isn’t inevitable, the fact remains that tracking devices are integral components of various programs that don’t take active steps to ensure that participants are acquiring anything like privacy literacy.

Techno-social engineering creep is thus an important component of “surveillance creep.” Surveillance creep occurs as surveillance gradually expands over time: the collection of more and more data becomes normalized, or the increased use and sharing of data becomes normalized. One of the reasons why these trajectories can take off is that they piggy-back on people becoming accustomed to surveillance being no big deal or obviously worth experiencing for the benefits it provides.

I’d also like to note that law professor and privacy scholar Danielle Citron makes excellent use of these ideas in her law review article, “Extremist Speech, Compelled Conformity, and Censorship Creep.” This paper demonstrates that lots of important work needs to be done on creeping ideas and practices.

SOLOVE: Thanks, Evan, for providing us with some very interesting questions to think about. The book is Re-Engineering Humanity and is by Brett Frischmann and Evan Selinger. 

* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy training, data security training, HIPAA training, and many other forms of awareness training on privacy and security topics. Professor Solove also posts on his LinkedIn blog, which has more than 1 million followers.

Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum (Oct. 3-5, 2018 in Washington, DC), an annual event that aims to bridge the silos between privacy and security.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter  

TWITTER: Follow Professor Solove on Twitter.
