

Interview with Lior Strahilevitz: Information and Exclusion

Lior Strahilevitz, Deputy Dean and Sidley Austin Professor of Law at the University of Chicago Law School, recently published a brilliant new book, Information and Exclusion (Yale University Press 2011).  Like all of Lior’s work, the book is creative, thought-provoking, and compelling.  There are books that make strong and convincing arguments, and these are good, but then there are the rare books that not only do this but also make you think in a different way.  That’s what Lior achieves in his book, and that’s quite an achievement.

I recently had the opportunity to chat with Lior about the book. 

Daniel J. Solove (DJS): What drew you to the topic of exclusion?

Lior Jacob Strahilevitz (LJS):  It was an observation I had as a college sophomore.  I lived in the student housing cooperatives at Berkeley.  Some of my friends who lived in the cooperatives told me they felt morally superior to people in the fraternities and sororities because the Greek system had an elaborate, exclusionary rush and pledge process.  The cooperatives, by contrast, were open to any student.  But as I visited friends who lived in the various cooperative houses, the individual houses often seemed no more heterogeneous than the fraternities and sororities.  That made me curious.  It was obvious that the pledging and rushing process – formal exclusion – created homogeneity in the Greek system.  But what was it that was creating all this apparent homogeneity in a cooperative system that was open to everyone?  That question was one I kept wondering about as a law student, lawyer, and professor.

That’s why page 1 of the book begins with a discussion of exclusion in the Greek system.  I start with accounts of the rush process by sociologists who studied the proxies that fraternity members used to evaluate pledges in the 1950s (attire, diction, grooming, firm handshakes, etc.).  The book then brings us to the modern era, when fraternity members peruse Facebook profiles that provide far more granular information about the characteristics of each pledge.  Proxies still matter, but the proxies are different, and those differences alter the ways in which rushing students behave and fraternities exclude.

DJS: What is the central idea in your book?

LJS: The core idea is that asymmetric information largely determines which mechanisms are used to exclude people from particular groups, collective resources, and services.  When the person who controls a resource knows a lot about the people who wish to use it, she will make decisions about who gets to access it.  Where she lacks that information, she’ll develop a strategy that forces particular groups to exclude themselves from the resource, based on some criteria.  There’s a historical ebb and flow between these two sorts of strategies for exclusion, but we seem to be in a critical transition period right now thanks to the decline of practical obscurity in the information age.

That sounds really abstract, so let me illustrate the idea with a historical example: eighteenth century British welfare.  The population was extremely immobile.  Poor people often were born, lived, and died in a single county.  The local charities that dispensed welfare knew who was a genuine hard-luck case and who was a lout.  They could give aid to the former while refusing the latter.  Then, in the nineteenth century, rapid urbanization occurred.  Poor people suddenly became mobile, and every local dispenser of charity began encountering scores of people he had never seen before.  He could no longer easily separate the deserving from the undeserving poor.  So dispensers of public charity in England switched to a “workhouse” model.  You were only eligible for government welfare if you lived in a workhouse.  Life in the workhouses was crummy.  They were bleak.  They were crowded.  There was no booze allowed.  And they put people to work if they wished to be fed.  Unable to sort effectively among different kinds of welfare recipients, the British developed a test that the “deserving poor” were much more likely to pass.  Instead of excluding the louts from welfare, they forced the louts to exclude themselves.

Today, we’re increasingly coming to resemble eighteenth century Britain, not nineteenth century Britain.  Look at what India is doing with biometrics and databases right now.  They are using modern technologies to “turn back the clock” to the sorts of relationships between the state and the citizen that we saw in eighteenth century Britain.  And they’re correctly invoking notions of meritocracy, fairness, and efficiency to do it.

As you’ve written, there are “digital dossiers” on all of us, which are made increasingly available at very low cost.  Facial recognition software, combined with massive public and private photo databases, is eroding privacy in public spaces.  DNA databases are growing.  Behavioral profiling and data mining are exploding.  Location tracking through GPS-enabled smartphones is becoming commonplace.  So the dynamics of exclusion are shifting once again, away from strategies that bundle access to collective resources with disamenities that are unpalatable to members of the group targeted for exclusion.  The government and the private sector have lots of information about individuals once again, so they can sort people themselves rather than trying to induce people to self-assess and self-sort.  My book explores what’s at stake with this shift from one form of exclusion to another.  You can achieve homogeneity with either strategy, but the different strategies produce very different sets of costs and benefits for the people being excluded, the people being included, and the people doing the excluding.

DJS: What do you consider to be the most surprising or controversial implication of your theories in the book?

LJS: The most controversial idea is that the government ought to use information policy to affect private actors’ choices about whether to exclude and how to exclude.  It’s uncontroversial that the government can ban private discrimination by employers or landlords.  But we have to realize that the government can affect the incidence of discrimination through more creative tools as well.  Where the government sees employers engaged in statistical discrimination, it can supplement traditional law enforcement tools with “searchlight strategies” to publicize previously private information.  To take a salient example, we know that employers seeking to hire entry-level blue-collar workers discriminate against African American males in part because they overestimate the propensity of African American males to have criminal records.  Because of this overestimation, publishing complete information about criminal histories for everyone would likely reduce the incidence of statistical discrimination, increasing the employment prospects of African American males as a group.

I extend this searchlight approach to develop a bunch of proposals for how the state can use information policy to further antidiscrimination interests.  For example, the book proposes promoting the use of Electronic Medical Records as a strategy for reducing physicians’ tendency to prescribe narcotics in a racially discriminatory way, and subsidizing Yelp and Angie’s List to make people less reliant on ethnic preferences in selecting contractors.  These strategies can supplement orthodox tools of antidiscrimination law like public enforcement and private causes of action.

Of course, this approach to combating discrimination raises all kinds of thorny questions: Should the government suppress information when doing so might reduce undesirable forms of statistical discrimination?  Once information is released, can it be revoked if its disclosure surprisingly backfires?  What should be done to weed out false information or customer feedback that is itself influenced by racial animus?  I talk about the answers to these important questions in the book.

The book also considers whether racism prompts people to move to residential communities built around mandatory-membership golf clubs.  That’s another controversial hypothesis, and it’s part of a discussion of how real estate developers are really selecting populations of residents when they decide which amenities should be bundled into a new community.  Yet those decisions about bundling go virtually unregulated by fair housing laws.

DJS: You have very nuanced views about privacy, but my sense is that you see a small role for privacy in a well-functioning society — not a large one.  Is that correct?  And you argue that we need to distinguish between instances where privacy is desirable and areas where it is counterproductive.  How are we to make these determinations?  Do you have a set of guiding factors or considerations?

LJS: I believe that privacy is an intermediate good.  It can be a means toward important ends, but it is never an end unto itself.  Privacy can be undesirable when it results in racial discrimination, or cyber-bullying, or fraud, or sexual harassment in public spaces.  Privacy is worth fighting for when it facilitates human intimacy, or when it nurtures representative democracy, or when it prompts people to seek out medical attention, or when it fosters experimentation that leads to self-discovery.  A satisfying answer to the question, “What’s the benefit of more privacy?” has to be something beyond “more privacy.”  Advocates and scholars sometimes fail to appreciate this essential aspect of information privacy.

To take an example that’s particularly near and dear to my scholarly agenda, can you imagine what life would be like on our urban and suburban roadways if cars didn’t have license plates?  There’d be more “privacy.”  There’d also be a gigantic increase in unlawful, aggressive, and antisocial driving.  We’d have many more roadway accidents and fatalities.  Privacy advocates have helped kill off red-light cameras, automated ticketing for speeding based on E-ZPass or toll booth data, and other traffic safety innovations.  What important interests are being served by privacy in this context?  In the context of red-light cameras with proper data minimization controls, I don’t see any legitimate interest that privacy is serving, but I see a lot of blood on the pavement if privacy interests kill off the technology’s use.  There is also the boy-who-cried-wolf problem.  Every time privacy is invoked to defend trivial interests, it weakens the force of privacy arguments in contexts where privacy protections do enormous good.

Thanks, Lior, for answering my questions.  Lior’s book is Information and Exclusion (Yale University Press 2011).  This is definitely a book for the must-read list.

Originally posted on Concurring Opinions

* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy training, data security training, HIPAA training, and many other forms of awareness training on privacy and security topics.  Professor Solove also posts on his blog at LinkedIn, which has more than 1 million followers.

Professor Solove is the organizer, along with Paul Schwartz, of the Privacy + Security Forum and International Privacy + Security Forum, annual events designed for seasoned professionals.

If you are interested in privacy and data security issues, there are many great ways Professor Solove can help you stay informed:
* LinkedIn Influencer blog
* Twitter
* Newsletter
