This weekend, the results of an experiment conducted by researchers and Facebook were released, sparking a fierce debate over the ethics of the endeavor. The experiment involved 689,003 Facebook users whose News Feeds were adjusted to contain either more positive or more negative emotional content. The researchers were examining whether this had an effect on these people’s moods. And it did, albeit a small one. People exposed to more positive content wrote posts that were more positive, and those exposed to more negative content wrote posts that were more negative, as measured by the types of words they used.
The experiment provoked a fierce response from critics, some of whom decried it as unethical and creepy. In my view, it isn’t productive to castigate Facebook or the researchers, because the problems here emerge from some very difficult unresolved issues that go far beyond this experiment and Facebook. I want to explore these issues, because I’m more interested in making progress on them than in casting stones.
For trial attorneys, a key component to winning is carefully selecting people for the jury and tailoring arguments to best influence, nudge, or perhaps even manipulate jurors into reaching a particular verdict. As a result, there is a hunger to learn about the private lives of jurors, and serving on a jury can entail a huge loss of privacy.
Facebook settled with the FTC over changes to its privacy policies made back in 2009. According to the FTC complaint, as summarized in the FTC press release, Facebook engaged in a number of unfair and deceptive trade practices: