Facebook and its Discontents
Why the Oversight Board's pending decision on Trump's account won't solve anything.
Sometime soon, Facebook’s Oversight Board—the quasi-independent panel of 20 lawyers, journalists, academics, human rights advocates and ex-politicians from around the world—is going to issue its first big ruling: whether to overturn the company’s decision to indefinitely suspend ex-President Trump’s access to his Facebook and Instagram accounts, a choice CEO Mark Zuckerberg made January 7 in the wake of The Former Guy’s fomenting of the assault on the US Capitol a day before. Whatever decision the board makes, it is bound to be controversial.
Most observers expect the oversight board to restore Trump’s accounts, giving him back his 35 million Facebook followers and 24 million Instagram fans. That’s because the board has framed the case before it as focused on the two inflammatory Trump posts that led to his accounts’ suspension, rather than on the totality of his use of the two platforms to undermine public faith in the electoral process and the peaceful transfer of power.
The central problem we have in evaluating the role of Big Tech platforms like Facebook and Instagram in society is that the value system that built these amazing tools approaches the world from the individual outward rather than from the society inward. Tech founders like Zuckerberg didn’t ask how we should use software to help make democracy as a whole healthier for all. They asked: How can we enable individuals to have more voice and more connections? Then, with the help and encouragement of venture capitalists dangling billion-dollar signs in front of them, they signed up millions of users, colonizing our public lives, turning users and their connections into products to be sold to advertisers, and all along claiming that the whole resulting platform must be good for society because so many people were using it.
The fact that Facebook knows how to change its algorithms to make News Feed content less toxic to a democracy like the US, but has chosen to do so only for brief periods around elections like the one last fall, ought to be enough proof both of the company’s awe-ful power and of the arbitrariness with which it wields that power. But no.
Last week, Nick Clegg, Facebook’s Vice President of Global Affairs (formerly the Deputy Prime Minister of the United Kingdom and a one-time Nation magazine intern whom I recall fondly from his stint there in 1990) posted a lengthy essay on his personal Medium page that tried to dance away from the responsibilities burdening the company. The piece, titled “You and the Algorithm: It Takes Two to Tango,” took on Facebook critics who have asserted that the company’s software manipulates people’s attitudes, opinions and desires, portraying us ordinary humans as powerless victims robbed of our free will. Quite correctly, Clegg pointed out that what users see on Facebook is a complex blend of what they consciously choose to see, how they prioritize who or what they are interested in, and what the platform’s algorithms “learn” from watching user behavior. Clegg also noted that Facebook doesn’t only serve up sensational content to ensure engagement, but that it also, at times, exercises judgment to nudge people toward more authoritative content, such as on COVID-19 or the 2020 election.
Clegg is right to point out that people aren’t powerless before Facebook, and it’s only because a small but very prominent group of critics like Tristan Harris, who is featured in the widely viewed Netflix documentary The Social Dilemma, has been so good at promoting that claim that Clegg gets to win points for knocking it down. Individually, none of us is powerless before any widely used technology. You can shut down your computer, hide your face from surveillance cameras, stop driving your car, unplug your home from the electric grid, and find some wood to boil your water with if you want. But good luck trying to live or prosper off the grid.
Jillian York, the longtime director for international freedom of expression at the Electronic Frontier Foundation, has been wrestling with these issues since the rise of social media, and in her valuable new book Silicon Values: The Future of Free Speech Under Surveillance Capitalism she shares what she has learned from the other side of the story. That is, hers is a well-informed view of how Big Tech platforms like Facebook, Twitter and YouTube came to dominate public life, but from the vantage point of many of the world’s most vulnerable people: human rights activists and pro-democracy organizers living under dictators, and nonconforming outsiders like sex workers.
Four things stood out for me from York’s book. First, that in the early days of the social web, many of the key decision-makers at then-nascent platforms (like Twitter counsel Alex Macgillivray, Google’s Nicole Wong and Facebook’s Dave Willner) were seriously trying to ensure that everyone, not just the already powerful in society, was able to use them to gain voice and attention. Second, that the platforms’ rapid growth, combined with the hubris of their founders and their desire to keep costs low, meant that they have always been playing catch-up, cleaning up failures rather than preventing them. (Well, what else do we expect when a company founder tells his minions to “move fast and break things”?)
Third, that none of the platforms has ever been free of politics and the biases that flow from serving the demands of powerful actors. York makes this point again and again, with examples from all over the world. The way these companies handle issues like terrorism and extremism is deeply shaped by the needs of US foreign policy in particular, with the result that, for example, decisions about Palestinian speech on Facebook are routed through Israeli authorities, while posts by Israelis calling for the murder of Palestinians have been allowed to stay on the site despite complaints. (York notes that one of the first 20 members of Facebook’s Oversight Board is Emi Palmor, who, as director general of Israel’s Ministry of Justice, petitioned Facebook to censor Palestinian human rights defenders and journalists.)
Fourth, and perhaps most unexpected, is how York’s own views have shifted from preferring that Big Tech platforms not moderate speech at all, or only with very few exceptions, to now believing that they must, with the help of democratically representative processes, find ways to tamp down abuses of free speech that harm people. That means concerted efforts to make it much harder for online mobs to form and to reduce the reach of speech inciting violence against others.
York concludes with hard questions that do not resolve themselves easily, like whether the harm of misinformation warrants its censorship, or where the right balance lies between sexual freedom and the protection of children. And the only answer she gives is a call for greater transparency in how the companies moderate speech now, drawing on the Santa Clara Principles, which have been endorsed by more than 100 civil society organizations. “Companies should provide users with information about how the data is fed to recommendation algorithms, obtain meaningful consent for how the data is used, and give users more options about what they see in their feeds. They should work to immediately include civil society in policymaking, transparently, and should conduct a full audit to evaluate the compatibility of existing policies with human rights standards,” she writes.
We are just beginning to see lawmakers take on this challenge. In California, for example, Assemblyman Jesse Gabriel has introduced the Social Media Transparency and Accountability Act (AB 587), which would require large social media companies to disclose much more fully how they moderate content related to online hate, disinformation, extremism, harassment and foreign interference. The bill includes tough provisions requiring the platforms to file semi-annual reports not only describing their policies and any changes they make to them, but also disclosing key metrics, like what kinds of content are being “actioned” and how that content is being flagged (human review or AI), along with the training materials used by human content moderators and how the company responds to user reports of terms-of-service violations. Believe it or not, no Silicon Valley platform currently provides anything close to this level of disclosure. (Speaking of which, my smarter younger brother Dave Sifry has helped advise on the development of this legislation as part of his work with the ADL’s Center on Technology and Society.)
In the meantime, one man, Mark Zuckerberg, has a power to shape whole societies unlike any ever before possessed, and while he seems to dimly understand that this ain’t right, we shouldn’t expect that anything he does, or anything connected to him, will resolve the problem that Facebook presents to society. As I’ve written before, Zuckerberg has built the world’s biggest social-network power plant without any government authority limiting its ability to pollute. It has put Chernobyl-style reactors everywhere it can, because it can. Using our legislative and regulatory power, we have to force Facebook to stop doing the things that produce toxic social effects, like selling targeted advertising or enabling instantaneous connections. If we don’t insist on these changes, it won’t really matter whether Trump gets his accounts back or not—the same conditions that made Trump such a successful platform strongman will keep producing others.
-Related: The entire life-cycle of local news is illustrated in an in-depth story by NBC News’ Brandy Zadrozny on Beaver County, PA. As the steel economy hollowed out and the grandchildren of the founder of the once-vital Beaver County Times sold the paper to outsiders who gutted it, the county has become a “news desert,” one of twenty in the state with only one newspaper or none at all. Into the gap have come local Facebook groups, run by overwhelmed volunteer admins and rife with hysterical and racist rumors that local government and police try vainly to counter, as well as a new investigative news site, BeaverCountian.com, started by a successful techie who returned to his hometown. Syracuse communications professor Jennifer Grygiel comments, "In a system with inadequate legitimate local news, they may only be able to get information by posting gossip and having the police correct it. One could argue this is what society will look like if we keep going down this road with less journalism and more police and government social media."
For more background on how Facebook is warping local news ecosystems, see my conversation with Professor Kjerstin Thorson of Michigan State University, which was featured in the January 26 issue of The Connector.
Odds and Ends
-The FBI is identifying many of the rioters who entered the US Capitol three months ago using a wide array of technological tools, from facial recognition matches to driver’s licenses or social media posts, license-plate readers, cell-tower location searches and “a remarkably deep catalogue of video from surveillance systems, live streams, news reports and cameras worn by the police … that day,” Drew Harwell and Craig Timberg report for The Washington Post. “Whenever you see this technology used on someone you don’t like, remember it’s also being used on a social movement you support,” said Evan Greer, director of the digital rights advocacy group Fight for the Future. “Once in a while, this technology gets used on really bad people doing really bad stuff. But the rest of the time it’s being used on all of us, in ways that are profoundly chilling for freedom of expression.”
-It's time to regulate AI tools that claim to interpret human emotions, Kate Crawford opines in Nature. Emotion-recognition software is part of products for monitoring workers and schoolchildren, she notes, despite a lack of scientific rigor and reports that the tools judge job applicants and students unfairly. (I can’t wait to read Crawford’s new book, Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence.)
-Public interest technologist extraordinaire Ashkan Soltani, whom the ACLU hired last summer to perform a privacy audit on its data-sharing practices, is blowing the whistle on the organization, saying that “the ACLU’s updated privacy statements do not reflect the full picture of their practices.” As he puts it, “it’s important that nonprofits are bound by the same privacy standards they espouse for everyone else.”
-Still paying attention to non-fungible tokens? Read Anil Dash, a veteran techie who made a very early NFT back in 2014, in The Atlantic on why this is a gold rush filled with grifters and spammers.
-The good folks at Consumer Reports worked with trained volunteers at 120 locations across the US to test tap water for arsenic, lead and long-lasting chemicals called PFAS, and the news is disturbing. Go here to read the story and here to look up locations near you.