Social media companies building 'a society that is addicted, outraged, polarized,' critic tells senators
Top executives from social media giants were questioned Tuesday by U.S. senators about how they choose to promote content on their platforms — and were confronted by one of their industry’s chief critics.
Sen. Chris Coons, D-Del., held a hearing with representatives from Facebook, YouTube and Twitter that focused on the companies’ business models and how those drive their decision-making, rather than on their attempts to moderate or remove content.
Coons, who chairs the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law, was joined in this emphasis by Sen. Ben Sasse, R-Neb., the ranking Republican on the panel.
Sasse tried to get the representatives from the social media companies to engage substantively with critiques from Tristan Harris, a former Google engineer who in 2015 founded what would become the Center for Humane Technology.
Harris was the star of “The Social Dilemma,” a major documentary about social media companies released last year, and on Tuesday he leveled many of the same arguments he voiced in that film against the tech behemoths in person.
“Their business model is to create a society that is addicted, outraged, polarized, performative and disinformed,” Harris said of social media companies. “And while they can try to skim the major harm off the top and do what they can, and we want to celebrate that ... it’s just fundamentally that they’re trapped in something that they can't change.”
Harris talked about the ways Facebook, YouTube, Twitter and TikTok — the one company that did not have a representative at the hearing — make more money the longer people stay on their platforms. Researchers have documented that these companies promote whatever content keeps users on their sites, in what Harris called a “values-blind process.”
That can lead to millions of Americans being influenced by untrue and even harmful content, in large part because the companies’ own systems promote that disinformation to them.
But the main problem is that no one besides the companies knows for sure how the algorithms that drive their recommendations work.
Harris also argued that the way these companies function poses a national security threat.
“If Russia or China tried to fly a plane into the United States they’d be shot down by our Department of Defense. But if they try to fly an information bomb into the United States, they’re met by a white-gloved algorithm from one of these companies, which says, ‘Exactly which ZIP code would you like to target?’” Harris said.
He was joined in the hearing by another tech skeptic, Joan Donovan, the research director at Harvard’s Shorenstein Center on Media, Politics, and Public Policy.
The tech officials who testified were Monika Bickert, Facebook’s vice president for content policy; Alexandra Veitch, a government affairs executive for YouTube; and Lauren Culbertson, Twitter’s U.S. public policy chief.
Sasse’s attempts to produce a meaningful debate between Harris and the three social media executives were largely unsuccessful. Bickert emphasized that Facebook wants to cultivate a healthy long-term relationship with its users and that promoting bad information doesn’t serve that goal. Veitch gave a version of the same response. “Misinformation is not in our interest,” the YouTube executive said.
Sasse also dismissed talk of repealing Section 230 of the Communications Decency Act of 1996, which has been a hobbyhorse for some lawmakers and the subject of targeted regulation proposals by others. The provision essentially shields social media companies from legal responsibility for what users post on their platforms. Harris, too, seemed skeptical that repealing Section 230 was the best route forward.
Harris, however, warned that social media companies are behaving in ways that are dangerous for American democracy. “If we are not a coordinated society, if we cannot recognize each other as Americans, we are toast,” he said. “If we don’t have a truth that we can agree on, we cannot do anything on our existential threats.”
Harris also said the question facing the world is whether America and other democratic societies can figure out how to transition into the digital age in a way that preserves free speech while also reducing the harms of disinformation.
Coons, for his part, said he shared Harris’s view that “the business model of social media requires [them] to accelerate” the time users spend on their platforms.
He pushed the tech executives to open up.
“I think greater transparency about ... how your algorithms actually work and about how you make decisions about your algorithms is critical. Are you considering the release of more details about this?” Coons asked.
Only Culbertson, the Twitter executive, responded. “We totally agree that we should be more transparent,” she said, and mentioned that Twitter is working on what she called a “blue sky initiative,” which she said could “potentially create more controls for the people who use our services.”
Coons said he would like to use his next hearing to discuss what kinds of steps are necessary. Those could include government regulation requiring more algorithmic transparency from the tech companies.
Some advocates and experts think forcing social media companies to be transparent about how their algorithms work is a key first step. Many of these same experts believe, as author Francis Fukuyama recently wrote, that deplatforming — the act of removing troublesome users from social media — is “not a sustainable path for any modern liberal democracy.” Donald Trump, for example, was banned from Twitter and Facebook while he was still the sitting president, highlighting concerns that social media companies are becoming more powerful than duly elected public officials, even if many felt such a suspension was appropriate at the time.
But some lawmakers don’t think algorithmic transparency is enough. In their view, external pressure is needed to force the big tech companies to take action to protect more vulnerable users from the harms of their profit-driven algorithms.