Mark Zuckerberg must be stopped
How to turn Facebook and other social media platforms from bugs into features
In 2003, as a sophomore at Harvard, Mark Zuckerberg created a website called Facemash that displayed pictures of Harvard students and let other students rank their physical attractiveness—with the rankings then displayed on the website.
Zuckerberg has told us many times that he is committed to making the world a better place. Maybe in 2003 he thought that what the world most needed was for people judged unattractive to feel worse about their looks.
Zuckerberg’s next mission was to start a different kind of website for the Harvard community—a site that would be called TheFacebook.com. But there was a problem: Some other Harvard students had been working on a website called ConnectU that would compete with the site Zuckerberg intended to launch. And, awkwardly, they had enlisted him to build their site.
Zuckerberg opted not to tell them about his plans to build a rival site. (It’s not clear that he even had such plans until they told him about their plans.) Instead, he worked secretly and feverishly on his project while assuring them he would finish theirs. In a text message to a friend, he described his relationship to them this way: “They made a mistake haha. They asked me to make it [the ConnectU website] for them. So I'm like delaying it so it won't be ready until after the facebook thing comes out.”
Which is what happened.
But Zuckerberg wasn’t finished. When squashing the competition, it’s important to err on the side of oversquash. According to Business Insider, after the ConnectU folks got someone else to finish their website, Zuckerberg hacked it and sowed chaos by deactivating some accounts and changing the privacy settings on others to render them invisible.
And after the ConnectU folks lobbied the Harvard Crimson to write a story about their claim that Zuckerberg had stolen their idea, Zuckerberg used private Facebook login data to hack email accounts so he could see what the two Crimson journalists working on the story were saying about it. He once texted a Harvard friend about how much power Facebook gave him: “if you ever need info about anyone at harvard just ask. i have over 4000 emails, pictures, addresses… people just submitted it. i don’t know why. they ‘trust me.’ dumb fucks.”
I could go on, but you get the idea: If we could design a moral gyroscope to guide the man who runs the most powerful conglomeration of social networks in the world (Facebook and Instagram, not to mention WhatsApp), it probably wouldn’t be an exact replica of Zuckerberg’s moral gyroscope.
But why bring this up now, given how long we’ve known about Zuckerberg’s checkered past?
Because (1) the political polarization of America has reached truly ominous proportions; (2) last week a new study reinforced concerns that social media are making the problem worse; and (3) this study hints at things Zuckerberg could do to make Facebook a more constructive force. So I think it’s a good time to be reminded that, if we want Zuckerberg to prioritize helping the world over becoming even richer and more powerful, we’ll have to put pressure on him. All the more so in light of the fact that today a federal judge dismissed antitrust lawsuits brought against Facebook by the Federal Trade Commission and more than 40 states.
The new study, authored by the psychologists Steve Rathje, Jay Van Bavel, and Sander van der Linden, was published in the Proceedings of the National Academy of Sciences. Its title—“Out-group animosity drives engagement on social media”—pretty much tells the story, and it’s a story that may sound familiar; I’ve noted more than once in this newsletter that people who want to increase their intra-tribal stature are incentivized by social media to intensify inter-tribal hostility. Still, the details of the study are interesting and important.
The researchers looked at social media posts, on both Facebook and Twitter, that came from two kinds of partisan sources: (1) media identified as liberal or conservative; and (2) members of Congress who are Democrats or Republicans. The study found that mentioning the out-group (conservatives in the case of liberals/Democrats, liberals in the case of conservatives/Republicans) hugely increased the chances that a post would be shared. In fact, this increased those chances way more than such well-documented share boosters as negative language and “moral-emotional” language.
The researchers also looked at reactions to these posts—which was more illuminating in the case of Facebook than Twitter, since Facebook offers an array of emoticons to use in registering reactions, such as the red-forehead frown face that signifies anger. The basic finding:
Out-group language strongly predicted “angry” reactions (as well as “haha” reactions, comments, and shares), and in-group language strongly predicted love reactions… Thus, posts about the out-group may be so successful because they appeal to emotions such as anger, outrage, and mockery. Indeed, the “angry” reaction was the most popular reaction on Facebook in seven of the eight datasets analyzed.
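To make the paper’s method concrete, here is a minimal sketch in Python of the kind of analysis it describes: count dictionary-defined out-group and in-group terms in each post, then regress share counts on those counts. The toy posts, the tiny term lists, and the simple Poisson model are all my stand-ins, not the paper’s actual data or specification; the authors worked with vastly larger real datasets and more careful statistical models.

```python
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for a corpus of partisan posts (hypothetical data).
posts = pd.DataFrame({
    "text": [
        "Democrats are destroying the country",
        "Proud of our community and our values",
        "Liberals will not stop until they win",
        "Our team will beat the liberals again",
        "Great town hall tonight",
    ],
    "shares": [120, 15, 98, 60, 9],
})

# Tiny illustrative dictionaries; the study used much longer term lists.
OUTGROUP_TERMS = {"democrats", "liberals", "republicans", "conservatives"}
INGROUP_TERMS = {"our", "we", "us"}

def count_terms(text: str, terms: set) -> int:
    """Count how many words in the post appear in a term dictionary."""
    return sum(w.strip(".,!?").lower() in terms for w in text.split())

posts["outgroup"] = posts["text"].apply(count_terms, terms=OUTGROUP_TERMS)
posts["ingroup"] = posts["text"].apply(count_terms, terms=INGROUP_TERMS)

# Regress share counts on the language features (Poisson here for brevity;
# a count model like negative binomial would be the more careful choice).
X = sm.add_constant(posts[["outgroup", "ingroup"]])
model = sm.GLM(posts["shares"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```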
One thing I like about this study is the way it underscores Zuckerberg’s untapped capacity to do good. After all, these researchers were using tools that are at his disposal. Like them, he can correlate the language in Facebook posts with the emotional reactions they get. In fact, he no doubt has AI tools that would let him do a much subtler job of parsing the language. If he wanted to revise Facebook’s algorithm in a way that sharply reduced the number of out-group references that stir up anger, he could do that.
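To give a flavor of what such a revision could mean in practice, here is a deliberately simplified Python sketch of a ranking rule that discounts a post’s predicted engagement when out-group mentions and predicted anger co-occur. Everything in it (the Post fields, the penalty formula, the weight) is hypothetical; Facebook’s actual ranking system is proprietary and far more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # model's estimate of clicks/shares/comments
    predicted_anger: float       # estimated probability of "angry" reactions (0-1)
    outgroup_mentions: int       # count of out-group references in the text

def ranking_score(post: Post, anger_penalty: float = 0.6) -> float:
    """Engagement score, discounted when anger and out-group talk co-occur.

    anger_penalty=0 reproduces pure engagement ranking; higher values push
    anger-bait about the out-group further down the feed.
    """
    # Scale the discount by how out-group-heavy the post is (capped at 3 mentions).
    outgroup_factor = min(post.outgroup_mentions, 3) / 3
    discount = 1.0 - anger_penalty * post.predicted_anger * outgroup_factor
    return post.predicted_engagement * max(discount, 0.0)

# Example: an anger-bait post loses its ranking edge over a calmer one.
bait = Post(predicted_engagement=100.0, predicted_anger=0.9, outgroup_mentions=3)
calm = Post(predicted_engagement=70.0, predicted_anger=0.1, outgroup_mentions=0)
print(ranking_score(bait), ranking_score(calm))  # 46.0 vs 70.0
```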
I’m not saying this would be easy or without risk. The algorithmic adjustment might, for example, subdue the spread of negative but accurate and important messages about an out-group. Or the adjustment—even if formulated generically, without intended bias toward a party or ideology—might wind up disproportionately impeding the messaging of one party or ideology. Which would be, if nothing else, a big public relations problem for Facebook.
I could go on. Big and difficult questions arise the moment you start fiddling with an algorithm as consequential as Facebook’s. But remember: Facebook is already fiddling with this algorithm. Algorithms aren’t laws of nature that social media companies disinterestedly implement; they’re human inventions, designed to make the company successful and periodically modified in light of that goal. So Facebook is already answering the big and difficult questions that arise once you start fiddling with algorithms. And it tends to answer them this way: Do the thing most likely to maximize “engagement,” which happens to be the thing that maximizes the wealth and power of Mark Zuckerberg.
Have I been unfair to Zuckerberg? Can I really size up his character based on a few creepy things he did in college? Didn’t I only last week, in this very newsletter, write about attribution error, which often leads us to see people’s bad behavior as a reflection of their character when it may have actually been a product of circumstance, and not especially predictive of future behavior? Yes, I did. And besides, people change.
But in a way this is a moot point. As a rule, corporations—even when run by non-creepy people—pursue profit, period, unless forced by public pressure and/or government action to sacrifice profit to some other goal. So the question isn’t whether Zuckerberg is way worse than the average wildly successful entrepreneur. The question is whether he’s way better—whether he’d be willing to voluntarily and proactively sacrifice profit for the public good even in the absence of intense pressure to do that. His history at Harvard suggests that the answer is no, and a year ago the Wall Street Journal provided more recent evidence to that effect.
The Journal ran a deeply reported piece about Facebook’s exploration, in 2017 and 2018, of ways to fight political polarization. Amid intense public pressure, it created a task force called “Common Ground” and examined various ideas for making Facebook less divisive. But, reports the Journal, “in the end, Facebook’s interest was fleeting.” Zuckerberg “signaled he was losing interest in the effort to recalibrate the platform in the name of social good.” The Common Ground task force was disbanded.
One passage in the Journal piece, describing a September 2018 meeting about reorganizing Facebook’s newsfeed team, is of special relevance to the findings of the PNAS study:
Managers told employees the company’s priorities were shifting “away from societal good to individual value,” said people present for the discussion. If users wanted to routinely view or post hostile content about groups they didn’t like, Facebook wouldn’t suppress it if the content didn’t specifically violate the company’s rules.
One of the antitrust suits against Facebook that was dismissed today will likely be filed again soon, in revised form, and it could conceivably lead to the division of Facebook, Instagram, and WhatsApp into three separate companies. That would be fine with me, but I don’t think it’s enough. After all, for all three companies, profit will still be the bottom line, which means that, all other things being equal, “engagement” will be prioritized even if that means helping to tear America and/or the world apart.
And so too with other social media platforms. Though Zuckerberg may be a more ethically dubious character than the people running Twitter and YouTube, one thing they all have in common is that making money is their job.
In light of all this, I’d submit for consideration a policy remedy that is in a way more radical than breaking up companies:
1) Compel big social media platforms like Facebook, Twitter, and YouTube to make their algorithms fully transparent;
2) Compel them to offer an API—an application programming interface—so that third parties could develop control panels that social media users could subscribe to if they wanted. Such a panel might let me choose to see, say, 30 percent more tweets about a given subject, or 40 percent fewer Facebook posts that make me angry, or to filter along various other dimensions that, right now, I can’t even imagine. (A rough sketch of how such a panel might work in code follows.)
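As a thought experiment, here is a minimal Python sketch of what a third-party control panel built on such an API might look like. The FeedItem fields, the preference dials, and the re-ranking logic are all hypothetical, since no platform exposes an interface like this today; the point is just to show how user-chosen settings (“30 percent more of this, 40 percent fewer of that”) could translate into code.

```python
from dataclasses import dataclass, field

@dataclass
class FeedItem:
    topic: str          # topic label, assumed to be supplied by the platform API
    anger_score: float  # platform-supplied estimate that the item provokes anger (0-1)
    base_score: float   # the platform's default ranking score

@dataclass
class ControlPanel:
    topic_multipliers: dict = field(default_factory=dict)  # {"science": 1.3} -> 30% boost
    anger_multiplier: float = 1.0                          # 0.6 -> roughly 40% less anger-bait

    def rerank(self, items: list) -> list:
        def adjusted(item: FeedItem) -> float:
            score = item.base_score
            score *= self.topic_multipliers.get(item.topic, 1.0)
            # Blend each item's score toward the user's chosen anger tolerance.
            score *= 1.0 - (1.0 - self.anger_multiplier) * item.anger_score
            return score
        return sorted(items, key=adjusted, reverse=True)

# A user who wants 30 percent more science and far less anger-bait:
panel = ControlPanel(topic_multipliers={"science": 1.3}, anger_multiplier=0.6)
feed = [
    FeedItem("politics", anger_score=0.9, base_score=1.0),
    FeedItem("science", anger_score=0.1, base_score=0.8),
]
for item in panel.rerank(feed):
    print(item.topic)  # science now outranks the anger-bait politics item
```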
Would Americans, allowed to choose among competing control panels, make more constructive choices than social media companies are now making on their behalf? Only one way to find out. And we might find out more than that: Researchers, like those who wrote the PNAS paper, could study the different control panels to see which seemed to have the best effects on the mental health of users and the overall health of society. Which in turn might inform consumer choice.
And, leaving aside this question of how wisely Americans would use the freedom imparted by this policy, I’d say the freedom itself is a strong argument for the policy. Right now we have no real escape from an oligopoly of algorithms that we’re not even allowed to see. I’d object to that on grounds of principle even if Facebook weren’t run by Mark Zuckerberg.
The momentum of unfettered capitalism is to make every last cent of profit. Titans of industry like winning as measured by profits and power, and corporations are set up and run to increase both. But most industries sooner or later run out of ways to maximize profits, so that last, say, ten percent of profit potential remains elusive. When the tobacco companies figured out back in the '60s how to manipulate the formula of their cigarettes to increase their addictiveness, though, the processed food makers weren't far behind. Social media has followed that formula well: their algorithms spike our outrage to the same insidious effect. That extra few percent of profit from keeping us "engaged" works out to hundreds of billions of dollars.
Social good never stood a chance.
Nice post :) Regarding the API/control panel that gives users more control... People can already opt out of social media and/or control who they follow. I just don't see those who stick around and continue to consume all that is bad making use of such filters/controls. They clearly want the tribal/divisive content. At this point, it seems we need K-12 to include education on social media, tribalism, etc. Society as a whole needs to change its attitudes. Attacking the social media companies is fine, but IMO it's just a Band-Aid for what really ails society.