HORIZONS

How to build the future of social media

At the Polarization Lab in North Carolina, multidisciplinary researchers – including social scientists, statisticians and computer scientists – are breaking apart the social media status quo to rebuild it, one peer-reviewed brick at a time. Social scientist Prof Chris Bail tells us more…

WHAT’S WRONG WITH SOCIAL MEDIA PLATFORMS AS WE KNOW THEM?

We’ve just accepted that how social media is now is how it’s always going to be. But Facebook started as a site for college students to rate each other’s physical attractiveness. Instagram was essentially a way to organise alcohol-based gatherings, and was originally called Burbn. Why should we accept these platforms that were designed for sophomoric purposes as the status quo, as the inevitable?

Meanwhile, incivility, hatred and outrage have never been higher. There’s evidence that suggests social media is contributing to all those things. It’s certainly not the only contributor, but there’s growing consensus that it’s a major player.

[But before we make changes] we need to understand how platforms shape human behaviour. That’s what prompted us to say, “Okay, we need a social media platform for scientific research.”

IS YOUR SOCIAL MEDIA SITE BASED ON ANY PLATFORM IN PARTICULAR, OR IS IT COMPLETELY NEW?

We’re building our platform for two purposes. One is to simulate existing sites, like Twitter and Facebook. When you’re exploring interventions that could backfire and make behaviour worse, it’s dangerous to do it in the wild. So, we need a testing ground – in the world of computer science, we call it a sandbox.

It’s where we start to learn how to play.

But the thing that we’re much more excited about is using our site to explore the possibilities for social media more systematically.

WHAT POSSIBILITIES ARE THERE?

There are many other models that we could explore. Tech leaders say the point of social media is to connect people. That’s Mark Zuckerberg’s stated mission for Facebook.

On the one hand, that’s admirable. You can connect the world in largely positive ways – people in Ukraine can fundraise internationally, for example.

But we don’t know what connecting to that many people does to the human brain. The British anthropologist Prof Robin Dunbar famously argued that we struggle to maintain meaningful relationships with more than about 150 people. Promoting connection ad infinitum might create shallow, meaningless connections instead of the deeper ones that provide the social cohesion that sustains civil society.

Facebook offers opportunities to connect, but we don’t know if it could be harmful to link up with so many people

CAN YOU GIVE ME AN EXAMPLE OF HOW YOUR SOCIAL MEDIA SITE HAS BEEN USED?

There’s an interesting debate going on among people who study social media about how anonymity might shape our behaviour. People tend to say things on social media that they would never say in real life, especially when they are anonymous, because there are no consequences.

But there’s another side to anonymity that’s less well understood. Imagine that I’m a Republican in the United States and I see all this evidence that voter fraud didn’t happen, or I’m sceptical of former President Trump’s claims that it did. If I go on to Twitter and announce my view to my Republican followers, I might get attacked by people on my own side. But if I’m anonymous, I might float the idea.

In other words, anonymity gives us the ability to explore unpopular ideas, and allows us to focus on ideas instead of the identities of the people who are voicing them. We wanted to know if anonymity could prevent some of the tribalist tendencies that we see on social media.

But we as researchers can’t walk into Facebook and say, “Hey, could we please make 1,200 of your users anonymous for two weeks?”

Not only is it logistically impossible, it would upset users, and it probably couldn’t be done with high scientific validity. But on our platform, we connected people with a member of the other party to talk anonymously about politics – either immigration or gun control.

Half of our research team thought it would go badly and lead to hateful statements and abusive rhetoric. While there were several conversations on our platform that got so toxic we had to shut them down, the vast majority were extraordinarily productive. People actually exhibited less polarisation when they chatted with someone from the other party anonymously.

This is not the be-all and end-all study. The implication is not that Facebook should become anonymous tomorrow. But it raises the question, should platforms create a space for anonymous conversation under carefully controlled settings? Maybe.

AND IT COULD BE USED BY ACADEMICS AROUND THE WORLD?

The idea is to make a platform that anyone could alter to suit their research. At the Polarization Lab, we’re focused on politics, but there are so many other really important issues out there. Researchers in public health could use it to study the effects of social media on mental health, or the impact on vaccine uptake.

Social media’s algorithms are often blamed for the polarisation online. Most social media platforms are explicitly designed to spread information as far as possible. So if you are a software engineer, what you’re going to do is look for characteristics of messages that spread, and train your algorithm to identify and boost content with those characteristics.

People ask if the algorithm is good or not. Instead, we should ask what a good algorithm would look like. Social science could offer a number of designs for algorithms that would promote better behaviour. What if, instead of boosting divisive content, an algorithm boosts unifying content? Instead of boosting what one party says when it appeals to its supporters, why not boost content that both sides of the political debate like? In that way, social media could create consensus instead of creating division.
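To make that contrast concrete, here is a minimal, hypothetical sketch in Python of the two ranking philosophies described above: an engagement-style score that boosts whatever spreads, and a “bridging” score that boosts what both political camps approve of. The post data, field names and the simple min-based formula are illustrative assumptions for this article, not the Polarization Lab’s (or any platform’s) actual algorithm.

# Hypothetical sketch: two ways to rank the same posts.
# All field names and numbers are invented for illustration.

posts = [
    {"id": "a", "left_likes": 900, "right_likes": 10},   # one-sided and viral
    {"id": "b", "left_likes": 40,  "right_likes": 700},  # one-sided the other way
    {"id": "c", "left_likes": 300, "right_likes": 280},  # liked across the divide
]

def engagement_score(post):
    # Boost whatever spreads: total approvals, regardless of who gives them.
    return post["left_likes"] + post["right_likes"]

def bridging_score(post):
    # Boost what both sides like: the score is capped by the less enthusiastic
    # group, so one-sided outrage can't dominate the feed.
    return min(post["left_likes"], post["right_likes"])

print([p["id"] for p in sorted(posts, key=engagement_score, reverse=True)])  # ['a', 'b', 'c']
print([p["id"] for p in sorted(posts, key=bridging_score, reverse=True)])    # ['c', 'b', 'a']

The point of the sketch is the shift in what gets rewarded: the engagement ranking puts the two one-sided, viral posts first, while the bridging ranking surfaces the post with cross-partisan approval.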

It could go further than politics. You could do this across racial and ethnic groups, across genders. All of a sudden, social media could become this experience of what we all agree on, or all find interesting, important or useful. Instead of this, excuse my language, dumpster fire of outrage and sensationalism that it’s become.

IT SEEMS LIKE THIS KIND OF RESEARCH SHOULD HAVE BEEN DONE WHEN SOCIAL MEDIA PLATFORMS FIRST STARTED BECOMING POPULAR…

For a long time, social scientists like me struggled to get a lot of data. Compare us to physicists who have massive particle colliders, or biologists who can look at the entire human genome. We were usually studying a couple of dozen people. And that fundamentally limits what kind of questions you can ask.

The advent of social media, the mass digitisation of human language and the various digital traces that human beings leave behind meant we were finally able to do really exciting analyses of large groups of people.

People were calling it the Golden Age of social science. And in some ways it was. Many of us were fortunate enough to get data from places like Facebook and do some foundational research.

The trouble started about four years ago when academic research became deeply embedded in controversies at Facebook and other platforms. Most notable was the Cambridge Analytica case, where a massive amount of data about people was used, largely without their consent, to serve political ends.

The idea that scientific research could give nefarious actors access to potentially powerful information led tech companies to stop sharing their data [with academics].

PROF CHRIS BAIL

Chris is the director of the Polarization Lab at Duke University and a professor of sociology and public policy.