By Amy Barrett

Published: Tuesday, 12 April 2022 at 12:00 am


In recent memory, social media seems to have done more to drive us apart than bring us together. Long gone are the days of using Facebook to find new friends at university or checking in on Twitter to keep an eye on the news. Instead, social networks are fast becoming a maddening cacophony where users compete over who can provide the hottest, most extreme take, and the prize, for better or worse, is visibility.

But what if we could start again? What if we could build a network that nurtured social media’s best qualities and cut out the bad?

At the Polarization Lab in North Carolina, US, a multidisciplinary team of researchers – social scientists, statisticians and computer scientists – is breaking apart the social media status quo to rebuild it one peer-reviewed brick at a time.

Together they’ve created real social media sites from scratch, in the lab, with real human users, to find out what happens when you play with the rules. Prof Chris Bail, founding director of the lab at Duke University, explains what happened next.

What’s wrong with social media as we know it now?

We’ve just accepted how social media is now is how it’s always going to be. But the status quo doesn’t make a lot of sense. Facebook started as a site that allowed college students to rate each other’s physical attractiveness. Instagram was essentially a way to organise alcohol-based gatherings, and was originally called Burbn. TikTok and YouTube were founded to share funny videos. So the question that I think more people should be asking is, why should we accept these platforms that were designed for kind of sophomoric purposes as the status quo, as the inevitable?

Meanwhile, the world is collapsing around us in many ways. Incivility, hatred and outrage have never been higher. There’s a variety of evidence that suggests social media is probably contributing to all those things. It’s certainly not the only contributor, but there’s growing consensus that it’s a major player.

[But before we make changes] we need to understand how platforms shape human behaviour. That’s what prompted us to say, OK, we need a social media platform for scientific research.

Is your social media site based on any platform in particular, or is it completely new?

We’re building our platform for two purposes. One is to simulate existing platforms, like Twitter and Facebook. When you’re exploring an intervention that could increase or decrease positive behaviour, it’s dangerous to test it in the wild in case it makes things worse. So, we need a testing ground – in the world of computer science, we call it a sandbox. It’s where we start to learn how to play.

But the thing that we’re much more excited about is that our site could be used to explore the space of possibilities in social media more systematically.

What possibilities are there?

There are many other models that we could explore. A lot of tech leaders say the point of social media is to connect people, to connect the world. That’s Mark Zuckerberg’s stated mission for Facebook.

On the one hand, that’s admirable. You can massively connect the world in largely positive ways – people in Ukraine can fundraise internationally.

But we don’t know what connecting to that many people does to the human brain. The British anthropologist Robin Dunbar famously discovered that we struggle to maintain meaningful relationships with more than 150 people.

Promoting connection ad infinitum might create shallow, meaningless connections instead of the deeper connections that give the kind of social cohesion that sustains civil society.

Can you give me an example of how your social media site has been used?

So there’s an interesting debate going on among people who study social media about how anonymity might shape our behaviour. People tend to say things on social media that they would never say in real life, especially when they are anonymous, because there are no consequences. Your readers may have had an experience on social media with an anonymous account that was upsetting, maybe even scary.

But there’s another side of anonymity that’s less well understood, and that is that it provides people with the opportunity to explore ideas outside of peer pressure.

Imagine that I am a Republican in the United States and I see all this evidence that voter fraud didn’t happen, or maybe I’m sceptical of former President Trump’s claims that voter fraud happened.

If I go on to Twitter and announce my view to my Republican followers, I might get attacked by ‘my’ people, so I might not do it. But if I’m anonymous, I might throw out the idea.

In other words, anonymity gives us the ability to explore unpopular ideas, and allows us to focus more on ideas instead of the identities of the people who are voicing them.

We wanted to know if that could prevent some of the tribalist tendencies that we see on social media.

A lot of social media companies are grappling with this right now. Should we make everybody disclose every detail of their identity, or should they maybe be allowed some degree of anonymity? But we as researchers can’t walk into Facebook and say, hey, could we please make 1,200 of your users anonymous for two weeks? Not only is it logistically impossible, it would upset users. It probably couldn’t be done with high scientific validity. And it would create a huge PR nightmare for Facebook.

But on our platform, we connected people to talk about politics – either immigration or gun control – with a member of the other party in an anonymous context.

Half of our research team thought it would be bad and would lead to hateful statements and abusive rhetoric. And there were several conversations on our platform that got so toxic that we had to shut them down.

But the vast majority of conversations were extraordinarily productive. And people actually exhibited less polarisation when they chatted with someone from the other party anonymously.

This is not the be-all and end-all study. The implication is not that Facebook should become anonymous tomorrow.

But it raises the question, should platforms create a space for anonymous conversation under carefully controlled settings? Maybe. So that’s an example of the type of research we can do.

And it could be used by academics around the world?

The idea is to make a platform that any researcher could alter and then put it on the App Store to do any kind of research. At the Polarization Lab, we’re focused on politics, but there are so many other really important issues out there.

I would be elated if our effort got picked up by, say, researchers in public health who are trying to study the impact of social media on mental health, or the impact of social media on vaccine uptake.

Social media’s algorithms are often blamed for the polarisation online.

There’s evidence that the algorithms used by social media sites are not up to the task.

Most social media platforms are explicitly designed to spread information as far as possible. So, if you are a software engineer and you’re trying to figure out how to spread a message, what you’re going to do is look for characteristics of messages that spread really far. Then you train your algorithm to identify and boost messages with those characteristics.
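To make that logic concrete, here is a minimal sketch, in Python, of the kind of engagement-optimised ranker Bail describes. The features, weights and scoring rule are invented purely for illustration; they are not any real platform’s code.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        outrage_score: float   # how emotionally charged the language is
        novelty: float         # how surprising or extreme the claim is

    def predicted_spread(post: Post) -> float:
        # A spread-maximising model learns which characteristics travel far.
        # If outrage and novelty predict sharing, the ranker ends up boosting them.
        return 2.0 * post.outrage_score + 1.5 * post.novelty

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Show the posts predicted to spread furthest first.
        return sorted(posts, key=predicted_spread, reverse=True)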

People ask, is the algorithm good or not? Instead, we should be asking: what would a good algorithm look like?

There are a number of ideas that social science could offer about how to design algorithms that would promote better behaviour. One that I’m particularly fond of is an algorithm that, instead of boosting divisive content, boosts unifying content.

Imagine you’ve got a bunch of Labour voters and a bunch of Conservative voters. Facebook’s algorithm boosts the Tories when they say something that appeals to the Tories, right? But there is a lot of content out there that both Conservatives and Labour like. So, why not boost that content? In that way social media could actually optimise for creating consensus instead of creating division.
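And here is an equally hypothetical sketch of the alternative he proposes: a ‘bridging’ ranker that boosts posts both groups approve of, rather than posts that excite only one side. The example data and the use of the minimum as a score are assumptions for illustration only.

    def bridging_score(approval_by_group: dict[str, float]) -> float:
        # approval_by_group maps a group label (e.g. 'labour', 'conservative')
        # to the fraction of that group's viewers who liked the post.
        # Taking the minimum rewards posts both groups like; a post loved by
        # one side and ignored by the other scores low.
        return min(approval_by_group.values())

    posts = {
        'partisan attack': {'labour': 0.90, 'conservative': 0.05},
        'local food bank appeal': {'labour': 0.60, 'conservative': 0.55},
    }

    ranked = sorted(posts, key=lambda name: bridging_score(posts[name]), reverse=True)
    print(ranked)  # ['local food bank appeal', 'partisan attack']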

It could go further than politics. You could do this across racial and ethnic groups, across genders. All of a sudden social media could become this experience of what we all agree on, or all find interesting, important or useful. Instead of this, excuse my language, dumpster fire of outrage and sensationalism that it’s become.

But sometimes really important conversations come out of that fire, conversations that wouldn’t happen any other way, precisely because they might not unite people.

Yeah, absolutely. There are many good examples of this.

The Black Lives Matter movement created the largest ever protest in the United States. So there’s really good reason to think that there’s a power there.

The question I would ask, taking the long view for a moment, is what has been the impact of these social media campaigns?

When you have massive campaigns that involve many, many people, if it’s true that people struggle to maintain meaningful connections with large groups, then it follows that most of these large movements are going to die out, or lack the kind of sustained influence that we might like.

If you look at American public opinion of Black Lives Matter, it went from extremely positive to somewhat neutral, and now to slightly negative.

It seems like this kind of research should have been done when social media platforms first started becoming popular…

For a long time, social scientists like me struggled to get a lot of data. Compare us to physicists who have massive particle colliders, or biologists who can look at the entire human genome. We were usually studying a couple dozen people. And that fundamentally limits what kind of questions you can ask.

In some ways, the advent of social media, the mass digitisation of human language and the various digital traces that human beings leave behind meant we were finally able to do really exciting analysis of large groups of people. The great sociologist Duncan Watts said social science had finally found its telescope.

People were calling it the Golden Age of social science. And in some ways it was. Many of us were fortunate enough to get data from places like Facebook and do some foundational research.

The trouble started about four years ago when academic research became deeply embedded in controversies at Facebook and other platforms. Most notable was the Cambridge Analytica case, where a massive amount of data about people was used, largely without their consent, to serve political ends.

The idea that scientific research could give nefarious actors access to potentially really powerful information led tech companies to stop sharing their data [with academics].

So, we don’t know much about the world’s top social media platforms.

There are foundational questions in the nascent field we call computational social science – is a platform video-based or text-based? Is it anonymous or not? – and these significant differences across platforms might be shaping human behaviour in different ways.

This is basically what prompted us to step back and say, well, we have two choices. One is we can wait patiently outside the social media companies and hope for the opportunity to do some research on their platforms. Or, we come up with our own.

About our expert

Prof Chris Bail is a professor of Sociology and Public Policy at Duke University, where he directs the Polarization Lab. He studies political tribalism, extremism and social psychology using data from social media and tools from the emerging field of computational social science.
