afishdrowning:

ao3commentoftheday:

You know that tumblr post that’s like “Adults: why don’t kids go outside?” and then there’s a picture of a very pedestrian-unfriendly street from probably somewhere in the US?

I’ve been thinking a lot about how community seems to be lacking in fandom recently, and over on Dreamwidth people have been making some excellent points. I think modern social media is another place where adults have created a space that is hostile to young people trying to navigate their way online.

I get a lot of asks and see a lot of posts from young people lamenting the fact that they don’t know how to make friends online. Because this isn’t a problem that I experience, I’ve had a tendency to think along the lines of “kids these days” etc. But that’s the easy out. Most of the people I’d consider “online friends” of mine are people I met online several years ago or they’re people I met IRL and we just don’t live near each other so our friendship happens online. I can’t honestly say I’d feel confident in trying to make new friends on modern social media today.

If you start thinking about it more critically, it makes total sense that it’s harder to make friends now. Modern social media has been optimized for “engagement.” The goal of twitter or tumblr or instagram or tiktok isn’t to help users find each other and talk to each other. The goal of those platforms is to keep people on those platforms. The more people they have on their platform, the more money they make. The more time people spend on their platform, the more money they make.

How do you make people spend more time on the platform? You make it as passive and entertaining as possible. Scrolling through tiktok is like channel surfing on a TV in the 90s or early 2000s. Scrolling through twitter or tumblr or facebook is just putting interesting or pretty or funny or angering things in front of your eyeballs until you get bored and switch to the next app, cycling your way through them.

Timestamps are hard to find. Content isn’t chronological. Posts are dropped in on your feed from unknown sources, decided by an algorithm. I wouldn’t be surprised if they did research into how casinos keep people inside and gambling when they made a lot of these decisions.

Each one of these decisions, all on its own, undermines our ability to find and form a community. Each one makes it harder to make friends or have a conversation. It’s hard to get to know someone or have a discussion with them when you have no idea whether what you’re saying will be seen by 1 person or 1 million. I’m probably not the only person on this site who feels like I’m either an observer or a performer, but rarely a participant.

The internet used to be a vibrant, weird, wonderful place where communities could pop up and grow. Now, our best shot at community is getting invited to a Discord server and hoping it still exists 2 weeks from now.

Web 1.0 wasn’t perfect in a lot of ways, but I think it was a lot better for community than what we’ve got now.

I just saw an academic paper and Twitter thread 🧵 about this! They focus more on decision-making than friend-making, but the issues preventing effective X-making are the same: the size of “communities” and information directed by algorithm.


From Carl T. Bergstrom (@CT_Bergstrom)

1. We have a new paper out in PNAS today, in which we address the harm wrought by dramatically restructuring human communication over the span of a decade, with no aim other than selling ads.

It might be the most important paper of my career.

2. This thread describes the paper and the backstory leading to it. I’ll be posting over the course of the day as I can find the time.

Three years ago, @uwcip postdoc @jbakcoleman organized a summer meeting at Princeton. I attended, and it changed the direction of my research.

3. Think about how you receive information today, compared to fifteen years or twenty years ago.

Social media, internet search, click-based advertising: innovations in information technology and new mechanisms for monetizing information have rewired human communication.

4. The problem is that this enormous transformation has taken place not as a stewarded effort to improve information quality or to further human well-being, but more or less simply for the purpose of keeping people engaged online in order to sell ads.

5. We don’t have a theory for how human decision-making operates in an algorithm-driven online network comprising billions of souls—and we argue there is no reason to expect some sort of invisible hand is going to bail us out and ensure that good information floats to the top.

6. It is difficult to overestimate the stakes. If these technologies so effectively sow mis- and disinformation worldwide, how can we hope to solve problems such as global warming, extinction, war, food security, pandemics? How can we prevent democracies from crumbling?

7. Aside: The paper has been in the works for over two years. A 2019 draft said something along the lines of “Imagine that a pandemic hit and people wouldn’t follow public health advice because of misinformation spread online. We’d be *really* screwed then.”

8. So what is so radically different now compared to twenty years ago, and why does it matter? In the paper, we explore a few factors.

First, scale. We’ve gone from small face-to-face communities to a global network of 3.6 billion social media users in the blink of an eye.

9. We write “Expanding the scale of a collectively behaving system by eight orders of magnitude is certain to have functional consequences. Not only are societies at the scale of ours rare in the natural world; they also are often ecologically unstable where they do form.”

10. Research in opinion dynamics, animal behavior, network epistemology, and statistical physics reveals that the ability to come to collective decisions depends strongly on the size of the group. Bigger does not mean wiser or better-functioning.

11. Second, network structure. Face-to-face networks limit the scope of influence; we can only talk to so many people in a day. On social media it’s different. IRL I couldn’t tell 200,000 people about this paper in an afternoon. More than that have clicked on this thread already.

12. Moreover, preferential attachment processes (often abetted by algorithms, more on that later) accelerate the inequality of influence online.

Network structure differs as well, with more “long ties” online that accelerate the spread of information and disinformation alike.
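The “preferential attachment” mechanism in point 12 is easy to see in a toy simulation. The sketch below is my own illustration, not code from the paper: each new account follows one existing account chosen with probability proportional to the followers it already has, and every account gets a baseline of one follower so it can be discovered at all (that baseline is an assumption of the sketch).

```python
import random

def preferential_attachment(n_users, seed=0):
    """Each new account follows one existing account, chosen with
    probability proportional to that account's current follower
    count ("rich get richer"). Every account starts with a baseline
    of one follower so it can be discovered at all."""
    rng = random.Random(seed)
    followers = [1, 1]        # two seed accounts
    pool = [0, 1]             # one entry per follower -> weighted sampling
    for new_user in range(2, n_users):
        target = rng.choice(pool)
        followers[target] += 1
        pool.append(target)       # target is now more likely to be picked again
        followers.append(1)       # the new account's baseline follower
        pool.append(new_user)
    return followers

counts = sorted(preferential_attachment(10_000), reverse=True)
top_share = sum(counts[:100]) / sum(counts)   # follower share of the top 1%
print(f"top 1% of accounts hold {top_share:.0%} of all follows")
```

Even in this tiny model, a handful of early accounts end up holding a wildly disproportionate share of the follows — the “inequality of influence” the thread describes.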

13. Third, the ease and fidelity afforded by digital communication. Online, messages can be forwarded, re-forwarded, and re-re-forwarded time and again without a loss of resolution, without breaking down into gibberish like in a kids’ game of telephone.

14. All this takes place at essentially zero cost and at staggering speed. Someone pens a piece of disinformation after lunch. It gets amplified a few times, and reaches e.g. the leader of the free world later that afternoon. He retweets it to 90 million people, who amplify further.

15. Suddenly a deception is cascading through every corner of the online world. It’s even hard to triangulate to figure out if it’s true, because by this point it’s coming at you from all sides and seemingly from a plenitude of sources.

The very same day it was crafted.

16. Think about the downside of frictionless communication.

Why isn’t your postal mailbox buried in 50 pounds of junk mail every day? Because stamps cost money.

Why didn’t you get long-distance phone scammers in the 1990s? 25 cents a minute back then, that’s why.

17. I want to correct a misperception I’m seeing. Our message is not “ads are evil.”

It’s that social media etc. has been designed largely to sell ads, which means it is not designed with care to facilitate the spread of reliable information, let alone improve human well-being.

18. The fourth and final development we discuss is the role of algorithms and algorithmic feedback.

The posts you see on social media—including this thread, I fear—are fed to you by algorithms designed to maximize your engagement, not the veracity of the content you consume.

19. Perhaps even worse, who you *know* on social media is in large part a function of whom the algorithms wanted you to know.

You may like your online friends because they’re cool or whatever, but you *met* many of them because some algorithm thought that would keep you online.

20. And these algorithms know so much about us. The amount of data is staggering. There are hundreds of simultaneous A/B experiments running at any given time to figure out what makes us click, collectively….

21. …and to build up a detailed enough profile to predict what makes you click, specifically.

Sure, the results are sometimes risible. But sometimes they’re not. Machine learning is a powerful beast when competently implemented and fed with an endless supply of data.

22. Algorithms create filter bubbles at times. They recommend that I connect with people who think like I do, and even if I seek out divergent viewpoints, other algorithms may learn that I don’t engage with them and start to down-rank them in my feed.

23. Other times, algorithms may learn that agreement is boring and conflict is engaging. In the struggle for attention, righteous indignation outcompetes thoughtful discussion. So that’s what we get. In this way they regulate the emotional tenor of the online worlds we inhabit.
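The feedback loop in points 22–23 can be sketched as a toy simulation. This is my own illustration — the two-topic setup, the scores, and the learning rate are all assumptions, not anything from the paper: a ranker that fills a feed in proportion to each topic’s past click-through rate will starve out whatever the user doesn’t engage with.

```python
import random

def simulate_feed(rounds=50, feed_size=10, lr=0.5, seed=1):
    """Toy engagement-driven ranker with two topics. The simulated
    user clicks everything tagged 'agree' and nothing tagged
    'diverge'. Each topic's score is an exponential moving average
    of its click-through rate, and feed slots are filled in
    proportion to score."""
    rng = random.Random(seed)
    score = {"agree": 1.0, "diverge": 1.0}   # optimistic start for both topics
    share_diverge = []
    for _ in range(rounds):
        p_agree = score["agree"] / (score["agree"] + score["diverge"])
        feed = ["agree" if rng.random() < p_agree else "diverge"
                for _ in range(feed_size)]
        share_diverge.append(feed.count("diverge") / feed_size)
        for item in feed:
            clicked = 1.0 if item == "agree" else 0.0
            # update the topic's score toward its observed engagement
            score[item] = (1 - lr) * score[item] + lr * clicked
    return share_diverge

shares = simulate_feed()
print(f"diverge share of feed: round 1 = {shares[0]:.1f}, "
      f"round 50 = {shares[-1]:.1f}")
```

The divergent topic starts at roughly half the feed and collapses toward zero within a few rounds — nobody decided to hide it; the ranker simply learned that it doesn’t get clicks.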

24. Now add in the fact that these algorithms are opaque, sometimes even to their creators, and mercurial. We don’t know what they’re doing, or what effects they are having, because the only people who have the data to measure it consider that information too valuable to share.

25. “In sum, we are offloading our evolved information-foraging processes onto algorithms. But these algorithms are typically designed to maximize profitability, with often insufficient incentive to promote an informed, just, healthy, and sustainable society.”

To be continued…

26. I’ll finish up this thread tomorrow. In the meantime, check out lead author @jbakcoleman’s feed and those of my coauthors.

@WolframBarfuss @icouzin @JonathanDonges @andy_gersick @jenniferjacquet @albert_kao @RachelEMoran @kaiatombak @jayvanbavel @elkeweber @PRomanczuk

end 🧵 2021/06/22

This reply by one of the coauthors, Jay Van Bavel, is spot on:


E.O. Wilson said it best:

“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

(via theladyem)