The New Media Goliaths
By Renée DiResta | NOEMA Magazine | June 1, 2023
https://www.noemamag.com/the-new-media-goliaths

One of the more remarkable artifacts of late-stage social media is the indelible presence of a particular character: the persecution profiteer. They are nearly unavoidable on Twitter: massive accounts with hundreds of thousands to millions of followers, beloved by the recommendation engine and often heavily monetized across multiple platforms, where they rail against the corporate media, Big Tech and elites. Sometimes, the elites have supposedly silenced them; sometimes, they’ve supposedly oppressed you — perhaps both. But either way, manipulation is supposedly everywhere, and they are supposedly getting to the bottom of it. 

Many of these polemicists rely on a thinly veiled subtext: They are scrappy truth-tellers, citizen-journalist Davids, exposing the propaganda machine of the Goliaths. That subtext may have been true in last century’s media landscape, when independent media fought for audience scraps left by hardy media behemoths with unassailable gatekeeping power. But that all changed with the collapse of mass media’s revenue model and the rise of a new elite: the media-of-one. 

The transition was enabled by tech but realized by entrepreneurs. Platforms like Substack, Patreon and OnlyFans offered infrastructure and monetization services to a galaxy of independent creators — writers, podcasters and artists — while taking a cut of their revenue. Many of these creators adopted the mantle of media through self-declaration and branding, redefining the term and the industry. Many were very talented. More importantly, however, they understood that creating content for a niche — connecting with a very specific online audience segment — offered a path to attention, revenue and clout. In the context of political content in particular, the media-of-one creators offered their readers an editorial page, staffed with one voice and absent the rest of the newspaper. 

The rise of a profitable niche media ecosystem with a reach commensurate with mass media has been a boon for creators and consumers alike. YouTube, Instagram and TikTok have enabled sponsorships and ad-revenue sharing for quite some time — spawning a generation of influencers — but patronage opened additional paths to success. A tech blogger can start a podcast about Web3 with no infrastructural outlay, reaching their audience in a new medium. A Substack newsletter devoted to political history can amass thousands of subscribers, charge $5 a month, and deliver a salary of up to seven figures for its author. Pop culture pundits can earn a living producing content on Patreon, and web-cam adult performers can do the same on OnlyFans. Even Twitter has launched subscriptions.

Whatever the kink — from nudes to recipes to conspiracy theories — consumers can find their niche, sponsor it and share its output. This ecosystem has given rise to people with millions of followers, who shape the culture and determine what the public talks about each day.  

Well, their public, anyway. 

The Rise Of Niche Propaganda

Like the media, the public has increasingly fragmented. The internet enabled the flourishing of a plethora of online subcultures and communities: an archipelago of bespoke and targetable realities. Some of the most visible are defined by their declining trust in mass media and institutions. Recognizing the opportunity, a proliferation of media-of-one outlets have spun up to serve them.

In fact, the intersection of a burgeoning niche media ecosystem and a factionalized public has transformed precisely the type of content that so concerns the persecution profiteers: propaganda. Propaganda is information with an agenda, delivered to susceptible audiences to serve the objectives of the creator. Anyone so inclined can set up an account and target an audience, producing spin to fit a preferred ideological agenda. Those who achieve a degree of success are often increasingly cozy with politicians and billionaire elites who hold the levers of power and help advance shared agendas. In fact, the niche propagandists increasingly have an advantage over the Goliaths they rail against. They innately understand the modern communication ecosystem on which narratives travel and know how to leverage highly participatory, activist social media fandoms to distribute their messages; institutions and legacy media typically do not. 

Although the mechanics of who can spread propaganda, and how, have shifted significantly over the last two decades, public perception of the phenomenon has not. People discussing concerns about propaganda on social media frequently reference the idea of a powerful cabal composed of government, media and institutional authorities, manipulating the public into acquiescing to an elite-driven agenda. This misperception comes in large part from popular understanding of a theory presented by Noam Chomsky and Edward Herman in their 1988 book, “Manufacturing Consent: The Political Economy of the Mass Media.” 

“Manufacturing Consent” proposed a rather insidious process by which even a free press, such as that of the United States, filters the information that reaches the public by way of self-censorship and selective framing. Even without the overt state control of media present in authoritarian regimes, Chomsky and Herman argued, American media elites are influenced by access, power and money as they decide what is newsworthy — and thus, determine what reaches the public. Chomsky and Herman identified five factors, “five filters” — ownership, advertising, sourcing, catching flak, and fear — that comprised a system of incentives that shaped media output. 

Media “ownership” (the first filter) required expensive licenses and distribution technology, and so the ecosystem was owned by a small cadre of the wealthy who often had other financial and political interests that colored coverage. Second, advertising meant that media was funded by ad dollars, which incentivized it to attract mainstream audiences that advertisers wanted and to avoid topics — say, critiques of the pharmaceutical industry — that might alienate them. Third, “sourcing” — picking experts to feature — let media elevate some perspectives while gatekeeping others. Fourth, fear of catching “flak” motivated outlets to avoid diverging from approved narratives, lest they spark lawsuits or boycotts. And finally, “fear” highlighted the media’s capacity to cast people in the role of “worthy” or “unworthy” victims based on ideology. 

Throughout the 20th century, Chomsky and Herman argued, these incentives converged to create a hegemonic media that presented a filtered picture of reality. Media’s self-interest directly conflicted with the public interest — a problem for a democratic society that relied on the media to become informed. 

But legacy media is now only half the story, and the Goliaths are no longer so neatly distinguished. Technology reduced costs and eliminated license requirements, while platform users themselves became distributors via the Like and Share buttons. Personalized ad targeting enabled inclined individuals to amass large yet niche audiences who shared their viewpoints. The new elites, many of whom have millions of followers, are equally capable of “manufacturing consent,” masquerading as virtuous truth-tellers even as they, too, filter their output in accordance with their incentives.

However, something significant has changed: Rather than persuading a mass audience to align with a nationally oriented hegemonic point of view — Chomsky’s concern in the 1980s — the niche propagandists activate and shape the perception of niche audiences. The propaganda of today entrenches fragmented publics in divergent factional realities, with increasingly little bridging the gaps. 

“Positioning of niche media as a de facto wholesome antithesis to the ‘mainstream propaganda machine’ — Davids fighting Goliaths — is a marketing ploy.”

From Five Filters To Four Fire Emojis

As technology evolved and media and the public splintered, the five filters mutated. A different system of incentives drives the niche media Goliaths — we might call it the “four fire emoji” model of propaganda, in homage to Substack’s description of criteria it used to identify writers most likely to find success on its platform. 🔥🔥🔥🔥

In its early days of operation, Substack, which takes 10% of each subscription, reached out to media personalities and writers from traditional outlets, offering them an advance to start independent newsletters. To assess who might be a good investment, the company ranked writers from one to four fire emojis, depending on their social media engagement. Someone with a large, highly engaged following was more likely to parlay that attention into success on Substack. There is no algorithmic curation or ads; each post by the author of a newsletter is sent to the inbox of all subscribers. Substack describes its platform as a “new economic engine for culture,” arguing that authors might be less motivated to replicate the polarization of social media if they are paid directly for their work.

But the four fire emoji rubric inadvertently lays bare the existential drive of niche media: the need to capture attention above all else, as technology has driven the barrier to entry toward zero and the market is flooded with strivers. Getting attention on social media often involves seizing attention, through sensationalism and moral outrage. Niche media must convert that attention into patronage. A passionate and loyal fandom is critical to success because the audience facilitates virality, which delivers further attention, which can be parlayed into clout and money.

There is little incentive to appeal to everyone. In a world where attention is scarce, the political media-of-one entrepreneurs, in particular, are incentivized to filter what they cover and to present their thoughts in a way that galvanizes the support of those who will boost them — humans and algorithms alike. They are incentivized to divide the world into worthy and unworthy victims. 

In other words, they are incentivized to become propagandists. And many have. 

“It seems likely that at least some of the audience believes that they have escaped propaganda and exited the Matrix, without realizing that they are simply marinating in a different flavor.”

Consider a remarkable viral story from January 2023. Right-wing commentator Steven Crowder published a video accusing a major conservative news outlet (later revealed to be The Daily Wire) of offering him a repressive contract — a “slave contract,” as he put it, that would penalize him if the content he produced was deemed ineligible to monetize by major platforms like YouTube. “I believe that many of those in charge in the right-leaning media are actually at odds with what’s best for you,” he told his nearly 6 million YouTube subscribers. Audiences following along on Twitter assigned the scandal a hashtag: #BigCon. 

Underlying the drama was classic subtext: Crowder, the David, pitted against conservative media Goliaths. And yet, the contract Crowder derided as indentured servitude would have paid him $50 million.

Sustaining attention in a highly competitive market practically requires that niche propaganda be hyper-adversarial, as often as possible. The rhetorical style is easily recognizable: They are lying to you, while I have your best interests at heart. 

As it turns out, perpetual aggrievement at elites and the corporate profiteering media can be quite lucrative. On Substack, pseudoscience peddler Joseph Mercola touts his “Censored Library” to tens of thousands of paid subscribers at $5/month, revealing “must-read information” that the medical establishment purportedly hides from the public. Several prominent vaccine skeptics — who regularly post about how censored they are — are also high on the Substack leaderboard and in the tens-of-thousands-of-paid-subscribers club.

Matt Taibbi, a longtime journalist who’s also a lead Substack writer, devotes many posts to exposing imaginary cabals for an audience that grew significantly after billionaire Elon Musk gave him access to company emails and other internal documents. His successful newsletter solicited additional contributors: “Freelancers Wanted: Help Knock Out the Mainstream Propaganda Machine.” The patrons of particular bespoke realities reward the writers with page views and subscriber dollars; prominent members of political parties cite the work or move it through the broader partisan media ecosystem.

“The manufacture of consent is thriving within each niche.”

It is an objectively good thing that the five-filter model is increasingly obsolete. Reducing the barriers to ownership, in particular, enabled millions of voices to enter the market and speak to the public. But the positioning of niche media as a de facto wholesome antithesis to the “mainstream propaganda machine” — Davids fighting Goliaths — is a marketing ploy. The four fire emoji model simply incentivizes a more factional, niche propaganda. 

Since the model relies on patronage, rather than advertising, the new propagandists are incentivized to tell their audiences what they want to hear. They are incentivized to increase the fracturing of the public and perpetuate the crisis of trust, in order to ensure that their niche audience continues to pay them, rather than one of their nearest neighbors (or, God forbid, a mainstream outlet). Subscribers don’t have unlimited funds; they will pick a handful of creators to support, and the rest will struggle. 

As attention and trust have fragmented, “sourcing” has also reoriented to ensure that writers feature people who are approved within the bespoke reality they target; for example, there are several different universes of COVID experts at this point. “Flak” is now a veritable gift: Rather than being afraid of it, the patronage propagandists are incentivized to court it. Attacks from ideological outsiders are a boon: “Subscribe to help us fight back!” So much of the media-of-one content is defined by what it is in opposition to — otherwise, it loses the interest of its audience. Partisan outlets have long played the fear game, as Chomsky pointed out in the 1980s, encouraging hatred of the other side — but now, the “unworthy victim” is your neighbor, who may have only moderately different political leanings.

The Effect: Lost Consensus, Endless Hostility

The devolution of propaganda into niches has deep and troubling implications for democratic society and social cohesion. It was Walter Lippmann, a journalist and early scholar of propaganda, who coined the phrase “the manufacture of consent” in 1922, using it to describe a process by which leaders and experts worked alongside media to inform the public about topics they did not have the time or capacity to understand. The premise was paternalistic at best.

However, Lippmann also had reservations about the extent to which “the public” existed; the idea of an omnicompetent, informed citizenry powering functional democracy was an illusion, he believed, and the “public” a phantom. People, Lippmann wrote, “live in the same world, but they think and feel in different ones.” Propaganda was manipulative, even damaging and sinister, Lippmann thought, but he also believed that the manufacture of consent was to some extent necessary for democratic governance, in order to bridge divides that might otherwise render democracy dysfunctional. 

Lippmann’s intellectual rival on the topics of propaganda, the public and democracy was the eminent philosopher John Dewey. Unlike Lippmann, Dewey believed “the public” did exist. It was complicated, it was chaotic — but it was no phantom. Dewey also rightly bristled at the idea of a chosen few wielding propaganda to shape public opinion; he saw it as an affront to true democracy. Instead, Dewey saw the press — when operating at its best — as a tool for informing and connecting the public, enabling people to construct a shared reality together.       

Though at odds in many respects, both Lippmann and Dewey acknowledged the challenges of a fractured public. The two men saw a dissonant public as both a natural state and as a barrier to a functioning, safe and prosperous society. Though they differed greatly in their proposed approaches, they agreed on the need to create harmony from that dissonance.     

One hundred years later, both approaches seem like an impossibility. It is unclear what entities, or media, can bridge a fragmented, polarized, distrustful public. The incentives are driving niche media in the opposite direction.

“Perhaps by highlighting the new incentives that shape the media-of-one ecosystem, we may reduce the public’s susceptibility to the propaganda it produces.”

The propagandists of today are not incentivized to create the overarching hegemonic national narrative that Chomsky and Herman feared. Rather, their incentives drive them to reinforce their faction’s beliefs, often at the expense of others. Delegitimization of outside voices is a core component of their messaging: The “mainstream” media is in cahoots with the government and Big Tech to silence the people, while the media-of-one are independent free-thinkers, a disadvantaged political subclass finally given access to a megaphone … though in many cases, they have larger audiences and far larger incomes. It seems likely that at least some of the audience believes that they have escaped propaganda and exited the Matrix, without realizing that they are simply marinating in a different flavor.

We should not glorify the era of a consolidated handful of media properties translating respectable institutional thinking for the masses — consolidated narrative control enables lies and deception. But rather than entering an age of “global public squares” full of deliberative discourse and constructive conversation, we now have gladiatorial arenas in which participants in niche realities do battle. Our increasingly prominent medias-of-one can’t risk losing the attention game in the weeds of nuance. We have a proliferation of irreconcilable understandings of the world and no way of bridging them. The internet didn’t eliminate the human predilection for authority figures or informed interpretations of facts and narratives — it just democratized the ability to position oneself in the role. The manufacture of consent is thriving within each niche. 

“Manufacturing Consent” ended with an optimistic take: that what was then a burgeoning cable media ecosystem would lead to more channels with varying perspectives, a recognition that truly independent and non-corporate media does exist and that it would find ways to be heard. But Chomsky and Herman also cautioned that if the public wants a news media that serves its interests rather than the interests of the powerful, it must go find it. Propaganda systems are demonstrably effective precisely because breaking free of such a filtered lens requires work. Perhaps by articulating to today’s public how the system has shifted and highlighting the new incentives that shape the media-of-one ecosystem, we may reduce the public’s susceptibility to the propaganda it produces.

The illustration above was first published in FORESIGHT Climate & Energy’s Efficiency issue.

How Online Mobs Act Like Flocks Of Birds
By Renée DiResta | NOEMA Magazine | November 3, 2022
https://www.noemamag.com/how-online-mobs-act-like-flocks-of-birds

Credits

Renée DiResta is the technical research manager at the Stanford Internet Observatory.

You’ve probably seen it: a flock of starlings pulsing in the evening sky, swirling this way and that, feinting right, veering left. The flock gets denser, then sparser; it moves faster, then slower; it flies in a beautiful, chaotic concert, as if guided by a secret rhythm.

Biology has a word for this undulating dance: “murmuration.” In a murmuration, each bird sees, on average, the seven birds nearest it and adjusts its own behavior in response. If its nearest neighbors move left, the bird usually moves left. If they move right, the bird usually moves right. The bird does not know the flock’s ultimate destination and can make no radical change to the whole. But each of these birds’ small alterations, when occurring in rapid sequence, shift the course of the whole, creating mesmerizing patterns. We cannot quite understand it, but we are awed by it. It is a logic that emerges from — is an embodiment of — the network. The behavior is determined by the structure of the network, which shapes the behavior of the network, which shapes the structure, and so on. The stimulus — or information — passes from one organism to the next through this chain of connections.

While much is still mysterious and debated about the workings of murmurations, computational biologists and computer scientists who study them describe what is happening as “the rapid transmission of local behavioral response to neighbors.” Each animal is a node in a system of influence, with the capacity to affect the behavior of its neighbors. Scientists call this process, in which groups of disparate organisms move as a cohesive unit, collective behavior. The behavior is derived from the relationship of individual entities to each other, yet only by widening the aperture beyond individuals do we see the entirety of the dynamic.
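The local rule described above is simple enough to simulate. The sketch below is purely illustrative, not a model from the murmuration literature; every parameter, including the fixed positions and the blending rate, is an assumption made for the example. Each of 50 birds repeatedly steers its heading toward the mean heading of its seven nearest neighbors, and alignment emerges with no leader and no global signal.

```python
import math
import random

def simulate_flock(n_birds=50, n_neighbors=7, steps=100, blend=0.2, seed=1):
    """Toy murmuration: each bird repeatedly steers toward the mean
    heading of its seven nearest neighbors. Positions stay fixed
    (a simplification); only headings evolve."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n_birds)]
    heading = [rng.uniform(-math.pi, math.pi) for _ in range(n_birds)]

    def spread(angles):
        # Circular spread of headings: 0.0 means perfectly aligned.
        x = sum(math.cos(a) for a in angles) / len(angles)
        y = sum(math.sin(a) for a in angles) / len(angles)
        return 1.0 - math.hypot(x, y)

    initial = spread(heading)
    for _ in range(steps):
        updated = []
        for i, (xi, yi) in enumerate(pos):
            # The seven nearest birds, by straight-line distance.
            nearest = sorted(
                (j for j in range(n_birds) if j != i),
                key=lambda j: (pos[j][0] - xi) ** 2 + (pos[j][1] - yi) ** 2,
            )[:n_neighbors]
            # Neighbors' mean heading, via their mean unit vector.
            mx = sum(math.cos(heading[j]) for j in nearest)
            my = sum(math.sin(heading[j]) for j in nearest)
            target = math.atan2(my, mx)
            # Turn part of the way toward it, wrapped to [-pi, pi].
            diff = math.atan2(math.sin(target - heading[i]),
                              math.cos(target - heading[i]))
            updated.append(heading[i] + blend * diff)
        heading = updated
    return initial, spread(heading)

before, after = simulate_flock()
```

With the seeded defaults, the headings end more aligned than they began, even though no bird ever sees more than seven others: local response, global pattern.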

Online Murmurations

A growing body of research suggests that human behavior on social media — coordinated activism, information cascades, harassment mobs — bears striking similarity to this kind of so-called “emergent behavior” in nature: occasions when organisms like birds or fish or ants act as a cohesive unit, without hierarchical direction from a designated leader. How that local response is transmitted — how one bird follows another, how I retweet you and you retweet me — is also determined by the structure of the network. For birds, signals along the network are passed from eyes or ears to brains pre-wired at birth with the accumulated wisdom of the millennia. For humans, signals are passed from screen to screen, news feed to news feed, along an artificial superstructure designed by humans but increasingly mediated by at-times-unpredictable algorithms. It is curation algorithms, for example, that choose what content or users appear in your feed; the algorithm determines the seven birds, and you react.

Our social media flocks first formed in the mid-2000s, as the internet provided a new topology of human connection. At first, we ported our real, geographically constrained social graphs to nascent online social networks. Dunbar’s Number held — we had maybe 150 friends, probably fewer, and we saw and commented on their posts. However, it quickly became a point of pride to have thousands of friends, then thousands of followers (a term that conveys directional influence in its very tone). The friend or follower count was prominently displayed on a user’s profile, and a high number became a heuristic for assessing popularity or importance. “Friend” became a verb; we friended not only our friends, but our acquaintances, their friends, their friends’ acquaintances.

“The behavior is determined by the structure of the network, which shapes the behavior of the network, which shapes the structure, and so on.”

The virtual world was unconstrained by the limits of physical space or human cognition, but it was anchored to commercial incentives. Once people had exhaustively connected with their real-world friend networks, the platforms were financially incentivized to help them find whole new flocks in order to maximize the time they spent engaged on site. Time on site meant a user was available to be served more ads; activity on site enabled the gathering of more data, the better to infer a user’s preferences in order to serve them just the right content — and the right ads. People You May Know recommendation algorithms nudged us into particular social structures, doing what MIT network researcher Sinan Aral calls the “closing of triangles”: suggesting that two people with a mutual friend in common should be connected themselves.
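The “closing of triangles” reduces to a simple heuristic: rank strangers by how many mutual friends they share with a user. The toy sketch below is illustrative only; the graph, the names and the scoring are invented for the example, and production People You May Know systems weigh far more signals than mutual-friend counts.

```python
from collections import Counter

def suggest_friends(graph, user, k=3):
    """Closing triangles: rank non-friends of `user` by how many
    mutual friends they share. `graph` maps each user to a set of
    friends. A toy heuristic, not any platform's actual algorithm."""
    candidates = Counter()
    for friend in graph[user]:
        for fof in graph[friend]:                  # friend-of-friend
            if fof != user and fof not in graph[user]:
                candidates[fof] += 1               # one more mutual friend
    return [name for name, _ in candidates.most_common(k)]

# A small invented network for illustration.
network = {
    "ana": {"ben", "cal"},
    "ben": {"ana", "cal", "dee"},
    "cal": {"ana", "ben", "dee"},
    "dee": {"ben", "cal", "eve"},
    "eve": {"dee"},
}
```

Here `suggest_friends(network, "ana")` returns `["dee"]`: ana and dee already share two mutual friends, so the recommender proposes closing those triangles.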

Eventually, even this friend-of-friending was tapped out, and the platforms began to create friendships for us out of whole cloth, based on a combination of avowed, and then inferred, interests. They created and aggressively promoted Groups, algorithmically recommending that users join particular online communities based on a perception of statistical similarity to other users already active within them.

This practice, called collaborative filtering, combined with the increasing algorithmic curation of our ranked feeds to usher in a new era. Similarity to other users became a key determinant in positioning each of us within networks that ultimately determined what we saw and who we spoke to. These foundational nudges, borne of commercial incentives, had significant unintended consequences at the margins that increasingly appear to contribute to perennial social upheaval.

One notable example in the United States is the rise of the QAnon movement over the past few years. In 2015, recommendation engines had already begun to connect people interested in just about any conspiracy theory — anti-vaccine interests, chemtrails, flat earth — to each other, creating a sort of inadvertent conspiracy correlation matrix that cross-pollinated members of distinct alternate universes. A new conspiracy theory, Pizzagate, emerged during the 2016 presidential campaign, as online sleuths combed through a GRU hack of the Clinton campaign’s emails and decided that a Satanic pedophile cabal was holding children in the basement of a DC pizza parlor.

At the time, I was doing research into the anti-vaccine movement and received several algorithmic recommendations to join Pizzagate groups. Subsequently, as QAnon replaced Pizzagate, the highly active “Q research” groups were, in turn, recommended to believers in the prior pantheon of conspiracy theories. QAnon became an omni-conspiracy, an amoeba that welcomed believers and “researchers” of other movements and aggregated their esoteric concerns into a Grand Unified Theory. 

After the nudges to assemble into flocks come the nudges to engage — “bait,” as the Extremely Online call it. Twitter’s Trending Topics, for example, will show a nascent “trend” to someone inclined to be interested, sometimes even if the purported trend is, at the time, more of a trickle — fewer than, say, 2,000 tweets. But that act, pushing something into the user’s field of view, has consequences: the Trending Topics feature not only surfaces trends, it shapes them. The provocation goes out to a small subset of people inclined to participate. The user who receives the nudge clicks in, perhaps posts their own take — increasing the post count, signaling to the algorithm that the bait was taken and raising the topic’s profile for their followers. Their post is now curated into their friends’ feeds; they are one of the seven birds their followers see. Recurring frenzies take shape among particular flocks, driving the participants mad with rage even as very few people outside of the community have any idea that anything has happened. Marx is trending for you, #ReopenSchools for me, #transwomenaremen for the Libs Of TikTok set. The provocation is delivered, a few more birds react to what’s suddenly in their field of view, and the flock follows, day in and day out.

“Trying to litigate rumors and fact-check conspiracy theories is a game of whack-a-mole that itself has negative political consequences.”

Eventually, perhaps, an armed man decides to “liberate” a DC pizza parlor, or a violent mob storms a nation’s capitol. Although mainstream tech platforms now act to disrupt the groups most inclined to harassment and violence — as they did by taking down QAnon groups and shutting down tens of thousands of accounts after the January 6th insurrection — the networks they nudged into existence have by this point solidified into online friendships and comradeships spanning several years. The birds scatter when moderation is applied, but quickly re-congregate elsewhere, as flocks do.

Powerful economic incentives determined the current state of affairs. And yet, the individual user is not wholly passive — we have agency and can decide not to take the bait. We often deploy the phrase “it went viral” to describe our online murmurations. It’s a deceptive phrase that eliminates the how and thus absolves the participants of all responsibility. A rumor does not simply spread — it spreads because we spread it, even if the system is designed to facilitate capturing attention and to encourage that spread.

Old Phenomenon, New Consequences

We tend to think of what we see cascading across the network — the substance, the specific claims — as the problem. Much of it is old phenomena manifesting in new ways: rumors, harassment mobs, disinformation, propaganda. But it carries new consequences, in large part because of the size and speed of networks across which it moves. In the 1910s, a rumor might have stayed confined to a village or town. In the 1960s, it might have percolated across television programs, if it could get past powerful gatekeepers. Now, in the 2020s, it moves through a murmuration of millions, trends on Twitter and is picked up by 24/7 mass media. 

“We shape our tools, and thereafter they shape us,” argued Father John Culkin, a contemporary and friend of media theorist Marshall McLuhan. Theorists like Culkin and McLuhan — working in the 1960s, when television had seemingly upended the societal order — operated on the premise that a given technological system engendered norms. The system, the infrastructure itself, shaped society, which shaped behavior, which shaped society. The programming — the substance, the content — was somewhat secondary. 

This thinking progressed, spanning disciplines, with a sharpening focus on curation’s role in an information system then composed of print, radio and the newest entrant, television. In a 1971 talk, Herbert Simon, a professor of computer science and organizational psychology, attempted to reckon with the consequence of the information glut that broadcast media created: attention scarcity. His paper is perhaps most famous for this passage:

In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

Most of the cost of information is not incurred by the producers, Simon argues, but the recipients. The solution? Content curation — a system that, as he put it, “listens and thinks more than it speaks,” that thinks of curation in terms of withholding useless bait so that a recipient’s attention is not wasted flitting from one silly provocation to another.

I dug up the conference proceedings where Simon presented this argument. They include a discussion of the paper in which Simon’s colleagues responded to his theory, making arguments nearly identical to those of today. Karl Deutsch, then a professor of government at Harvard, expressed apprehension about curation, or “filtering,” as a solution to information glut — it might neglect to surface “uncongenial information,” in favor of showing the recipient only things they would receive favorably, leading to bad policy creation or suboptimal organizational behavior. Martin Shubik, then a professor of economics at Yale, tried to differentiate between data and information — is what we are seeing of value? From what was then the nascent ability of computers to play chess, he extrapolated the idea that information processing systems might eventually facilitate democracy. “Within a few years it may be possible to have a virtually instant referendum on many political issues,” he said. “This could represent a technical triumph — and a social disaster if instability resulted from instantaneous public reaction to incompletely understood affairs magnified by quick feedback.”

Though spoken half a century ago, the phrase encapsulates the dynamics of where we find ourselves today: “a technical triumph, and a social disaster.”

Simon, Deutsch and Shubik were discussing one of social media’s biggest fixations more than a decade before Mark Zuckerberg was even born. Content curation — deciding what information reaches whom — is complicated, yet critical. In the age of social media, however, conversations about this challenge have largely devolved into controversies about a particular form of reactive curation: content moderation, which attempts to sift the “good” from the “bad.” Today, the distributed character of the information ecosystem ensures that so-called “bad” content can emerge from anywhere and “go viral” at any time, with each participating user shouldering only a faint sliver of responsibility. A single retweet or share or like is individually inconsequential, but the murmuration may be collectively disastrous as it shapes the behavior of the network, which shapes the structure of the network, which in turn shapes the behavior.

Substance As The Red Herring

In truth, the overwhelming majority of platform content moderation is dedicated to unobjectionable things like protecting children from porn or eliminating fraud and spam. However, since curation organizes and then directs the attention of the flock, the debate over it carries great political weight because of its potential downstream impact on real-world power. And so we have reached a point at which the conversation about what to do about disinformation, rumors, hate speech and harassment mobs is, itself, intractably polarized.

But the daily aggrievement cycles about individual pieces of content being moderated or not are a red herring. We are treating the worst dynamics of today’s online ecosystem as problems of speech in the new technological environment, rather than challenges of curation and network organization.

“We don’t know enough about how people believe and act together as groups.”

This overfocus on the substance — misinformation, disinformation, propaganda — and the fight over content moderation (and regulatory remedies like revising Section 230) makes us miss opportunities to examine the structure — and, in turn, to address the polarization, factional behavior and harmful dynamics that it sows.

So what would a structural reworking entail? How many birds should we see? Which birds? When?

First, it entails diverging from The Discourse of the past several years. Significant and sustained attention to the downsides of social media, including from Congressional leaders, began in 2017, but the idea that “it’s the design, stupid” never gained much currency in the public conversation. Some academic researchers and activist groups, such as the Center for Humane Technology, argued that recommender systems, nudges and attention traps seemed to be leading to Bad Things, but they had little in the way of evidence. We have more of that now, including from whistleblowers, independent researchers and journalists. At the time, though, the immediacy of some of the harms, from election interference to growing evidence of a genocide in Myanmar, suggested a need for quick solutions, not system-wide interrogations.

There was only minimal access to data for platform outsiders. Calls to reform the platforms turned primarily into arguments for either dismantling them (antitrust) or creating accountability via a stronger content moderation regime (the myriad disjointed calls to reform Section 230 from both Republicans and Democrats). Since 2017, however, Congressional lawmakers have introduced a few bills but accomplished very little. Hyperpartisans now fundraise off of public outrage; some have made being “tough on Big Tech” a key plank of their platforms for years now, while delivering little beyond soundbites that can themselves be digested in Twitter’s Trending Topics.

Tech reformation conversations today still remain heavily focused on content moderation of the substance, now framed as “free speech vs. censorship” — a simplistic debate that goes nowhere, while driving daily murmurations of outrage. Trying to litigate rumors and fact-check conspiracy theories is a game of whack-a-mole that itself has negative political consequences. It attempts to address bad viral content — the end state — while leaving the network structures and nudges that facilitate its reach in place.

More promising ideas are emerging. On the regulatory front, there are bills that mandate transparency, like the Platform Accountability and Transparency Act (PATA), in order to grant visibility into what is actually happening at the network level and better differentiate between real harm and moral panic. At present, data access into these critical systems of social connection and communication is granted entirely at the owner’s discretion — and owners may change. More visibility into the ways in which the networks are brought together, and the ways in which their attention is steered, could give rise to far more substantive debates about what categories of online behavior we seek to promote or prevent. For example, transparency into how QAnon communities formed might have allowed us to better understand the phenomenon — perhaps in time to mitigate some of its destructive effects on its adherents, or to prevent offline violence.

But achieving real, enforceable transparency laws will be challenging. Understandably, social media companies are loath to permit outside scrutiny of their network structures. In part, platforms avoid transparency because it offers few immediately tangible benefits and several potential drawbacks, including negative press coverage or criticism in academic research. In part, this is because of that foundational business incentive that keeps the flocks in motion: if my system produces more engagement than yours, I make more money. And, on the regulatory front, there is the simple reality that tough-on-tech language about revoking legal protections or breaking up businesses grabs attention; far fewer people get amped up over transparency.

“This overfocus on the substance makes us miss opportunities to examine the structure — and, in turn, to address the polarization, factional behavior and harmful dynamics that it sows.”

Second, we must move beyond thinking of platform content moderation policies as “the solution” and prioritize rethinking design. Policy establishes guardrails and provides justification to disrupt certain information cascades, but does so reactively and, presently, based on the message substance. Although policy shapes propagation, it does so by serving as a limiter on certain topics or types of rhetoric. Design, by contrast, has the potential to shape propagation through curation, nudges or friction.

For example, Twitter might choose to eliminate its Trending feature entirely, or in certain geographies during sensitive moments like elections — it might, at a minimum, limit nudges to surfacing actual large-scale or regional trends, not simply small-scale ragebait. Instagram might enact a maximum follower count. Facebook might introduce more friction into its Groups, allowing only a certain number of users to join a specific Group within a given timeframe. These are substance-agnostic and not reactive.

In the short term, design interventions might be a self-regulatory endeavor — something platforms enact in good faith or to fend off looming, more draconian legislation. Here, too, however, we are confronted by the incentives: the design shapes the system and begets the behavior, but if the resulting behavior includes less time on site, less active flocks, less monetization, well…the incentives that run counter to that have won out for years now.

To complement policy and design, to reconcile these questions, we need an ambitious, dedicated field of study focused on the emergence and influence of collective beliefs that traces threads between areas like disinformation, extremism, and propaganda studies, and across disciplines including communication, information science, psychology, and sociology. We presently don’t know enough about how people believe and act together as groups, or how beliefs can be incepted, influenced or managed by other people, groups or information systems.

Studies of emergent behavior among animals show that there are certain networks that are simply suboptimal in their construction — networks that lead schools, hives or flocks to collapse, starve or die. Consider the ant mill, or “death spiral,” in which a column of army ants loses the pheromone trail by which it navigates and begins to follow itself in an endless spiral, walking in circles until the ants eventually die of exhaustion. While dubbing our current system of communications infrastructure and nudges a “death spiral” may seem theatrical, there are deep, systemic and dangerous flaws embedded in the structure’s DNA.

Indeed, we are presently paying down the debt of past bad design decisions. The networks designed years ago — when amoral recommendation engines suggested, for example, that anti-vaccine activists might like to join QAnon communities — created real ties. They made suggestions and changed how we interact; the flocks surrounding us became established. Even as we rethink and rework recommendations and nudges, repositioning the specific seven birds in the field of view, the flocks from which we can choose are formed — and some are toxic. We may, at this point, be better served as a society by starting from scratch and making a mass exodus from the present ecosystem into something entirely new. Web3 or the metaverse, perhaps, if it materializes; new apps, if all of that turns out to be vaporware.

But if starting from scratch isn’t an option, we might draw on work from computational biology and complex systems to re-envision our social media experience in a more productive, content-agnostic way. We might re-evaluate how platforms connect their users, and the factors that determine what recommenders and curation algorithms push into the field of view — considering the combination of structure (the network), substance (rhetoric, or emotional connotation) and incentives that shapes information cascades. This could have a far greater impact than battling over content moderation as a path toward constructing a healthier information ecosystem. Our online murmurations can be more like the starlings’ — chaotic, yes, but also elegant — and less like the toxic emergent behaviors we have fallen into today.

The post How Online Mobs Act Like Flocks Of Birds appeared first on NOEMA.
