Penelope The Rat

Early in my third trimester, Penelope the rat disappeared and was presumed dead somewhere in my home’s walls.

Penelope was not a particularly notable rat before her disappearance. She was about a year and a half old. Like all Algernon Project rats, her coat was the slate blue that I’d dyed my hair to match (though since pregnancy I’d gotten a little lazy and my roots were showing). She had a splash of pure white around the eyes and nose that gave the appearance of a mask. She’d not done exceptionally poorly or well in the vocabulary or aptitude tests. Unlike my favorites, she didn’t seek out human companionship. She wasn’t shy or human-averse so much as independent, checking in and then going about her daily rat business (mostly horsing around with her favorite cagemate, Jasmine).

Still, I cared for her and mourned. The loss was also an embarrassment for the Algernon Project: BabbleLinks are an exorbitantly costly A.I. cross-species communication system.

In my defense, I was housing fourteen rats across three large cages when Penelope disappeared. I’d never lost a rat despite having been part of the program for years, and I’d long since stopped conducting rigorous headcounts. After free-roam, everybody was mostly eager to return to their preferred hammocks for mid-morning naps. At most, I’d note whether the younger rats — the ones who still explored with vigor — were all in their cages. But Penelope didn’t have the demeanor of a runner.

My husband Peter scolded the Algernon Project for not having included basic trackers in the implants. In truth, more surveillance had hardly seemed necessary. Video cameras were everywhere (I’d watched Penelope’s moment of escape many times). So long as the rats remained in close range, an LCD screen on the wall displayed their brain activity and vitals and transcribed everything they said.

And so, I was relieved a week later when I found Penelope sleeping in the curve of a running-wheel stored beneath her preferred cage.

I gently petted her awake. She yawned and stretched her paws forward — as though she’d only been away for a minute.

“Where have you been, Penny?” I asked through the BabbleLink. The human end of the BabbleLink was a bone conduction headset that translated the rats’ chitters and ultrasonic tones. When humans spoke, their headsets communicated with the BabbleLink implants in the project’s specimens and produced bone conduction sounds for the rat, designed to seem to emanate from the human wearing the headset.

She crawled into my palm lazily, anticipating that I’d transfer her to the cage. I had bandages on the back of my hand covering a new tattoo of a neuron. The rats all loved worrying it and Penelope was no different. She busied herself tearing at the gauze while I inspected her coat for injuries and fleas. My tattoo was mostly healed and, rather than pain, her nips created a physical tingle in me that mirrored my excitement over her miraculous return from the dead. I combed out the few fleas I found on Penelope, but otherwise, she was in remarkably good shape. 

“Where have you been?” I asked again. “Why did you leave? How did you survive?” This was bad form. Multiple questions with less familiar words like “survive” often led to muddy answers.

I placed her next to the communal food dish and watched her wolf down lab blocks — ignoring me. Then I listened to the familiar click-clank-click of her drinking from the water bottle.

I worried that her BabbleLink implant had been damaged, but eventually, her answers flowed in through my headset. “Mango?” she asked with urgency — an emotion signified by her faster, higher-volume speech.

In some ways, this was unremarkable. A good chunk of our newfound ability to communicate with animals involved relating culinary desires. Every rat I had ever known requested the same four foods: peas, corn, nut butters and avocado. Beyond that came individual preference. Long ago, I’d laid down ground rules that my rats could state food preferences only after I said “Requests?” Otherwise, the BabbleLink became overwhelmed. My rules were not always honored by the rats, but ignoring their unsolicited demands helped hold the line. 

In other ways, Penelope’s request was peculiar. While she was a fan of fruit, she’d never requested mango by name before. She also was not particularly food-motivated and rarely made unprompted requests. She was polite. But she’d just returned from an adventure and so I indulged her.

“It’s frozen. Give it some time to warm in the water or you’ll hurt your tongue.”


Penelope’s requests for mango continued in the days that followed, as did her heavy appetite. I rationed lab blocks for the first time ever. Typically, animals didn’t overeat lab blocks because they weren’t very appetizing. This restriction, however, worsened her behavior. I caught her stealing food from friends. She stopped coming out for free-range time and slept more. She was curt and sometimes downright touchy when questioned about why she ran away, where she’d gone and why she returned.

I didn’t panic. Prey animals are hesitant to reveal what bothers them and the BabbleLink doesn’t change this instinct. Yet at the same time, traumatic events could change a rat’s personality. I’d witnessed this when a friend of theirs died after injuries from hind-leg degeneration, or even just as they adjusted to the indignities of rodent aging. I’d had an energetic rat pup suffer an electric wire shock that left him afraid to leave the cage. From the outside, he’d just seemed to spiral into a spontaneous existential funk. After a week of playing rattie-therapist, he confessed what had happened. I showed him how to avoid shocks, double-checked for other wires and promised he’d be safe. He recovered.

I held out hope that Penelope could similarly be coaxed to talk with patience, even as she denied being lethargic or that anything hurt. Vitals revealed nothing amiss. I chalked the changes up to overexcitement, suspecting that they might subside.

After another week, the lethargy broke. A flurry of activity followed where she requested paper towels, tissues and cotton balls. She built a fort of sorts. This new interest in engineering led to a fresh conflict on cage-cleaning day. Cage-cleaning day was always unpopular, and so I typically waited for everyone to be out playing. But now Penny no longer left the cage; instead, she sat territorially in her little fort and refused to come out — even giving a warning nip at my finger.

“Please, Penny,” I said.

“Sharon,” she said back, in a manner that I swear sounded like sarcasm.

“You can rebuild your fort. The cage is dirty.”

She stared back stubbornly. I sighed and petted behind her ear, then down to her rump. She didn’t relax, but also didn’t protest with another nip.

“Your belly has gotten so big,” I said. “My God.” My hand shot away, sending Penelope’s hair up in high alert. “Penny, you’re pregnant.”

Her body went slack and she chittered. The AI translated this as laughter — perhaps in response to my obliviousness. 

“I didn’t realize,” I said. “I won’t clean. Hold on.” I got her a fresh mango meant for me. She ate the messy fruit straight from my hand and everything seemed forgiven.

Her head tilted. “You can’t smell my babies?” she asked.

“No,” I said. “You can smell babies?”

“I smell your babies.”

My baby. Six months along. I hadn’t mentioned him to the rats, but suspected they were aware. They were newly curious about my swollen belly.

Penelope’s comment was a breakthrough. Plenty of evidence existed that animals anticipate the future (something unsurprising to anyone who’s seen a dog excited for a walk), but expressing awareness of a future childbirth was sophisticated anticipation.

“I can’t smell your babies. Your nose is stronger than mine. Do you know how you became pregnant?”

“Babies.”

“Yes, babies. But do you know how they got in your belly?”

Penelope was silent. The data on the LCD screen attached to the wall showed her brain trying to process; then the attempt abruptly stopped and she grew distracted.

“You met someone out there,” I said, guiding her toward an idea.

“Fernando,” she said. Fernando was Penelope’s former favorite elder cagemate. He had passed away a year before. For a mad moment I imagined he’d actually escaped and was living in my walls, but I’d seen him peacefully gassed after his cancer spread.

“Fernando who used to live with us?”

“No. Fernando. Fernando,” she repeated insistently.

“Okay. This new Fernando, he was nice? You got along with him?”

“Fernando smelled good.” This was her way of expressing affection, rather than concept words like love — the cagemates she cared for smelled good. Disfavored cagemates smelled bad.

“Could he speak?” I asked.

“Mute.” Mute was their word for a rodent without an implant. Rats didn’t use BabbleLinks with each other directly, but they knew who had implants. The project hadn’t figured out how the rats could tell — but I suspected it came from listening to their cagemates’ vocabulary and observing how their companions interacted with humans.

“Do you think this is why you got pregnant? From meeting Fernando?”

The brain activity scanner didn’t tick up this time. “Sleepy,” she said. Some rats would say I don’t understand. Penelope’s habit was that — rather than admitting confusion — she claimed to be tired. Pride, perhaps. Or she was tired. Already, this was the longest conversation Penelope had ever had with me.

I nodded. “Was life difficult while you were gone?”

Her brain activity flared up again. “Fernando hungry. Thin. Penelope hungry. Poison food. Poison.”

This was a stunner. Had I mentioned poison? Never. I’d once inadvertently offered them spoilt peas, but probably called them moldy. But then again, poison would be central to a wild rat’s life, so of course some word would exist.

I petted her firmly, resisting the urge to squeeze her in apology for rationing her food when she’d first come home, and for all the poisoning that humans had done across history.


My gestating baby was most active at night. The evening Penelope returned, I felt him kicking while considering this cross-species conversation. I debated internally whether my own baby had triggered Penelope’s elevated maternal drive.

The Project had set up the experiment with neutered bucks (male rats) and “intact” does (female rats). The bucks’ surgery was done simultaneously with BabbleLink implanting. This had been a compromise result so that lab leaders could commingle the sexes without propagating a million rats, while also preserving the ability to continue the biological strains the Algernon Project had carefully bred.

Neutering is less invasive than spaying and lowers buck aggression. Although leaving the does intact raised their tumor risk slightly, that seemed a fair trade for avoiding surgical risk and preserving some gene lines. But it did mean the does had weekly cycles where they grew hyperactive and harassed the bucks with futile mating rituals, sometimes mounting them as if demonstrating what they should be doing — teaching steps to a dance these bucks would never learn.

Generally, I tried not to think about my rats as sexual beings. Normal things easily got weird when I did. For example, some rats enjoyed being tickled during playtime. Yet some female rats only requested tickling while in heat. What to do with that data? It was best to ignore the ramifications. Usually, I just ended up tickling them and moving on with my day.

But having a pregnant rat who had confided in me the details of a forbidden coupling was something else. The incident made me suspect that Penelope was far more intellectually capable and resourceful than I’d realized, to the point that she’d even kept her cleverness secret. What if she’d seen me pregnant, conspired to escape and gone on a hero’s journey to create her own parallel pregnancy and children?

I woke Peter even though I knew this would annoy him, and recounted the day’s events. As I spoke, he traced my linea nigra — that mysterious line that appears during the second trimester. Mine was thick and rich and ran all the way from my belly button down to my pubic hair.  

When I finished the recounting, he sighed. “That’s everything?”

“That’s not remarkable enough for you?”

“A tale as old as time. A small-town lass has no viable men around, so she sets out to the next town over. In the human version, six months later she’d show up at her parents’ door crying that Fernando had jilted her. For rats, I’m sure you can find many Reddit pages asking what to do if your pet rat escapes and then returns pregnant.” He sighed again, then asked, “Well grandma, are you going to let Penelope keep the litter?”

“What do you mean?” I was half-horrified he’d considered any other option, but he was correct that the Project might request this.

“They’ll be half wild. They might harass their tame cage-mates. They might bite them or you. Imagine having a half-wolf in a dog pack.  And the expense of maybe a dozen more unplanned Babble chips might concern the Algernon Board.”

“I can’t believe I rationed her food. So stupid, the pregnancy was so obvious. You’d think I was a hobbyist.”

“Obvious once you saw it.” He rubbed my back. “She’s eating plenty now. Much more overall than if she’d stayed away.”

“She was on calorie restriction for nearly half of her pregnancy.”

He scratched the surface of my bump. Our boy was doing bicycle wheels in there. “An understandable blind spot. You were thinking your rats were a new thing, separated from the wild world — which they are in many ways. But they are still also that old thing.”

My friends sometimes suggested that Peter was the rational one and I was more emotional. He was ex-military, and many read rationality into his good posture and understated delivery. But I knew him well enough to see past this. He had wept uncontrollably for almost 20 minutes after I told him he was going to be a father. It was so unexpected, so raw, that it took some time to realize he was happy and merely overcome with emotion. The child had been planned, and conceiving hadn’t been as difficult for us as it is for many couples in their 30s. This outburst had made me love him more, which annoyed me. Crying over something like that shouldn’t make him more worthy of love — but some primal place in me was stirred and reassured by his display.


I spent several days drafting an explanatory email to the Project’s program director. In the meantime, Penelope consented to having her makeshift nest moved to the maternity cage, where only Jasmine, her best friend, was allowed to join. I showed Penelope our baby room from her perch on my shoulder. Rat eyesight is weak, so I had to take her up close to see the cradle, the glider where I’d nurse and the little stuffed rat toys everyone had sent as gifts. I explained this was my nest, feeling a little silly given that Penelope’s overwhelming experience of it, for now, was likely the unpleasant smell of off-gassing. Still, she was chatty and curious, asking why the baby wouldn’t sleep with Peter and me. Contrary to her pre-adventure behavior, she wanted to talk all the time.

I watched Penelope labor in real time from the next room via video screen, as she moved every which way, trying to find a comfortable spot. Something in her breathing suggested birth was imminent. This squirming stage lasted for about half an hour. Then, slowly, eight pink pups came out and immediately squealed. She licked the squirming pink mass of babies clean, chewed through the tiny umbilical cords and consumed their still-throbbing placentas. And like that, it was done. No epidural, no forceps, no c-section, no doctor or midwife shouting “push!”

After she woke from a long nap, I approached the maternity cage with mango. Penelope ignored it and instead dragged me by the finger to where the blind, hairless pink pups were stacked. I dutifully petted them. Her BabbleLink transmission was an endless loop of their names, all of which were old cagemate names plus Peter and Sharon. (Perhaps rats simply don’t have many name sounds?) Penelope kept noting how good the babies smelled.

I sent my email to the Board only once the birth was complete, ensuring that aborting wasn’t an option.

I was invited to present my case at the next online meeting of the Project’s Board. 


STEVEN (SECRETARY): This is the time and the place designated for our April 10, 2030, meeting of the Board. We have present five Board members. One non-present voting member has delegated her right to the Chair, Emily Spiro. The sole item on our agenda is whether to grant a one-time expenditure for BabbleLinks for an additional litter of unexpected half-wild rats. 

EMILY (CHAIR/ BOARD SEAT #1): I think we are going to hear briefly from Sharon Esposito, the leader of Lab Number 26. Right, Sharon? Are you in the meeting?

SHARON (LAB LEADER # 26): Here.

EMILY (CHAIR/ BOARD SEAT #1): Oh, I love the tattoo on the back of your hand. Is that a neuron?

SHARON (LAB LEADER # 26): A rat brain neuron. I did my graduate research on rat consciousness as well.

EMILY (CHAIR/ BOARD SEAT #1): Fitting. Plus, your hair is the same color as the rat coats. Classic. So, what would you like the Board to consider?

SHARON (LAB LEADER # 26): Yes. So, Penelope the rat escaped for one week and was impregnated by a wild rat. She successfully gave birth to eight pups earlier this week: three girls, five boys. The mom and babies all survived and appear healthy. As a result of her sojourn, Penny has become more expressive and is providing insights into maternal behavior. I’d love to see where this takes us and I believe having implants for the babies will continue us on that path, and provide insights into wild rat minds. Letting me raise half-wilds would be a good half-step.

EMILY (CHAIR/ BOARD SEAT #1): Thank you for that. I’ve spoken informally with the rest of the Board and I can say approving money for the half-wild litter won’t be an issue.

SHARON (LAB LEADER # 26): Great! You don’t know what a relief that is.

EMILY (CHAIR/ BOARD SEAT #1): Of course! Your experience already shows that there’s much we can learn from expanding our data set. The revelation that wild rats seem to have a pre-existing word for “poison” is fascinating.

SHARON (LAB LEADER # 26): Surprising yet intuitive. We know wild rats communicate poison risk. They’ve been observed designating a “taster” when encountering new foods and smelling each other’s breath to memorize the scent, then avoiding similar foods when the taster gets sick.

EMILY (CHAIR/ BOARD SEAT #1): Fascinating. For the benefit of the group, do we have an idea of how much transfer of knowledge there is among your rats, say intergenerationally?

SHARON (LAB LEADER # 26): We know they use the sounds we teach them through the BabbleLink with each other. This builds a vocabulary that outpaces their non-linked peers. Each successive generation is more sophisticated. For example, we see little things like the older rats teaching the younger ones where the designated toilet areas are, so we don’t need to potty train each new generation.

EMILY (CHAIR/ BOARD SEAT #1): Very helpful. Now, the reason I ask all this is because we got a call from a key funder with anxiety around the accidental release of super rats who understand how bait works and who is setting it. The information we give these augmented lab rats will spread to the wild population. We want to implement some additional mitigation measures, and we’d very much like to say in the press release that you and the other Lab Leaders support them — plus mention your rat’s outside adventure.

SHARON (LAB LEADER # 26): What kind of measures?

EMILY (CHAIR/ BOARD SEAT #1): First measure, we’ll ask that you spay the half-wild does when the bucks are neutered. The half-wilds are seen as a greater flight risk: they aren’t bred for docility, and their mother has already shown the capability. We don’t want them to continue their line in the wild if that happens.

SHARON (LAB LEADER # 26): Okay.

EMILY (CHAIR/ BOARD SEAT #1): Second, we’ll put in guardrails to make sure that escapees won’t be able to interfere with pest control. The funder compared the risk of rats escaping to gain-of-function lab leak risks, because your labs contain enhanced species that, if introduced to wild environments, could quickly spread and dominate standard species due to their communication advantages. And I agree that we need to think through what happens if an evolution we engineer spreads in the general community. So until further notice, we are asking that you not share anything with your rats about the history of lab work or engage in any data sharing about how poisons work or how to spot them. Current subjects with this knowledge must be isolated from younger generations until natural death.

SHARON (LAB LEADER # 26): Is this necessary? We’re not talking Planet of the Apes. Their communication is still mostly monosyllabic.

EMILY (CHAIR/ BOARD SEAT #1): So long as the BabbleLink is mechanical and non-hereditable, I agree that sophisticated language evolution risk is low. But as your own experience suggests, these are social creatures that teach each other, and we are only now getting a loose grasp on their language capabilities. Any trait that improves survival chances could quickly dominate and frustrate rodent control.

SHARON (LAB LEADER # 26): Still, you’re talking about isolating twilight rats, which can be stressful.

EMILY (CHAIR/ BOARD SEAT #1): As to your colony, this concern is purely theoretical as our database records show that your Penelope was the first in your colony to use the term “poison,” and that word was introduced to her by a wild rat, rather than vice versa.

SHARON (LAB LEADER # 26): So you’re saying Penelope would not need to be separated under these new guardrails?

EMILY (CHAIR/ BOARD SEAT #1): Correct.

SHARON (LAB LEADER # 26): That’s fine then. I wasn’t planning to teach my rats molecular biology. I don’t want to hold this up.

EMILY (CHAIR/ BOARD SEAT #1): Great. I think that’s it. We are ready to vote on our agenda.

ANAMARIA (BOARD SEAT #4): Hold on. Hold on one second. I have a right to enter my dissent into the record. As the saying goes, great causes have a habit of becoming businesses and then degenerating into rackets. Our vote today completes a shift from the Nomadic Labs, the movement, to Algernon, LLC the business. While I can see everyone currently on our Board still believes deeply in animal welfare, unless we hew back to our initial purpose, I fear the racket is imminent. 

ASHIM (PROJECT DIRECTOR): Missions change and—

ANAMARIA (BOARD SEAT #4): I didn’t interrupt you, please let me make my record. I feel I need to remind everyone, as the last remaining member of the old guard and the only Board member with gray hair, of our initial mission. Though we do business as the Algernon Project, our legal name remains NRNL. As in Natural Research Nomadic Labs.

Our original purpose was to design an ethical way to conduct rodent experiments by raising them as quasi-pets. To give them good lives, love them and gather data as injuries and illnesses naturally arose. With enough volume, our hope was that this “natural research” would provide more scientifically accurate results because during prior rat research the animals’ anxiety, poor health and depression confounded results. The variation of domestic environments would help because humans, after all, do not live with standardized diets or habits.

But now, money has warped this simple idea into one involving expensive AI equipment. The project’s namesake, Algernon, typifies this. Only one Algernon was ever introduced to the public, a charismatic rat with a slate blue tint to his coat. The dirty secret was that hundreds of Algernons lived unpleasant lives to develop the technology, and more are suffering in labs now to upgrade it. Even the implantation procedure has a mortality rate we’d never accept in human babies. So, in effect, we’ve moved from protesting cruel research to funding it.

EMILY (CHAIR/ BOARD SEAT #1): We’ve all read your book, AnaMaria. Is this really necessary to repeat here?

ANAMARIA (BOARD SEAT #4): Absolutely it is. If we want to eliminate animal testing, the best way to do that is to not test on animals. These new protocols are going to put us back in the place of lab worker/lab rat dynamics. For the first time, Sharon is being asked to limit what she can communicate to them for the purpose of, what, helping pest control agencies maintain their jobs and out of a fear that rodents will become too conscious of what is being done to them? This is not how we treat pets. And why are we expanding our mandate to include studying wild rats? Why are we monitoring their intergenerational communications about poison? What use is that data except for rodent control? Let poison manufacturers do their own studies.

EMILY (CHAIR/ BOARD SEAT #1): Is that all?

ANAMARIA (BOARD SEAT #4): That’s all.

EMILY (CHAIR/ BOARD SEAT #1): Well, I didn’t expect to have debate club today, so I didn’t prepare a formal rebuttal and don’t speak for the full Board, but let me just say a few things for the record. First, I’ll just note that AnaMaria and her late husband had years to create a sustainable model without an AI component. As we know, enough rats with naturally occurring conditions must be presented to research anything meaningfully. But the pure natural model never got enough participants to hit those statistically significant thresholds. Outside the welfare community, researchers’ habits are sticky. They were trained torturing rats, their teachers were trained torturing rats and they’ll instruct their students to torture rats unless there are quantifiable benefits to transition to a different system.

Using AI as a carrot, we have enough participation to create usable results for behavior studies, common illnesses and nearly all major cancer research. So, while my esteemed colleague AnaMaria remains an inspiration to me and a friend — and I do envy her ideological purity — she is, unfortunately, comparing a theoretical nonprofit of unproven viability with an actual operating one that needs to make complex ethical choices. We have good data demonstrating that giving animals the ability to say “that hurts, please stop” changes researcher behavior — including the behavior of researchers not directly working with our chipped subjects.

Finally, contrary to what AnaMaria implies, we are not a for-profit corporation and nobody affiliated with us stands to make money from today’s decisions. We are accommodating donors not because we have lost the faith or are “selling out” but rather because they are correct. Ideologically, we don’t want to make rodent control harder when we all know excess wild rats disproportionally impact poor urban areas. Okay, that was a lot. Secretary, are we ready to vote in today’s measures and unplanned expenditures?

STEVEN (SECRETARY): I’m ready. On today’s agenda items A-1 and A-2 and A-3, Board seat two, how do you vote?

ANATOLY (BOARD SEAT #2): I vote aye to all measures.

STEVEN (SECRETARY): Board seat three?

STACY (BOARD SEAT #3): Aye to all measures. Thank you for that discourse, both of you. I thought it was helpful.

STEVEN (SECRETARY): Seat four?

ANAMARIA (BOARD SEAT #4): Nay.

STEVEN (SECRETARY): Seat five?

JOHNNY (BOARD SEAT #5): Aye.

STEVEN (SECRETARY): And the chair?

EMILY (CHAIR/ BOARD SEAT #1): I vote aye.

STEVEN (SECRETARY): The resolutions pass 4-1. This meeting is adjourned.

EMILY (CHAIR/ BOARD SEAT #1): Okay everyone, thank you for your time. Sharon, thank you for your patience; don’t be surprised to see your name on the press release for the new protocols. We’ll get you a draft before the notice goes out so you’re comfortable.

SHARON (LAB LEADER # 26): Sounds good.

EMILY (CHAIR/ BOARD SEAT #1): Great. See the rest of you at the annual benefit.


I shut my laptop screen and turned to Peter, who had listened off-screen.

“What do you think?” my husband asked.

 “We’ll see what the press release says. It might be fine. I’ll probably sign off.”

“That’s it?”

“I mostly found myself agreeing with whoever was speaking. I hadn’t really intellectually separated the AI piece of it from the distributed labs piece beforehand. And I guess I dissected rat brains for my PhD research, so I don’t have the high ground. Maybe I’m exhibit A of the person Emily imagined when she said the AI gets people in the door through curiosity or whatever. Ultimately, I suppose I’m more concerned for Penny and my colony. This outcome seems to work okay for them.”

He nodded slowly, not exactly agreeing — more contemplating. 

“I guess I’m sad that the half-wild ones aren’t going to have their own pups someday,” I said. “But a small price, all told, for the life we give them. An abstract thing to steal from a small creature who may not know what they’ve lost — having children or grandchildren. Wild rats rarely live to meet their grandchildren.”

He kept up that slow, unconvinced nod.

“The thing I hadn’t thought about is really all that surveillance. The fact that they could look and see I hadn’t already discussed poison with the rats. That’s strange and drives home that this is a massive research project, not a hobby. I know that privacy isn’t a fair expectation and caring for the rats is in some sense a job. But I don’t think of it like that when I’m interacting. What do you think?” 

“What I’ve always thought. Very little that’s useful can be learned about humans from studying rats, and we shouldn’t do it.” He stood to leave the room. “That said, studying you looking after your rats lets me know our son will have a loving, caring mother.”

“Then the experiment was a success,” I said, and half smiled.

He winked at me, rubbed my shin, kissed my belly and headed out.

I resented that his beliefs were so pure and simple — like AnaMaria’s, his absolutism freed him from complex, messy moral choices. Instead, his solution of banning all animal testing was clean, and it would never be implemented.


And so, I signed off on the press release.

Penelope’s pups grew. Their pink skin gained dark brown fur that easily distinguished them from the classy blue tint that marked the main Algernon line. As Peter predicted, they behaved half-wild. They ran around saying, “Hey!” “Hey!” “Hey!” as they bumped into each other and play-fought (before the operation, we could roughly translate words, but the pups couldn’t understand us).

When the day came for them to be fixed and receive their implants, a pup did die — little Peter, sadly. As was my practice, I left Peter’s body in the cage so the others would know he had passed away. Penelope licked at Peter, as if trying to wake him. After a few minutes, she gave up and thereafter ignored the corpse. She never mentioned the loss to me.

Another of the bucks was too aggressive — raising his fur and hissing if I got close to him, chasing, shoving and pinning his brothers, and generally making life miserable for everybody — and needed to be separated. He finally was put down after he nipped Peter (the human) hard enough to draw blood. The rest grew to be physically and emotionally healthy, if otherwise unremarkable, adults.

While they were pups and it was unclear whether their wild side would make them too aggressive to be near, I kept Penelope’s brood with just Penelope and her best friend Jasmine, who also started lactating and helped with the caretaking. Once they were neutered and past the asshole-teenage stage, I decided it was safe to reunite the colony. This was just around the time I gave birth to my son Jackie. Penelope loved licking him.

Another few months passed and Penelope acquired stately greys to go with her blue coat. They came in a slightly different color from the vibrant white splash around her nose and eyes. She grew even chattier than before. The rats I was closest with often grew extra chatty toward the end. This began around the two-year mark when their bodies started winding down, entering into their twilight age. During this period, Penelope, who rarely sat still when younger, would sit on my shoulder and listen to the younger ones play, peering down in their direction. We talked and talked but in simple sentences. Age made Penelope’s thoughts lose sharpness.

I’d given up drinking for the pregnancy but had resumed after Jackie arrived. I learned from the internet that the safest time for a nursing mom to drink was actually while nursing so that the alcohol would be out of my breastmilk before the next session. One night, while feeding Jackie, I had a little more wine than usual and I got chatty with Penelope.

I brought up Penelope’s escape into the walls.

“Me?” she said, surprised.  

“You don’t remember?”

“No,” she said when I asked if she remembered anything, then chittered, seemingly amused by her younger self’s brashness.

“No memory of Fernando in the walls?”

“Fernando. Son.”

She fell asleep, and so did Jackie, and I put him down in the cradle, returned to the rat room with a fresh glass of wine and watched the young ones play.

They slept so many hours in the day, these pet rats, with their three-year lifespans — that’s just when human kids start to know anything of the world around them, and kids have more waking hours in that time. Life extension was the key to learning what rats could fully evolve into. But what scientific use was there in breeding long-lived rats? The project was probably something you could sell to some Silicon Valley guy who didn’t give a damn about rats but would pay a million dollars to stay young or simply live, for one more day.

They were taken too soon. Little lives in fast forward. Penelope was in her twilight period and firmly content, like a human retiree watching the manatees in the canal out the back porch, nowhere to go, no plans to be made, naps pleasantly sneaking up and weighing down their eyelids, as they slowly left this earth, a little less present each day while the young tried to squeeze out more moments, more memories, shaking them and startling them awake to announce dinner, startling them awake to say I love you. One last time.

Penelope yawned back awake, and I moved her to my lap and gave her a nut to puzzle open.

I wanted to tell her about the poison out there in the wild world and everything humans had done, mostly because I had been told I couldn’t tell her that. But I knew all this was recorded, surveilled, tabulated and that I could lose my place as a lab leader.

What came out was something else:

“I know you’re not going to understand this, Penny, but when I was in grad school. When I was learning how to be a scientist, I was doing research. I didn’t feel comfortable turning projects down — despite the way they had us treat the rats. I didn’t have the power or the awareness that rats were like you. I did things that were unkind.” As I spoke, I spilled a little of my wine on the baby’s swaddle, triggering another wave of guilt.

Penelope’s brain scan was going up and down, I think more in response to the anxiety and seriousness she heard in my voice. Then, as I kept talking on, her brain scan didn’t beep at all, as though the effort had exhausted her. Penelope was just letting my words flow over her now.

I teared up.

Then Penelope said, “You smell good, you smell good, you smell good,” as if to soothe me.

The Next Netflix Should Be Owned By Screenwriters

Most of us likely believe that if we found ourselves in an unfair, untenable circumstance, we would not hesitate to take the necessary steps to stand up for ourselves and demand appropriate respect and fair treatment.

Science, however, doesn’t back up this notion. The truth is humans are hardwired to prefer the status quo and will tolerate a hell of a lot to preserve it — even when presented with other, objectively better options. Such is our resistance to the uncertainty that always accompanies change.

We’re thinking about this fact in regard to the strike against film and television studios by the Writers Guild of America and SAG-AFTRA, a labor union representing roughly 160,000 actors, artists and media professionals in the entertainment industry. Is the innate human reluctance to upset paradigms keeping Hollywood workers from going as far as they could — and maybe should — to secure their futures?

We believe wholeheartedly in the purpose and the place of labor unions, but we also came of age in an era when organized labor proved hugely vulnerable to the machinations of both government and the corporate sector. In 1981, then-President Ronald Reagan invoked an arcane law to summarily fire more than 11,000 striking air traffic controllers, in the process signaling to corporate America that it was open season on unions.

And corporate America took the hint: Between 1947 and 1979, major strikes averaged between 200 and 400 a year. Less than a decade later, at the end of Reagan’s second term, that number had dropped to just 40. Unions realized that corporate hardball practices — verging on, or even crossing, the line of legality — would rarely, and only minimally, be prosecuted or admonished politically.

Ironically, about two decades before firing the air traffic controllers en masse, Reagan served as president of the Screen Actors Guild and led the union in a joint strike with the Writers Guild of America, helping to secure the system of residual payments that, until the advent of streaming, ensured writers and actors were paid fairly for their work.

In the last few years, organized labor has seen a revival in the United States. Workers have attempted, with varying degrees of success, to organize at corporate leviathans like Amazon and Starbucks. They’ve struck in the California educational system, at The New York Times and at major U.S. automobile factories. A 2022 poll indicates 71% of Americans approve of labor unions, a dramatic increase from 48% a little over a decade ago. 

All of this is ultimately good. But does organizing workers go far enough?

Can Americans really feel and be secure when fundamental top-down and capitalistic-focused structures continue to dominate the U.S. corporate landscape? U.S. workers today are still vulnerable to the pique of their bosses and executives, typically lose ownership of any intellectual property they produce, and serve to support an oligarchy that siphons off the wealth created by their labor — often regardless of whether they’re unionized or not.

Of course, it’s not usually feasible for rank-and-file wage workers to simply leave their jobs and strike out on their own. Having a choice between standing up for oneself and being able to eat is no choice at all. 

This reality is precisely what keeps workers in line and the train that is capitalism chugging along: Most workers don’t have options. They can’t just walk away. Corporate leaders know this and have used this fundamental power imbalance to perpetuate, and even grow, existing inequities. Round and round we have gone — and still go — for more than two centuries.

But rather than strike against Hollywood studios as WGA/SAG-AFTRA did this year, there may be a better way. We envision writers, actors and anyone else involved in creating the stories that people all over the world take time from their lives to watch, using industry upheaval as an opportunity to wrest back control of these stories — and their livelihoods.

An Old Idea Made New

In past Writers Guild strikes, screenwriters won financial gains but allowed the overarching power structure to remain: Studios and their executives controlled the means and terms of production. Not only did this fail to fundamentally change the power imbalance in the industry, it also marked repeated missed opportunities for bold thinking and innovation. 

Labor unrest, though frightening and painful for all involved, also invariably serves as a moment for potential paradigm disruption. In such moments, screenwriters should consider doing more than just fighting to secure better terms of employment by the Hollywood studios. Instead, writers should create a new studio of their own.

The idea is less far-fetched than it sounds, and there’s a century-old precedent for it. In 1919, United Artists, a film production and distribution company, was incorporated by three leading actors — Mary Pickford, her husband Douglas Fairbanks and Charlie Chaplin — as well as the director D.W. Griffith.

Their joint statement announcing the venture could have been written last week: “This step is positively and absolutely necessary to protect the great motion picture public from threatening combinations and trusts that would force upon them mediocre productions and machine-made entertainment.”

A more modern variation of the UA model is found in Blumhouse Productions, which was founded in 2000 and pays creators a union-minimum fee upfront but gives them a larger and more transparent cut of any profits. Just last year, Ben Affleck and Matt Damon established Artists Equity, a production company in the same vein, to reduce upfront costs and share profits with writers and actors but also costume designers, cinematographers, editors and other workers who contribute to the making of a film.

These are all improvements on the historic status quo, of course. And yet, more could be done. There is a tremendous difference between sharing a portion of profits with creators versus sharing ownership of the work itself. A company owned by a few individuals can decide to be less generous with artists or sell out altogether to larger interests or corporations; for example, United Artists is now owned by Amazon. By contrast, distributing most or all of a studio’s equity among thousands of artists effectively prevents the studio from abandoning its founding purpose. Ultimately, screenwriters would have to decide how to distribute these shares among themselves, including whether to reserve shares for future generations of writers. 

Envisioning A Writer-Owned Studio

Screenwriters could expeditiously create a studio. As for sets and equipment: Soundstages can be rented. Screenwriters would use their expertise to step into the gatekeeping role of selecting what to produce. With writers solely in charge and incentivized as part owners to focus on high-quality projects, content would also likely improve. While one chef might not be the best judge of her creations, no one is better positioned to know the best food in the world than a group of the world’s most celebrated chefs. The same would apply to a studio owned by screenwriters.  

Although existing studios have name recognition, that matters far less in the age of the influencer where TikTok stars show how easily and rapidly word of mouth and notoriety can be established. Plus, if screenwriting giants back such an endeavor, it would instantly imbue the new studio with gravitas.

While the biggest hurdle to such a venture would likely be access to capital, the money to fund a next-generation studio start-up could come from numerous sources. For example, the Mozilla Foundation is a nonprofit that successfully competes against giants with its Firefox web browser, which is partially funded by the for-profit Mozilla Corporation’s roughly $500 million annual revenue.

FC Bayern München AG, a European soccer club powerhouse, is 75% fan-controlled, with Adidas, Allianz and Audi each owning an 8.33% stake. OpenAI was started as a nonprofit and then established a for-profit subsidiary that raised more than $11 billion to fund its intensive computing needs.

Or how about a more workaday example? Publix Super Markets, a Florida-based chain, is roughly 80% owned by its employees (current and former). And this is no socialist charity: Publix consistently outperforms other supermarkets, like Kroger, that have a more traditional ownership structure, putting up wider net margins despite only making one-third of Kroger’s sales. Meanwhile, REI, an outdoor gear and clothing retailer, is organized as a member-owned co-operative.

Studio executives will note that some studios have recently struggled, partly due to changes in the business brought on by streaming and the more recent Covid pandemic, but executive greed has also played a part. The WGA is asking for writers collectively to be paid roughly $429 million more a year, in an industry that made $28 billion in operating profit and paid eight major CEOs a combined $773 million in 2021.

A next-generation studio for writers could incorporate checks on such greed in its governing documents and principles. For example, there could be limits on how much the new studio’s executives could make relative to the lowest-paid writer-owner. The new studio could also implement something like a parity pill as pro forma for executive hiring contracts, where executives agree to automatic pay cuts during times of economic downturn and before writers were asked to reduce any payouts, even temporarily.

If outside investors were sought to help fund the new studio, there could be a cap on the potential profits these investors could realize, possibly based upon some multiple of their initial investment, with the remainder of those profits returning to other shareholders.

Benefits For Writers And Society

There are several benefits to writers creating their own studio.

First and foremost, it will help stabilize employment for writers. At a time when jobs in screenwriting have become more tenuous, shorter-term and less lucrative, the cost of living in cities like New York and Los Angeles has skyrocketed. While creating a studio won’t do anything to curb inflation, lower rent or reduce mortgage interest rates, it will provide writers with steadier work and increase the percentage of revenue that goes to them. Screenwriters would have more control over how much they create and how many writers they hire.

Establishing another studio increases competition for those that already exist and increases bargaining power for writers in future negotiations for better compensation and greater job stability; that’s in addition to the aforementioned profits they would reap. Others, such as actors and behind-the-scenes artists, might also indirectly benefit from improved employment terms over time. 

Second, a writer-controlled platform will enable more permanent protections against the harmful use of AI in Hollywood, while not completely banning its use as a tool. Studio executives see AI as a cost-cutting bonanza and a means for pushing out content that is optimized to attract the most eyeballs (whether or not that would actually bear out remains to be seen, but color us skeptical).

Despite the studios agreeing to certain protections around the use of AI in negotiations this fall, the lack of anything ironclad and long-term means that the can will be kicked down the road for future labor disputes. Eventually, as technology continues to evolve, studios may find ways to use AI to generate intellectually derivative scripts, hiring writers only for revisions, if at all. This will not only lead to job losses and lower compensation but also a stagnation of culture.

The advantage generative AI has over human writers is its ability to produce massive amounts of content in mere seconds (we use the word “content” deliberately here, as distinguished from “stories,” which is what human writers create). But AI’s distinct and insoluble disadvantage is that it does not feel. In fact, it doesn’t even understand language, but rather uses math to determine which words to utilize, and in what order.

It follows, then, that until someone discovers an equation that solves for love, joy or hatred, we will need human writers for stories that delight and move us. Anything produced by AI in the future will be like one of those TV fireplace loops: It may look like a fire, but you better not count on it to keep you warm.

Third, and relatedly, a screenwriter-owned studio could improve the quality of television shows and films. Some critics have averred that the quality of films and television has recently diminished as Amazon, Apple and other streaming services spread writing talent too thin. But there is no shortage of writers capable of making great shows. The slide in quality is actually due to studios rushing production schedules, creating shorter seasons and employing dramatically fewer writers per show — often not keeping them on staff through the entire production run to save costs and increase efficiency.

Creating a studio would of course be an immense, difficult, complex and risky undertaking. It would not be a panacea for all the existing problems of Hollywood as an industry, and realistically, the effort could fail and result in repercussions for those who supported it. Even if it did succeed, the studio would only be one small organism in an otherwise unchanged ecosystem of reflexive exploitation and corporate greed. 

On the other hand, the last few decades have seen widening economic inequality, with an ever-greater percentage of national income going to corporate owners instead of laborers, and ever-larger gaps in compensation among laborers as well. CEO pay has massively outstripped pay to rank-and-file workers, to the point that even mentioning such a widely-known fact feels like mentioning that water is wet or the sky is blue.

It’s not a coincidence that union support is growing — people have seen such inequity and its residual impacts play out in their own lives. When unions were being decimated across other industries, WGA strikes were a model for how others could fight back against unbounded greed. Today’s screenwriters can lead again by helping us all write a better script for equitable labor action in the face of tomorrow’s ever-changing labor landscape.

Update: On Sept. 27, 2023, this essay was updated to account for the tentative labor agreement reached between the screenwriters’ union and Hollywood studios. Most substantively, it notes that any negotiated provisions are not permanent or long-term, lasting only as long as the agreement is approved and remains in place.

The Bumpy Road Toward Global AI Governance

Just about two and a half years ago, artificial intelligence researchers from Peking University in Beijing, the Beijing Academy of Artificial Intelligence and the University of Cambridge released a fairly remarkable paper about cross-cultural cooperation on AI ethics that received surprisingly little attention beyond the insular world of academics who follow such things. Coming to a global agreement on how to regulate AI, the paper argues, is not just urgently necessary, but notably achievable. 

Commentaries on the barriers to global collaboration on AI governance often foreground tensions and follow the assumption that “Eastern” and “Western” philosophical traditions are fundamentally in conflict. The paper, also published in Chinese, takes the unconventional stance that many of these barriers may be shallower than they appear. “There is reason to be optimistic,” according to the authors, since “misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed.” 

The narrative of a U.S.-China “AI arms race” sounded jingoistic and paranoid just a few years ago. Today, it is becoming institutionalized and borne out in policy in both countries, even as there has been growing recognition among researchers, entrepreneurs, policymakers and the wider public that this unpredictable, fast-growing and multiuse set of technologies needs to be regulated — and that any effective attempt to do so would necessarily be global in scope. 

So far, a range of public bodies, civil society organizations and industry groups have come forward with regulatory frameworks that they hope the whole world might agree on. Some gained traction but none have created anything like an enforceable global settlement. It seems possible that rivalry and suspicion between two great powers and their allies could derail any attempt at consensus. 

Possible — but not inevitable. 

Getting policymakers from China and the U.S. around a table together is just the largest of many hurdles to a global agreement. Europe is likely to play a decisive role in shaping discussions. Though the EU is an ideological ally of the U.S., the two differ significantly on strategic aims regarding AI regulation, with the U.S. prioritizing innovation and the EU risk minimization.

More complex still, any global settlement on AI regulation that genuinely aspires to mitigate the negative consequences of this new technology must account for perspectives from regions often underrepresented in global discussions, including Africa, the Caribbean and Latin America. After all, it is overwhelmingly likely that the Global South will shoulder the brunt of the downsides that come with the age of AI, from the exploitative labeling jobs needed to train LLMs to extractive data mining practices. 

“Despite a thaw in the rivalry between Washington and Beijing remaining a distant prospect, there are still opportunities for dialogue, both at multilateral organizations and within epistemic communities.”

A global settlement on AI ethics principles has clear advantages for all, since the effects of a transformational general-use technology will bleed across national and geographical boundaries. It is too far-reaching a tool to be governed on a nation-by-nation basis. Without coordination, we face a splinternet effect, wherein states develop and protect their technological systems to be incompatible with or hostile to others. 

There are immediate dangers of technologists seeking an advantage by releasing new applications without pausing over ethical implications or safety concerns, including in high-risk fields such as nuclear, neuro and biotechnologies. We also face an arms race in the literal sense, with the development of military applications justified by great-power competitions: The principle of “If they’re doing it, we’ve got to do it first.” 

With stakes this high, there is — superficially at least — widespread goodwill to find common ground. Most national strategies claim an ambition to work together on a global consensus for AI governance, including policy documents from the U.S. and China. A paper released by the Chinese government last November called for an “international agreement” on AI ethics and governance frameworks, “while fully respecting the principles and practices of different countries’ AI governance,” and one of the strategic pillars of a Biden administration AI research, development and strategy plan is “international collaboration.” 

There are some prime opportunities to collaborate coming up this year and next, like the numerous AI projects under the U.N.’s leadership and next year’s G7, which Giorgia Meloni, the Italian prime minister and host, suggested would focus on international regulations of artificial intelligence. This July, the U.N. Security Council held its first meeting dedicated to the diplomatic implications of AI, where Secretary-General António Guterres reiterated the need for a global watchdog — something akin to what the International Atomic Energy Agency does for nuclear technology.

Yet the disruptive influence of fraught relations over everything from the war in Ukraine to trade in advanced technologies and materials shows no sign of ending. U.S. politicians frequently and explicitly cite Chinese technological advancements as a national threat. In a meeting with Secretary of State Antony Blinken this June, top Chinese diplomat Wang Yi blamed Washington's "wrong perception" of China as the root of their current tensions and demanded the U.S. stop "suppressing" China's technological development.

Which is why the first of four arguments from Seán ÓhÉigeartaigh, Jess Whittlestone, Yang Liu, Yi Zeng and Zhe Liu — that these problems are surmountable and a near-term settlement on international AI law is achievable — is so important. In times of geopolitical tension, academics can often go where politicians can’t. There are precedents for epistemic communities from feuding nations agreeing on shared solutions to mitigate global risks. “You can look back at the Pugwash Conference series during the Cold War,” ÓhÉigeartaigh told me. “There were U.S. and U.S.S.R. scientists sharing perspectives all the way through, even when trust and cooperation at a government level seemed very far away.” 

“Differences in ideas about governing ethics across cultural and national boundaries are far from insurmountable.”

There is evidence that Chinese and U.S. academics working on AI today are keen to cooperate. According to Stanford University’s 2022 AI index report, AI researchers from both countries teamed up on far more published articles than collaborators between any other two nations, though such collaborations have decreased as geopolitical tension between the two countries has increased. Such efforts, meanwhile, took place even amid threats to the lives and livelihoods of Chinese researchers living or visiting the U.S. — in 2018, the Trump administration seriously debated a full ban on student visas for Chinese nationals, and in 2021, according to a survey of nearly 2,000 scientists, more than 42% of those of Chinese descent who were based in the U.S. reported feeling racially profiled by the U.S. government. 

Although technology occupies a different place in Chinese society, where censorship has dominated since the early days, than in the U.S., which is still somewhat aligned with Californian libertarians and techno-utopians, ÓhÉigeartaigh and his colleagues' second argument is that these differences aren't so great as to leave no values held in common.

Western perceptions of the internet in China are frequently inaccurate, which can make invisible certain points of common ground. Take, for instance, the issue of data privacy. Many in the West assume that the Chinese state, hungry to monitor its citizens, allows corporations free rein to harvest users' information as they please. But according to China's Artificial Intelligence Industry Alliance (AIIA), a "pseudo-official" organization that includes top tech firms and research organizations, AI should "adhere to the principles of legality, legitimacy and necessity when collecting and using personal information," as well as "strengthen technical methods, ensure data security and be on guard against risks such as data leaks." In 2019, the Chinese government reportedly banned over 100 apps for user data privacy infringements.

In the U.S., meanwhile, policies on data privacy are a mess of disparate rules and regulations. There is no federal law on privacy that governs data of all types, and much of the data companies collect on civilians isn’t regulated in any way. Only a small handful of states have comprehensive data protection laws.

This brings us to the third reason why a global settlement on AI regulation remains possible. Given the complexities of governing a multi-use technology, AI governance frameworks lean toward philosophical concepts, with similar themes emerging time and again — “human dignity,” “privacy,” “explainability.” These are themes that both countries share. 

As China’s AIIA puts it: “The development of artificial intelligence should ensure fairness and justice … and avoid placing disadvantaged people in an even more unfavorable position.” And the White House’s draft AI Bill of Rights reads, in part, that those creating and deploying AI systems should “take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.” 

This is not to say that incompatibilities genuinely rooted in divergent philosophical traditions can be wished away, nor that shallow accords are any foundation for lasting agreements. Rather, the point is that there is often scope to agree on specific statements, even while arriving at them from different places — and perhaps even while disagreeing on abstract principles. 

Here again, academia has a valuable role to play. Scholars are working to understand how different ethical traditions shape AI governance and uncover areas where consensus can exist without curtailing culturally divergent views. Sarah Bosscha, a researcher who studies how European and Chinese AI legislation differs, told me that with respect to the EU, the greatest point of divergence is the absence of a parallel to the Confucian value of “harmony” — often interpreted as the moral obligation of an individual to the flourishing of their community. In China, following norms derived from Confucius, a person is not primarily an individual, but a family member, part of a social unit. This order of prioritization may clearly come into conflict with the supremacy in Europe (and even more so in America) of the individual. 

But as Joseph Chan at the University of Hong Kong has argued, these are not mutually exclusive values. Chinese Confucianism, by his reading, can support a number of context-independent human rights. And the Universal Declaration of Human Rights contains collectivist elements that echo the Confucian value of harmony: Human beings "should act towards one another in a spirit of brotherhood" (Article 1) and have "duties to the community" (Article 29).

This overlap is borne out in policy documents, with a 2019 EU document outlining principles that emphasize community relations and containing a section on "nondiscrimination" against minorities. According to Bosscha, "the European Union would do well to name 'harmony' in its regulations and acknowledge its own investment in this value."

The Beijing AI Principles (2019), meanwhile, echo the language of human rights law, stating that "human privacy, dignity, freedom and rights should be sufficiently respected." Of course, China's deployment of AI and surveillance technologies against minorities reveals that this commitment is far from fully implemented.


A fourth line of reasoning in the paper by ÓhÉigeartaigh and his colleagues is that a noteworthy amount of the mistrust between the West and East is due to a “rich history of misunderstandings.” This is due, at least in part, to an asymmetrical language barrier. Scholars and journalists in China often have a strong command of English, the lingua franca of Western academia, and can access the work of their counterparts. Meanwhile, those working in the West rarely master Chinese languages. As such, knowledge-sharing often only flows one way, with English-speaking scholars and politicians alike almost entirely reliant on translations to access policy documents from China.

Political language is usually nuanced — its subtleties rarely translatable in full. This is especially true in China. Translations of relatively ambiguous statements from Beijing on AI law have caused some high-stakes misunderstandings. For example, a 2017 Chinese AI development plan was largely interpreted by Western commentators as a statement of intent toward technological domination. This was partly thanks to a translation that was worded as a declaration of China becoming “the world’s primary AI innovation center” by 2030. However, according to Fu Ying, a former Chinese diplomat, that was a misreading of the intent of the plan. “What China wants to achieve,” she wrote, “is to become a global innovative center, not ‘the’ only or exclusive center” — clearly a gentler goal. 

But apprehension based on the translation of the Chinese plan reverberated in the American tech community nonetheless. As Eric Schmidt, a former executive chairman of Google parent Alphabet, put it at a summit in 2017: “By 2030, they will dominate the industries of AI. Just stop for a sec. The [Chinese] government said that.” 

“There is already an overlap in AI ethics frameworks between the two nations. And debunkable myths can inflate U.S. fears of China’s technology strategies.”

For ÓhÉigeartaigh, the reason global efforts to create shared regulation on AI are so vulnerable to derailment lies in asking who stands to benefit from crystallizing the narrative of a “U.S.-China tech race” from rhetoric to policy. “If there is a race,” he told me, “it’s between U.S. tech companies. I am concerned that the perspective of ‘needing to stay ahead of China’ is used to justify pushing ahead faster than would be ideal.”

In his view, many technologists are deliberately amplifying U.S.-China “race” rhetoric to justify releasing software as fast as possible, cutting corners on safety checks and ethical considerations. 

Schmidt chaired the National Security Commission on Artificial Intelligence and is a highly influential proponent of the "race" against China viewpoint. For years, Schmidt has pushed the Pentagon to procure smarter software and invest in AI research while maintaining a strong preference for technology deregulation. Meanwhile, his venture capital firm has invested in companies that won multimillion-dollar contracts from federal agencies.

According to AI Now’s 2023 report, the crux of the problem is that AI products and the businesses behind them are increasingly perceived as national assets. The continued global dominance of America’s Big Tech companies (Google, Apple, Facebook, Amazon and Microsoft) is tied to U.S. economic supremacy. Any attempt to set limits on what those companies can develop or the data they can use risks ceding vital ground to Chinese companies, which are often presumed — falsely — to operate in a regulatory vacuum. 

This argument has proved remarkably influential, particularly with regard to privacy regulations. In 2018, shortly after the Cambridge Analytica scandal, Mark Zuckerberg applied this line of reasoning to warn against strengthening data rights. In particular, he stated at a Senate hearing that implementing certain privacy requirements for facial recognition technology could increase the risk of American companies "fall[ing] behind Chinese competitors." Just last year, the executive vice president of the U.S. Chamber of Commerce argued that data privacy guidelines outlined within the AI Bill of Rights, intended to bring the U.S. closer to the EU's GDPR, were a bad idea when the U.S. is in a "global race in the development and innovation of artificial intelligence." Needless to say, conflating deregulation with a competitive edge against China doesn't bode well for attempts to cooperate with its policymakers to agree on global regulations.

Fortunately, the U.S. government is not entirely batting on behalf of Big Tech. The Biden administration has taken clear steps to enforce competition with anti-trust laws — against the wishes of tech monopolists. A 2021 executive order declared that “The answer to the rising power of foreign monopolies and cartels is not the tolerance of domestic monopolization, but rather the promotion of competition and innovation by firms small and large, at home and worldwide.” 

So, despite a thaw in the rivalry between Washington and Beijing remaining a distant prospect, there are still opportunities for dialogue, both at multilateral organizations and within epistemic communities. As academics have shown, differences in ideas about governing ethics across cultural and national boundaries are far from insurmountable. There is already an overlap in AI ethics frameworks between the two nations. But unfortunately, durable myths continue to inflate U.S. fears of China’s technology strategies. 

Though the road to agreeing on a set of global ethical guidelines between rivals may be bumpy, there's nothing inevitable about the future direction this technological rivalry will take.

The post The Bumpy Road Toward Global AI Governance appeared first on NOEMA.

Tomorrow’s Democracy Is Open Source https://www.noemamag.com/tomorrows-democracy-is-open-source Tue, 12 Sep 2023 17:35:13 +0000 https://www.noemamag.com/tomorrows-democracy-is-open-source The post Tomorrow’s Democracy Is Open Source appeared first on NOEMA.

Much time today is spent discussing the threats technology poses to democracy rather than the opportunities. It’s not hard to discern why: Social media has created new vectors for bad actors to spread disinformation that rattles around uncriticized within echo chambers; algorithms increasingly grab our attention by focusing on the most extreme messages; and AI threatens to supercharge both of these phenomena with a dash of embedded bias. These problems have provoked numerous, yet so far largely fruitless, calls for regulation of these technologies.

To be sure, technology needs more regulation. But instead of treating technology only as degrading democracy and civic participation, we can also cultivate it as a positive tool that can enhance democracy. Imagine, for example, that instead of just voting every two or four years, or even participating in the occasional citizens' assembly or serving in government via sortition — effectively a form of jury duty for policy-making — every constituent was able to express their political opinions on every public policy topic, continuously, and could do so based on good information? Such a system wouldn't be just a tweak to the existing democratic practices or a return to time-honored methods; rather, it would represent a revolution in the very nature of democracy itself.

That such a renewal is needed is scarcely in doubt. The anger that suffuses so much politics today is directly connected to the sense among constituents that their governments are not responsive to their concerns. Many democratic theorists and reformers today talk about finding new ways to encourage political “participation without populism.”

The need to improve the responsiveness of our democratic systems of government could scarcely be more urgent given the massive policy-making challenges —  from greening our energy systems to managing the integration of AI into our economies — that we will face in the coming decades. A technologically enabled form of continuous democratic engagement offers the promise of a government that is simultaneously more effective, more efficient and more directly responsive to the will of the public.

The Wilde Conundrum

“The trouble with socialism,” Oscar Wilde is often said to have remarked, “is that it takes too many evenings.” What Wilde meant was that any form of participatory governance requires people to invest free time they might use for more personally urgent or entertaining purposes than digging into the details of local zoning laws, public school administration or utility regulation. For many citizens, the prospect of being expected to be continuously well-informed and engaged in government decision-making sounds a bit like a mandate to exercise more and eat less. Sure, it might be good for you, but it’s not much fun.

Since the dawn of mass-franchise democracy, policymakers and politicians have struggled with how to engage constituents who don’t find politics a worthwhile use of their time. When the political scientist Samuel Popkin popularized the phrase “low information” voter in 1991, he underscored that for most voters, making “low information” decisions about who to vote for was ultimately “rational,” since the amount of personal effort required to understand the nuances between different candidates or policy choices, multiplied by the low likelihood that this effort would make a significant difference in one’s opinion, was simply not a good use of one’s time. It was more worthwhile, Popkin argued, for voters to rely on “heuristics,” or their own more instinctive judgments.

For example, rather than invest time and energy trying to differentiate politicians’ policies, voters could simply observe Bill Clinton’s penchant for McDonald’s and know he was a man of the people compared to the patrician George H. W. Bush, who reportedly did not know what a barcode was. At the limit of this logic, economist Anthony Downs noted in 1957 that it is “irrational” to vote at all, given the vanishingly low likelihood that your vote will decisively impact the outcome. In other words, what these social scientists are observing is that the amount of effort a voter needs to put in, in order to make a difference in democratic practice, is often much larger than the likelihood that any such effort will bear fruit.

Poor democratic engagement isn’t just about low-energy or low-information voters. It’s also a result of low-information political representatives: politicians who, even when they’re in good faith trying to represent the interests of their constituents, simply don’t know and have limited ways of finding out what their constituents really want.

“The need to improve the responsiveness of our democratic systems of government is especially urgent given the massive policy-making challenges — from greening our energy systems to managing the integration of AI into our economies — we will face in the coming decades.”

In particular, politicians of mesoscale jurisdictions — for example, cities with 50,000-500,000 people — often represent more people than they can possibly meet and know in person, but typically lack the resources to do formal polling. In Santa Monica, California, for example, each city councilmember represents around 13,000 constituents and makes roughly $21,000 a year, about the median for city councilmembers across Los Angeles County's 88 jurisdictionally distinct cities.

In cities of this size, there are rarely resources for polling the public policy questions that come before the city council. So councilmembers rely on heuristics — like the clamoring of people at city council or school board meetings — to determine the preferences of their constituents, even though everyone knows how unrepresentative these meetings usually are. As a result, politicians in these mesoscale jurisdictions are typically almost as ignorant of their constituents’ true aggregate preferences as the constituents are of the stakes (let alone the details) of the policy questions under debate.

Here is where technology can help.

From Policy Code To (Iterated) Software Code

Perhaps the biggest hurdle to evolving the relationship between constituents and active government is the sheer magnitude of continuous information it generates. Overwrought legal language, byzantine proceduralism, and frustratingly inaccessible documentation all render participation nearly impossible for most working people.

This very complexity, however, represents an opportunity for technology to provide tools and platforms to transform the nature of the democratic process. Through summary synthesis, data visualization and simulation of legislative consequences, we can build a system of political engagement that provides real-time interrogable, explorable models of the inputs and outputs that make up the legislative decision-making process. We can use software code to enable new forms of participation in the creation of our legal code. 

In many ways, the process of generating civil code is similar to that of generating software code. Some of the tools that software engineering has evolved to help developers understand and contribute to giant fast-evolving codebases might now be adapted for the domain of legislative ordinances and the generation of civil code. For instance, a technique called continuous integration provides an orchestrated set of roles and intuitive participatory interfaces that enable multiple software developers to contribute simultaneously to a fast-evolving code base.

Such a technique can resolve conflicts and merge different versions of a functional document quickly and seamlessly. An analogous type of legislative adaptivity will be necessary as society confronts rapid discontinuities in the coming years, from climate instability to epidemics to the use of disruptive technologies themselves. To steer between the Scylla of authoritarianism and the Charybdis of decision-making sclerosis, we will need to remake our political processes so that they can dynamically integrate public opinions in a rapid decision-making context.
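
To make the analogy concrete, here is a minimal sketch, in Python and using only the standard library, of the kind of conflict check a continuous-integration-style codification pipeline might run on two proposed revisions of the same ordinance paragraph. The ordinance text, section number and naive merge rule are invented for illustration; a real system would need to handle insertions, deletions and legal formatting with far more care.

    import difflib

    # Hypothetical base paragraph of a municipal ordinance, plus two proposed
    # revisions that a CI-style process would need to reconcile before merging.
    BASE = [
        "Section 4.12: Sidewalk vendors must obtain an annual permit.",
        "Permits are issued by the city clerk.",
        "Permit fees are set by resolution of the council.",
    ]
    PROPOSAL_A = [
        "Section 4.12: Sidewalk vendors must obtain an annual permit.",
        "Permits are issued by the city clerk within 30 days of application.",
        "Permit fees are set by resolution of the council.",
    ]
    PROPOSAL_B = [
        "Section 4.12: Sidewalk vendors must obtain an annual permit.",
        "Permits are issued by the city clerk.",
        "Permit fees are waived for vendors earning under $20,000 per year.",
    ]

    def changed_lines(base, proposal):
        """Return the indices of base lines that a proposal modifies."""
        matcher = difflib.SequenceMatcher(None, base, proposal)
        changed = set()
        for tag, i1, i2, _, _ in matcher.get_opcodes():
            if tag != "equal":
                changed.update(range(i1, i2))
        return changed

    a_changes = changed_lines(BASE, PROPOSAL_A)
    b_changes = changed_lines(BASE, PROPOSAL_B)
    conflicts = a_changes & b_changes

    if conflicts:
        print("Conflict: both proposals edit base line(s)", sorted(conflicts))
    else:
        # Naive merge for this replacement-only example: start from B, then
        # apply A's changed lines on top.
        merged = list(PROPOSAL_B)
        for i in sorted(a_changes):
            merged[i] = PROPOSAL_A[i]
        print("\n".join(merged))

Run as written, the two proposals touch different lines, so both amendments survive in the merged text; had they edited the same line, the sketch would flag a conflict for a human clerk to resolve.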

Some practices in the legislative system could benefit from simple techniques included in the software development process. In municipal bodies, “codification” involves translating bills and ordinances voted on by the city council into changes in the civil code. Such changes to the law can be seen as, in effect, updates to civil society’s “operating system,” but are frequently opaque and difficult to track down.

In a continuously integrated code system, however, you could trace any passage to its origin "change order" — or vote, in this case — and discover the circumstances around how the piece of code came to be. Imagine a process for rich hypertext referentiality and graph visualization of the areas within the corpus of civil code that each ordinance or bill passage impacts, one that allows constituents to understand how, for instance, a section of the building code came to be and the circumstances that led up to any changes.
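
What that provenance record might look like in code is easy to sketch. Everything below, from the ordinance numbers to the minutes URL, is invented for illustration; it simply shows a paragraph of civil code carrying its own vote history, in the spirit of a version-control "blame" view.

    from dataclasses import dataclass, field

    # A paragraph of civil code that records every vote ("change order")
    # responsible for its current wording.
    @dataclass
    class ChangeOrder:
        ordinance: str      # e.g. "Ordinance 2745"
        vote_date: str      # ISO date of the council vote
        vote_record: str    # e.g. "4-3 in favor"
        minutes_url: str    # link to the published meeting minutes

    @dataclass
    class CodeParagraph:
        section: str
        text: str
        history: list = field(default_factory=list)  # newest ChangeOrder last

        def amend(self, new_text, change_order):
            """Record an amendment: update the text and append its provenance."""
            self.text = new_text
            self.history.append(change_order)

        def blame(self):
            """Return the change order responsible for the current wording."""
            return self.history[-1] if self.history else None

    # Illustrative use with invented values:
    para = CodeParagraph("8.04.020", "Quiet hours run from 10 p.m. to 7 a.m.")
    para.amend(
        "Quiet hours run from 11 p.m. to 7 a.m.",
        ChangeOrder("Ordinance 2745", "2023-05-09", "4-3 in favor",
                    "https://example.gov/minutes/2023-05-09"),
    )
    origin = para.blame()
    print(f"{para.section}: current text set by {origin.ordinance} "
          f"({origin.vote_record}, {origin.vote_date})")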

A crucial component of a democratic process of civil code development is that citizens get an opportunity to weigh in before a vote. In the United States, this general idea is so important that it is enshrined in the First Amendment to the U.S. Constitution under the right of citizens to petition the government.

“A technologically-enabled form of continuous democratic engagement offers the promise of government that is simultaneously more effective, more efficient and more directly responsive to the will of the public.”

In state and local government, this same principle is embodied in analog form when there are legally required “public comment” periods for proposed legislation. To date, however, there have been no serious efforts to automate the integration of constituent opinions into the civil code production process. But with large language models (LLMs), or artificial intelligence able to process mass amounts of data, this may be about to change radically.

Introducing ‘Open Insight’

To enable experimentation with dynamic representation of constituent preferences in a more real-time, legislative decision-making process, the Berggruen Institute is prototyping a set of open digital tools to help legislators and constituents understand civil issues and communicate more effectively. We’re calling this proposed platform “Open Insight,” and are designing it especially for use by the sorts of mesoscale municipal governments perhaps most in immediate need of such software-enabled constituent participation mechanisms.

The first part of this application toolset we will prototype is about aggregating and making sense of the history and future of decisions that a legislative body in a municipality has made or will make. City councils frequently publish meeting notes online. The tools we are developing create graph databases of this deliberative and legislative timeline by using machine learning to review such "meeting minutes" and to construct an ontology of the activities and subjects across the past and future of any changes to the civil code, yielding an interconnected map of the decisions made over the course of the municipality's history.
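
As a toy illustration of the shape of that graph, rather than the actual Open Insight tooling, the sketch below substitutes simple keyword matching for the machine-learning step; the minutes, ordinance numbers and topics are all invented.

    from collections import defaultdict

    # Stand-in "tagger": real tooling would use machine learning to extract
    # topics and entities from meeting minutes; keyword lookup plays that role
    # here so the structure of the resulting graph is visible.
    TOPIC_KEYWORDS = {
        "housing": ["zoning", "housing", "parcel", "development"],
        "transportation": ["bus", "bike lane", "transit", "parking"],
        "public safety": ["police", "fire", "emergency"],
    }

    minutes = [
        {"date": "2023-03-14", "ordinance": "Ordinance 2744",
         "text": "Council discussed zoning changes for the Pico parcel."},
        {"date": "2023-04-11", "ordinance": "Ordinance 2745",
         "text": "Approved funding for a protected bike lane on Main Street."},
    ]

    # Build an adjacency map: topic -> ordinances, ordinance -> meetings.
    graph = defaultdict(set)
    for m in minutes:
        graph[m["ordinance"]].add(("meeting", m["date"]))
        text = m["text"].lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                graph[topic].add(("ordinance", m["ordinance"]))

    # Query: everything the city has decided under the "housing" topic.
    for kind, node in sorted(graph["housing"]):
        print(f"housing -> {kind}: {node}")

Querying the "housing" node surfaces every ordinance and meeting the toy tagger has linked to that topic, which is the kind of traversal the real graph database would support at scale.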

The concept is to create a richly explorable, interconnected map of political action committee activities, the debates and decisions of policy-makers, the press around these decisions, as well as the public's response in various fora prior to and after any changes. Additionally, and perhaps most importantly, the tools will make it easy to see each ordinance or law that passed, how it changed the civil code, and thus to understand how it has operationally changed the way the city is run. For each paragraph of the civil code, there will be a history of how it came to be: when it was modified, by whom and what the context of the vote was at the time.

For example, if a decision about what the city should do with a newly acquired piece of land was coming before the city council, the application might present and synthesize to users the land use history of the specific site, the various proposals that had been laid out in the media, and what had been decided about similar parcels in nearby jurisdictions. The relevant context would need to be determined on a case-by-case basis, likely by civil servants, perhaps with assistance from LLMs.

An example of what such an open-source publicly available app might look like. Matt Irvine Brown for Noema Magazine

These interconnected graphs of legislative activity can also serve as input vectors for LLMs for the continuous transformation of any legislative code. LLMs like ChatGPT, Bard and others could potentially be made available as companion bots to help explore the legal and cultural context of past decisions and upcoming votes, and perhaps even help constituents draft new bills and ordinances based on precedent.
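
One hedged sketch of how such a companion bot might be fed: a handful of precedents retrieved from the decision graph are assembled into a prompt that an LLM could then answer. No model is actually called here, and the precedent records, wording and function name are invented for illustration.

    # Assemble a context-rich prompt from retrieved precedents. The records
    # below are invented; a real system would pull them from the decision graph.
    precedents = [
        {"ordinance": "Ordinance 2701", "year": 2019,
         "summary": "Allowed sidewalk dining with an annual permit."},
        {"ordinance": "Ordinance 2730", "year": 2022,
         "summary": "Extended sidewalk dining permits created during the pandemic."},
    ]

    def build_companion_prompt(question, precedents):
        """Compose a prompt that grounds the bot in retrieved precedents."""
        lines = ["You are a civic research assistant for city residents.",
                 "Relevant precedents from this municipality:"]
        for p in precedents:
            lines.append(f"- {p['ordinance']} ({p['year']}): {p['summary']}")
        lines.append(f"Question from a constituent: {question}")
        lines.append("Answer plainly, citing the precedents above where relevant.")
        return "\n".join(lines)

    print(build_companion_prompt(
        "What has the council previously decided about sidewalk dining?",
        precedents,
    ))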

Central to the inquiry of our prototyping is how the function and provenance of these tools can be made transparent and how they can be tested for accuracy, bias and misinformation. Solving these problems is perhaps the largest hurdle to the potential uptake of these tools as services offered by public bodies themselves.

The second part of our prototype stack uses this data graph to present a new type of continuous engagement experience for constituents. Our initial goal here is to create a lightweight and accessible mobile app for residents of a municipality that enables them to see what has been and will be voted on by the legislature, to track what they care about and signal their preferences on each issue in an aggregated and anonymous way to their political representative.
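
A minimal sketch of how such aggregated, anonymous signaling could work, with invented constituent IDs and a hypothetical per-deployment salt; a production system would need real identity verification and much stronger privacy protections than this.

    import hashlib
    from collections import Counter, defaultdict

    # Tallies are kept per ordinance, but only a one-way hash of each
    # constituent ID is stored (to prevent double signaling) and nothing ties
    # a position back to a person. IDs, salt and ordinance numbers are invented.
    SALT = b"per-deployment-secret"

    tallies = defaultdict(Counter)       # ordinance -> Counter of positions
    already_signaled = defaultdict(set)  # ordinance -> set of hashed IDs

    def signal(ordinance, constituent_id, position):
        """Record one constituent's position ('support' or 'oppose') once."""
        digest = hashlib.sha256(SALT + constituent_id.encode()).hexdigest()
        if digest in already_signaled[ordinance]:
            return False  # ignore duplicate signals from the same constituent
        already_signaled[ordinance].add(digest)
        tallies[ordinance][position] += 1
        return True

    signal("Ordinance 2745", "resident-001", "support")
    signal("Ordinance 2745", "resident-002", "oppose")
    signal("Ordinance 2745", "resident-001", "support")  # duplicate, ignored

    print(dict(tallies["Ordinance 2745"]))  # {'support': 1, 'oppose': 1}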

“Our initial goal here is to create a lightweight and accessible mobile app for residents of a municipality to see what has been and will be voted on by the legislature, to track what they care about and signal their preferences on each issue in an aggregated and anonymous way to their political representative.”

We hope to iterate on this experience as fast as possible and make it highly configurable to suit each unique political context and allow for experimentation with techniques like data visualization and machine learning. We are currently open to conversations with municipal legislators who want to participate in this collaborative prototyping and feedback process.

The semantic graph underlying the decision history tools allows the constituent to explore the timeline of past and future voting issues on ordinances 2744 and 2745, for example, by topic and to filter by specific political actor choices. Matt Irvine Brown for Noema Magazine

As a constituent exploring the issues, you will see how your councilmember voted on prior ordinances and can also signal your own position on any proposal, which will be aggregated into an anonymized data visualization that the councilmember can use as community input. There will need to be public discussions about whether the results should be sortable based on demographic data about the constituents, or if such data should be collected to begin with.

Matt Irvine Brown for Noema Magazine

On the one hand, without such data from the app, politicians might doubt the representativeness of the expressed opinions (though the current system is hardly representative as it is); on the other hand, demographic data, if sufficiently granular, might compromise the anonymity of the constituents. Likewise, different communities will have different preferences about whether the results should be made available only to the politicians or to the public at large.
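
One commonly proposed way to release such breakdowns without exposing individuals, which is an assumption of this sketch rather than anything the Open Insight designers have committed to, is differential privacy: add calibrated noise to each published count so that no single constituent's participation can be inferred. The counts below are invented.

    import random

    # Toy differential-privacy sketch: each released count gets Laplace noise
    # so individual participation cannot be inferred from the published
    # figures. Epsilon controls the privacy/accuracy trade-off.
    def laplace_noise(scale):
        """Draw Laplace(0, scale) noise as the difference of two exponentials."""
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def private_count(true_count, epsilon=1.0, sensitivity=1):
        """Release a count with Laplace noise of scale sensitivity / epsilon."""
        noisy = true_count + laplace_noise(sensitivity / epsilon)
        return max(0, round(noisy))

    true_breakdown = {"supports, under 35": 412, "supports, 35 and over": 655}
    released = {group: private_count(n) for group, n in true_breakdown.items()}
    print(released)  # close to the true counts, but each figure is perturbed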

This is a potential feedback dashboard that legislators could consult, which graphs constituent input. Matt Irvine Brown for Noema Magazine

A New Model For Democracy

Over time, we can imagine Open Insight might become good enough at learning about constituent preferences that it could provide recommendations to them about how they might feel about issues being debated in their city council. This vision opens the wild possibility of each of us having a "personal political avatar," that is, a continuously updated digital representation of our political preferences, based on our formally expressed and, perhaps, implicitly understood political opinions.

This political avatar might “advise” you as to your likely thoughts regarding an upcoming vote on a matter before your city council and also explain to you why this is. Of course, upon seeing its rationale, you might decide that in fact this is not your opinion on a particular matter of public policy and choose to express the opposite opinion. Your political avatar would then update its understanding of your preferences. Over time, your avatar would learn more about you, and its ability to anticipate your opinions would become increasingly accurate and thus labor-saving.
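
A stripped-down sketch of that update loop, with invented issue features and a simple perceptron-style rule standing in for whatever model a real avatar would use:

    # Issues are represented as feature vectors, a weight vector predicts
    # support or opposition, and the weights shift whenever the constituent
    # overrides the prediction. Features and learning rate are invented.
    FEATURES = ["raises_taxes", "expands_housing", "adds_bike_lanes"]

    def predict(weights, issue):
        score = sum(w * issue[f] for w, f in zip(weights, FEATURES))
        return "support" if score >= 0 else "oppose"

    def update(weights, issue, actual, learning_rate=0.5):
        """Nudge weights toward the constituent's stated position."""
        direction = 1 if actual == "support" else -1
        return [w + learning_rate * direction * issue[f]
                for w, f in zip(weights, FEATURES)]

    weights = [0.0, 0.0, 0.0]
    ballot = {"raises_taxes": 1, "expands_housing": 1, "adds_bike_lanes": 0}

    guess = predict(weights, ballot)   # the avatar's recommendation
    actual = "oppose"                  # the constituent disagrees and overrides
    if guess != actual:
        weights = update(weights, ballot, actual)

    print(predict(weights, ballot))  # now 'oppose': the avatar has adapted

The point of the sketch is the feedback step: the avatar only changes when the constituent overrides it, which is exactly the correction loop described above.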

Needless to say, this last vision of a radically new form of digitally enabled democratic participation raises a series of potentially alarming new challenges for how we need to regulate our democratic practices, including questions that are simultaneously technical and ethical. Who will control and vet this codebase? Should the preferences expressed by constituents through the app be made public? What sort of obligation should they impose on the elected representatives who are ultimately responsible for the decisions? How do we ensure the privacy of those expressing their preferences through the Open Insight app?

The privacy of the formal ballot in voting is a venerable tradition in most democracies — how does that translate into online expression or pre-voting, during the legislative process? Who will ensure both the accuracy of the anonymized data and that it isn’t breached or altered in some way? These are all questions that are central to any form of democratic practice, but which will only be intensified in the technologically enhanced form of democracy we are imagining here.

Likewise, if the app includes a recommendation engine that is continuously learning each constituent’s specific political and policy preferences, how do we ensure that the algorithm is itself transparent and that users do not come to over-rely on these recommendations, giving up agency to the app even as they regain it from their representatives? Should the constituents’ “votes” be binding on policymakers or simply serve as recommendations for them to consider?

To begin with, we believe that there should be recommendations for elected officials to be able to gauge public opinion, but if the app becomes transparently governed and effective enough, possibilities for political disintermediation might arise, such that technology could enable a new form of direct democracy.

“Built and deployed as open source public code, an app like Open Insight can be designed to ensure both transparency and guardrails against the worst outcomes.”

Such an app, if deployed at scale, might change democracy in more profound ways. Imagine, for example, if the app achieved the following three adoption milestones: (a) 100% adoption by constituents in a given jurisdiction; (b) that constituents are so satisfied with how well their “personal political avatar” represents their interests that they automatically hit “approve” on all recommendations; and (c) political representatives become so confident that these expressions of opinion accurately represent the Rousseauvian “general will” that they automatically defer to them.

Taken together, you end up with a political decision-making process that is “radically democratic” (in the sense of being responsive to the general will) but paradoxically also one in which humans have been completely removed from the decision-loop of the policy-making process. In this scenario, policymakers would merely propose laws, submit them for evaluation by their stakeholders’ personal political avatars and then implement what gets approved. In other words: an automated form of direct democracy.

Whether you consider such an endgame a dream or a nightmare, however, is not in itself an argument against building such an application. Rather, it is an argument in favor of building it and deploying it as public code. It is next to inevitable that something like this will eventually be built; the choice, societally, is whether it is built and owned by a private vendor whose incentives will likely be more profit-driven and proprietary in nature, or whether it is ultimately owned by all of us.

Built and deployed as open-source public code, an app like Open Insight can be designed to ensure both transparency and guardrails against the worst outcomes. What might such guardrails look like? They might include requiring users to review relevant materials and hear from both proponents and opponents of a particular piece of legislation before expressing their opinions, or coupling such software-powered harvesting of opinions to other participatory democratic reforms such as sortition and citizens' assemblies.

Part of the reason we are looking to engage with software domains that address the core functions of democracy is to underscore the importance of literacy in and governance of such potentially transformative tools, which are now being generally referred to as digital public infrastructure.

We believe that software that serves the public interest deployed as a public service — in the way that Open Insight might be — should be conceived of as public code, an approach to the production of software that is not only situated in traditional open source principles, but further mandates explainability, good governance, sustainability, and accountability, among other criteria spelled out in the Standard for Public Code.

This standard, and an accompanying approach to collaborative codebase stewardship, have been developed by the Foundation for Public Code, a Netherlands-based, nonprofit association that helps public administrations around the world build open digital infrastructure together.

International collaboration on the development of such large-scale open Digital Public Infrastructure has already yielded projects like MOSIP, India’s digital identity service that gives nearly one hundred million registered users access to government services, and LEOS, a project of the European Commission to provide collaborative online editing of proposed legislation that is used in the European Parliamentary process, as well as among member states.

The Digital Public Goods Alliance, which is endorsed by the United Nations, identifies a whole range of open digital tools and platforms that can be deployed by cities and states to advance sustainable development goals, like operating school systems or building out transit networks.

As we develop prototype projects like Open Insight, we hope to eventually convene an ever-expanding network of cities and states working collaboratively to test and build highly functional systems and tooling. This ecosystem of open distributed procurement will create normative standards for processes like those of continuous political participation. An institution that wants to offer a Digital Public Infrastructure solution can produce a “reference implementation” of a proposed tool or system — that is, a program that implements all requirements from a corresponding specification and thereby serves as a model for others.

This initial implementation can then be iteratively prototyped in open collaboration with implementing partners. If successful, this solution can become public code, with an open license, a collaboratively generated governance model and a technical roadmap of potential improvements that is continually implemented, maintained and evolved by a thriving community of public organizations.

“Technological tools like continuous political participation may help us become more responsive to fundamental societal questions as a culture, without compromising the core values that define our civilization.”

Moving from a democratic system where voting occurs once every couple of years after which decision-making is completely delegated to the elected officials — essentially our current model — to one based on continuous technologically enabled engagement with the nuts and bolts of government decision-making, represents a radical revision to our standard assumptions not just about how democracy works but about what democracy even is.

The model of democracy envisioned here involves going far beyond the vision of democracy promoted, for example, by the Open Society Foundations (OSF), that is, a vision of democracy centered on political parties competing in free and fair elections, with peaceful transitions of power from one set of elected representatives to another. That OSF version of democracy is of course vitally important, and the vision we propose here does not displace it, but rather supplements and enriches the traditional model of representative electoral democracy with a technologically enabled system for the continuous expression of political sentiments and opinions.

As we head deeper into the intensely nonlinear 21st century, with its rapid onset of epidemics and climate catastrophes, it is clear that human society and governance must become adaptive at a higher rate than we can currently achieve. We must begin to understand the possibilities of technological tools like continuous political participation to explore how to become more responsive to fundamental societal questions as a culture, without compromising the core values that define our civilization.

How can we give the Oscar Wildes of today back their evenings while keeping our institutions and planet intact? We can do this by using the process of prototyping participatory software itself as a site for reimagining practices of democratic deliberation. In this way, software design methods based on iterative experimentation, collaborative understanding, and continuous integration of multiple points of view can serve as a model for a dynamic new concept of democracy.

The post Tomorrow’s Democracy Is Open Source appeared first on NOEMA.

What AI Teaches Us About Good Writing https://www.noemamag.com/what-ai-teaches-us-about-good-writing Tue, 25 Jul 2023 15:44:25 +0000 https://www.noemamag.com/what-ai-teaches-us-about-good-writing The post What AI Teaches Us About Good Writing appeared first on NOEMA.

As soon as I sit down to write, I feel compelled to scrub my bathtub and reorganize my filing cabinet — the most tedious chores suddenly become more appealing than the task at hand. Writing can feel so daunting that we’ve invented the term writer’s block to describe the unique sensation of its challenge, and we debate whether the ability to write well is learned or simply innate. The work requires long stretches of intense focus and undivided attention, and doing it well usually involves a prolonged process of revision. For many of us, writing feels like one of the most burdensome activities we can do.

Generative AI tools like ChatGPT offer the seductive possibility that we can optimize this laborious process. But while it can clearly optimize the time and effort of writing, ChatGPT cannot necessarily optimize writing quality. The program produces highly competent prose that usually passes as human-generated, but so far, the quality of its writing — beyond the novelty of being authored by an algorithm — is mostly unremarkable. 

At the University of California, Los Angeles, where I teach writing, the common sentiment among faculty is: “Sure, ChatGPT can write — but it can’t write well.” Some professors caution students against using the tool by appealing to their egos: “You could use AI to cheat on your essay, but do you really want a C+?” 

Others, recognizing that AI tools will characterize the working world into which students will graduate, are beginning to allow their use in constrained ways, framing them as automated writing tutors or advanced grammar-checking tools. But even AI enthusiasts tend to advise students to maintain authorial control by editing any AI-generated output for accuracy, style and sophistication. 

The flat, conventional feel that characterizes most AI-generated writing stems from the predictive nature of the algorithm. Trained on vast databases of human texts, from books to articles to internet content, programs such as ChatGPT, Bard, Bing, and Claude function like sophisticated autocomplete tools, identifying and predicting phrase patterns, which makes their output feel somewhat predictable, too. 
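
A toy next-word predictor makes the point, though it is only a stand-in: real LLMs use neural networks trained on vast corpora, not bigram counts, and the miniature corpus below is invented. Trained to continue text with the most common next word, even this crude version gravitates by construction toward the most familiar phrasing it has seen.

    from collections import Counter, defaultdict

    # A tiny bigram "autocomplete": count which word follows which, then
    # greedily continue a phrase with the most common next word.
    corpus = (
        "it was a dark and stormy night . "
        "it was a bright and sunny morning . "
        "it was a dark and quiet evening ."
    ).split()

    next_words = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current][following] += 1

    def autocomplete(word, length=6):
        """Greedily extend a phrase with the most common next word."""
        out = [word]
        for _ in range(length):
            if not next_words[out[-1]]:
                break
            out.append(next_words[out[-1]].most_common(1)[0][0])
        return " ".join(out)

    print(autocomplete("it"))  # "it was a dark and stormy night"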

But does predictable writing necessarily mean bad writing? When we talk about good writing, what exactly do we mean? As we explore new applications for large language models and consider how well they can optimize our communication, AI challenges us to reflect on the qualities we truly value in our prose. How do we measure the caliber of writing, and how well does AI perform?

In school, we learn that good writing is clear, concise and grammatically correct — but surely, it has other qualities, too. Perhaps the best writing also innovates in form and content; or perhaps it evokes an emotional response in its readers; or maybe it employs virtuosic syntax and sophisticated diction. Perhaps good writing just has an ineffable spark, an aliveness, a know-it-when-you-see-it quality. Or maybe good writing projects a strong sense of voice. 

But then, what makes a strong voice, and why does ChatGPT’s voice so often fall flat?

The Value Of Human Error

“The Elements of Style,” the classic reference book on writing by William Strunk Jr. and E.B. White, lays out a series of concrete rules. To write well, the authors say, you should abide by certain conventions, such as grouping your sentences into single-topic paragraphs. You should adhere to certain grammatical rules, like, “Do not join independent clauses by a comma.” You should “omit needless words” and write in an efficient, organized, streamlined manner. 

These rules take effort for any human writer — we all miss the occasional comma splice, use a few more words than necessary or bury our main point in the middle of a paragraph. ChatGPT, by comparison, rarely makes rhetorical moves that stray from Strunk and White’s conventions unless instructed to do so, and the speed with which it spews forth efficient, grammatically correct sentences is impressive, unsettling and perhaps mildly humiliating to us error-prone human writers. For teachers trying to catch cheating students, the total absence of typos and grammatical flubs is often what raises suspicions. 

We seem to tolerate and even expect a certain amount of idiosyncrasy in our writing, and the conventions themselves can be murky and variable — the Oxford comma, for instance, maintains a devoted cult of enthusiasts even while some style guides discourage its use, and languages like African American Vernacular English have their own coherent grammatical structures that differ from those of so-called standard American English. Conventions can also evolve over time — we now commonly treat "they" as a singular pronoun when a short time ago it was exclusively plural.

“As we explore new applications for large language models and consider how well they can optimize our communication, AI challenges us to reflect on the qualities we truly value in our prose.”

Writing that consistently adheres to convention is effective because its predictability makes it easy to read. If you expect to find the main point of a paragraph in its opening, you can read faster than if you had to spend time hunting for it.

But simply abiding by the rules doesn't make excellent writing — it makes conventional, unremarkable writing, the kind usually found in business reports, policy memos and research articles. In his review of the AI-generated novel "Death of an Author," Dwight Garner describes the prose as having "the crabwise gait of a Wikipedia entry." Even when a user prompts ChatGPT to include specific grammatical errors or to stray from certain norms, its writing tends to carry a certain flatness. By design, the program regresses to a rhetorical median, its deviations mechanical whereas ours are organic.

That’s not to say that convention flattens prose. In fact, convention lies at the root of much of the best writing — it’s rare to see acclaimed texts that stray dramatically from grammatical and stylistic norms. 

Structural convention also underlies much of what we call good writing. Most prize-winning literature innovates within classic story arcs: Aristotle’s three-act structure (beginning, middle and end); Freytag’s five-stage structure (exposition, rising action, climax, falling action and resolution); or a screenwriter’s six categories of dramatic conflict (conflict with self; with others; with society; with nature; with the supernatural; and with the machine).

Indeed, the fact that AI, which is trained to detect and replicate underlying patterns in our writing, can produce such coherent prose is a testament to just how much we rely on convention, both at the sentence and structural level. 

Experimental works that abandon traditional narrative structures can be critically successful but tend not to receive popular readership — for instance, Jon Fosse's "Septology," a seven-volume, 800-plus-page novel written as a single sentence, was a finalist last year for the International Booker Prize and the National Book Award but has only a few reviews to date on Amazon. We tend to admire writing whose predominant feature is experimentalism, but readability seems to matter to us, too — how great can a piece of writing be if few have the capacity or inclination to finish it?

Using formulaic elements to produce language doesn't necessarily mean the writing will sound formulaic — in fact, constraints can be useful creative tools. The Oulipo poets use formulas to innovate in their writing, applying algorithms like "snowball" (increasing each line length by one word), "lipogram" (avoiding words that use specific letters), and "n+7" (replacing every noun in a text with the noun appearing seven entries away in the dictionary) to generate writing that often feels surprising and fresh.
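
These procedures are simple enough to state as code. The sketch below implements a lipogram check and the "n+7" substitution; the tiny word list is a stand-in for the full dictionary a real n+7 exercise would require.

    def is_lipogram(text, banned_letter="e"):
        """True if no word in the text contains the banned letter."""
        return banned_letter.lower() not in text.lower()

    # A toy dictionary standing in for the real one an n+7 exercise would use.
    MINI_DICTIONARY = [
        "cat", "city", "cloud", "dog", "door", "dream", "field", "fire",
        "garden", "house", "light", "moon", "night", "river", "road", "stone",
    ]

    def n_plus_7(noun, offset=7):
        """Replace a noun with the noun seven entries later in the dictionary."""
        if noun not in MINI_DICTIONARY:
            return noun
        i = MINI_DICTIONARY.index(noun)
        return MINI_DICTIONARY[(i + offset) % len(MINI_DICTIONARY)]

    print(is_lipogram("A cool wind blows across distant hills"))  # True
    print(is_lipogram("The evening breeze settles"))              # False
    print(n_plus_7("dog"))                                        # "light"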

Oddly, ChatGPT is not very good at producing writing under Oulipian constraints — it failed to generate correct responses to all of these formulas. In response to my prompt, “Write a sentence that doesn’t use any words containing the letter “E,” it wrote:

Sure! Here’s a sentence without the letter “E”:

“The big brown dog ran swiftly through the grassy field.” 

In this sentence, none of the words contain the letter “e.”

The overly enthusiastic “Sure!” and the affirmatory closing line make this sample feel like it was written by a mischievous child hoping the reader won’t notice that three out of the ten words in the sentence use “E”. Whatever limited sense of spark this passage has can be attributed to the AI’s failure to adhere to the constraint, to the human-like energy that comes from its error.
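
The failure is easy to verify mechanically; a few lines of code checking the quoted sentence against its own constraint do what the chatbot's cheerful closing line claims to have done.

    sentence = "The big brown dog ran swiftly through the grassy field."
    words = sentence.rstrip(".").split()
    offenders = [w for w in words if "e" in w.lower()]

    print(len(words), "words;", len(offenders), "contain the letter 'e':", offenders)
    # -> 10 words; 3 contain the letter 'e': ['The', 'the', 'field']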

The student essay is another form of constraint. To earn a passing grade, writers must conform to its rules, but to excel they must innovate within and beyond them. Students must use specific citation methods, adhere to conventions of academic writing and ensure their responses fully address a prompt’s question; at the same time, they must offer sufficiently interesting variations on the theme, evidence of original thought and dynamic phrasing that commands attention — a delicate balance most do not consistently achieve. 

Occasionally, students surprise me with an unexpected idea or turn of phrase, but more often I’m struck by the similarity of their ideas and voices. And, of course, their writing is similar: They learn a standard high school curriculum. They’re mostly the same age, at the same place in life, with similar life experiences and challenges. They’ve been programmed with the same data and they’re responding to the same prompts.

“Even when a user prompts ChatGPT to include specific grammatical errors or to stray from certain norms, its writing tends to carry a certain flatness.”

Reading a batch of undergraduate essays is not unlike commanding ChatGPT to “regenerate” its response to the same prompt in new words — the program is eerily skilled at saying the same thing in countless new ways. Like students, it can meet the basic requirements of generic essay prompts but struggles to innovate beyond them; to walk the line between predictability and surprise; to keep one foot inside the box and the other outside it; to move from a C+ to an A+.

Broadly, good writing seems to require a balance of conformity and nonconformity, and at times, deliberate rule-breaking. A certain amount of Strunk and White-style convention is necessary to convey ideas coherently — after all, language is a shared system of meaning, and straying too far from its rules risks losing connection with readers or becoming unreadable altogether. 

But good writers recognize that grammatical rules are dictated by problematic power structures and are not independent measures of correctness. Good writers assess the rhetorical context for their writing and make deliberate decisions about where to conform and where to stray. 

The opening line of Charles Dickens’ “A Tale of Two Cities” is a horrifically long, repetitive sentence, but deliberately so:

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way — in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.

The unwieldy length and repetitive nature of the sentence is its point. Its cascade of contradictory clauses violates most basic writing advice, and its flagrant comma splices override Strunk and White’s rules. But these moves mirror the meaning of the sentence and establish rhythmic momentum, allowing Dickens to characterize the period as a chaotic, contradictory, overflowing moment through both the phrasing of the language and its meaning. 

Literature and writing courses teach the art of “close reading,” or looking to sentence-level elements like punctuation, diction and syntax, for insight into the deeper meaning of the text. Perhaps a key criterion of good writing is that which rewards close reading with rich congruity between syntax and story — in other words, writing with intention.

I teach my students to understand writing conventions well enough to stray from them in deliberate, nuanced, powerful ways. ChatGPT can understand grammatical conventions, imitate them and break them on command. But because it has no intention, it can’t be purposeful in how it adheres to or strays from the rules. Its arguments are built not from critical thought, but from predictive phrase sequences — it doesn’t “know” what its passages are really about, only that their words tend to occur together, so it lacks the meta-cognitive awareness needed to selectively break the norms of composition to add layers of meaning. 

When I command ChatGPT to “Write the opening to a novel set in the current moment using a series of repeated clauses,” it writes:

In the midst of chaos, where uncertainty lurked around every corner, where hope and despair danced an intricate tango, where time seemed both fluid and stagnant, a new chapter began. A chapter etched with the echoes of a pandemic that had reshaped the very fabric of our lives, where masks adorned faces like silent guardians, where distance became the bitter mediator between loved ones, where solitude embraced us with its relentless grip.

The result isn’t terrible, but it has an overly familiar feel; the “new chapter” and “fabric of our lives” are tired phrases, and the repetition is not as musical or energetic as the Dickens opening. Elevating the writing with textual layers and nuanced opportunities for close reading would take word-by-word human editing.

“ChatGPT can understand grammatical conventions, imitate them and break them on command. But because it has no intention, it can’t be purposeful in how it adheres to or strays from the rules.”

Writers who use AI as a drafting tool might be seen as intentionality remixers, drawing copy from ChatGPT as raw material and weaving purpose into the syntax and language. But so far, the chatbot on its own, or even with the support of savvy prompt engineering, does not excel at this particular task.

Making Intentional Choices

Beyond simply deciding whether to adhere to convention, good writers make countless subtle rhetorical choices. What accounts for these choices — why is one iteration of a sentence better than another that says the same thing with slightly different diction, punctuation or structure? 

The seeming subjectivity of the answer is at the root of what frustrates many students about graded writing assignments and most folks who compose things more intensive than a Slack message — but the subjectivity is the point. 

How we measure successful writing changes depending on what the writer is trying to accomplish, and good writers are flexible enough to adapt to different scenarios. They make intentional decisions around three elements: audience, purpose and context. Who am I writing for, to what end and in what circumstance? Answering these questions can offer guidance on, for instance, when to use a particular word rather than its synonym. 

Consider the Dickens sentence above: If the author were writing for an audience of children rather than adults, he might have used simpler sentence structures; if he were writing an op-ed meant to persuade rather than a novel meant to entertain, he might have avoided the antitheses; and if he were writing a novel in 2023 rather than 1859, he might have used a different cadence and register.

It’s difficult to determine a text’s quality without considering the context in which it was written. In an ambitious attempt to create a universal measure of good writing, regardless of discipline or genre, the American Association of Colleges and Universities (AAC&U) developed a rubric whose categories focus on intentionality. It rewards writing that demonstrates “control” over syntax and mechanics; “attention” (but not necessarily blind adherence) to genre and disciplinary conventions; and “a thorough understanding of context, audience, and purpose that is responsive to the assigned task(s) and focuses all elements of the work.” 

In other words, good writing isn’t about sophisticated sentences or complex ideas; it’s about unifying all elements into a coherent whole. You can write a poignant, lyrical, oblique sonnet about the rain, but if your purpose is to inform newspaper readers about the weather forecast, that’s not good writing.

ChatGPT produces weaker writing when it hasn’t received instructions about audience, purpose, and context, and must “decide” for itself what they are. Its writing improves as a user gives it more information about who it’s writing for, why and in what situation. But as these elements get more specific, it becomes harder to tell ChatGPT enough for it to generate an adequately tailored response. 

The chatbot can write passable essays for standardized tests because the purpose and context are so general — they need to be for humans to produce texts that can be compared and ranked in an equitable way. 

But in a highly specific context like a novel or a letter, ChatGPT can’t know enough to create sufficient nuance. Writing a prompt with all relevant information would be nearly impossible, and suboptimal for a technology meant to optimize our time. For creative, expressive, or exploratory writing tasks, using ChatGPT is like supervising a bumbling assistant who needs painfully detailed, step-by-step instructions that take more effort to explain than to simply do the work yourself.
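
To see the contrast in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, the weather-brief scenario and the exact wording of the prompts are illustrative assumptions of mine, not anything prescribed above; the point is only that specifying audience, purpose and context is itself a writing decision the human has to make.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A bare prompt: the model must "decide" audience, purpose and context itself.
bare = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Write about tomorrow's weather."}],
)

# The same request with audience, purpose and context made explicit.
situated = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Audience: readers of a local newspaper. "
                "Purpose: inform them about tomorrow's forecast. "
                "Context: a 100-word brief for the weather page, plain register."
            ),
        },
        {"role": "user", "content": "Write about tomorrow's weather."},
    ],
)

print(bare.choices[0].message.content)
print(situated.choices[0].message.content)
```

Even in this toy case, the second request already encodes choices about register and length that the writer, not the model, has to supply.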

Pinning Down Voice

We often say that good writing has a strong sense of voice. Speaking voices can be recognized from their tone and pitch, but what rhetorical features define a writer’s voice on the page? 

I sometimes ask students to underline selections from their drafts that they believe represent their voice. Sometimes they notice patterns or tics, stylistic quirks, a repeated word or sentence structure. Some highlight sections in which they convey strong opinions or a particularly well-defined point of view. Sometimes they label whole drafts as their voice — after all, they wrote it. 

Others cannot find their voice at all — it was a class assignment, so they were writing in the voice of their academic alter-ego. Those who lack confidence sometimes point to grammatical errors as examples of their voice. Their wide-ranging answers showcase how difficult it is to pin down what makes a distinctive voice.

It’s a complex equation that’s impossible to catalogue definitively: Voice manifests as degrees along the scales of rhetorical and stylistic qualities — whether the register is more on the formal or informal side; whether sentences are long and varied or short and repetitive; whether the diction is simplistic or sophisticated. 

Voice also comes through in the specific balance of scene and exposition, discourse and metadiscourse, and in the particular argumentative angles the writer tends to take. To complicate things further, a writer can have many voices at once — a novelist who writes in the first person, channeling their narrator’s voice, is often still recognizable even through the mask.
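
One way to make those scales concrete is to measure a few of them directly. The toy sketch below, written in plain Python for this piece (it is not how ChatGPT or any language model represents style), reduces a passage to a handful of crude numbers: average sentence length, variety of diction and the most repeated words, the kind of surface features a reader might underline as “voice.”

```python
import re
from collections import Counter

def voice_profile(text: str) -> dict:
    """Crude, measurable proxies for a few stylistic qualities of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # long, varied sentences vs. short, repetitive ones
        "avg_sentence_length": round(len(words) / max(len(sentences), 1), 1),
        # simplistic vs. varied diction (share of distinct words)
        "distinct_word_ratio": round(len(set(words)) / max(len(words), 1), 2),
        # repeated words and tics
        "most_repeated": Counter(words).most_common(3),
    }

sample = ("Let me tell you, folks, nobody writes sentences like these. "
          "Short ones. Repeated ones. The best ones, believe me.")
print(voice_profile(sample))
```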

Can ChatGPT teach us anything about what makes writing sound like one person versus another? The program is a masterful ventriloquist — its ability to imitate style is one of its most impressive and delightful features. It does so by using “unsupervised learning” to detect rhetorical patterns from its massive database of various kinds of writing, without being told what to look for.

The frustrating part is that it can’t tell us precisely what it notices — it can only deliver text that imitates these patterns, often with startling aptness. It can write recognizably in the voice of any number of characters, real or imagined, historic or contemporary, from Oprah to Jane Austen, Holden Caulfield to Matthew McConaughey, and can emulate the style of texts from the Bible to a Fox News comments section to a wedding toast. 

When I input the prompt, “Write a speech about potatoes in the style of Donald Trump,” ChatGPT’s response sounds like the script from a “Saturday Night Live” sketch: “Folks, let me tell you, nobody loves potatoes more than me, believe me. I’ve been eating them my whole life. Best thing you can put on your plate. And let me tell you, our farmers, they grow the best potatoes. The best. They’re huge, they’re beautiful, they’re red, white, and yellow.”

What exactly makes this language sound like Trump — the content? The syntax? The colloquial diction — “folks” and “let me tell you”? The rhythm and repetition of “They’re huge, they’re beautiful, they’re red”? All the above? What’s striking about this example is that ChatGPT is not so much imitating Trump’s voice as exaggerating its features into a caricature, almost as if the chatbot has picked up on the man’s very essence.

Good character impressions delight us because they illuminate something fundamental about the person they imitate. Alec Baldwin claimed that the secret to his masterful Trump impression was to unlock an element of his character: “You can kind of suggest the voice, or suggest the way they look, but you’ve got to try to think of who he is,” Baldwin said in an interview. “And to me, Trump is someone who is always searching for a stronger, better word, and he never finds it.” Somehow, ChatGPT gets to that core in its potato speech, repeating the word “best” three times in three sentences.

William Zinsser, in his classic book, “On Writing Well,” explains that “we express ourselves as we do” because of the “subconscious mind.” Perhaps ChatGPT’s deft impressions show us that our language patterns reveal more about our character than we might realize. And its facility at imitating style has implications for copyright — to what extent should we view the rhetorical tendencies that make up one’s writing voice as proprietary?

Some writers seem unsettled when faced with AI renditions of their own style. Douglas Hofstadter, author of “Gödel, Escher, Bach: An Eternal Golden Braid,” noted that GPT-4, when prompted to write in his voice, produced what he termed a “Hofstadter façade,” or a series of “vague generalities that echo phrases in the book” rather than a seemingly authentic replica of his writing style. And songwriter Nick Cave called ChatGPT’s attempt to write lyrics in his style “a grotesque mockery of what it is to be human.”

But the technology’s capacity for imitation will likely continue to improve: New Yorker staff writer Kyle Chayka observed that ChatGPT was not very effective at mimicking his own writing voice, but the AI startup, Writer, created a bot trained on his own writing to produce text in his voice that, while not perfect, was still “unnervingly effective.” Chayka expressed mixed feelings about this capability: “The robot has made me acutely self-conscious. I recognize my A.I. doppelganger, and I don’t like it.”

When a human mimics the style of another writer, the imitation can be seen as flattery, elevating the original writer’s work through its homage; when an AI does it, the act feels more like a flattening, a reduction of our voices to its discovered patterns of idiosyncrasies and tics, our variability and range limited in favor of recognizability.

Intimacy Through Writing

What is the voice of ChatGPT? When it lacks instructions to imitate a particular voice, it presumably imitates all of us, averaging our voices together into an indistinct default. Conversing with the chatbot feels like encountering someone you recognize but have never met — a voice of the masses, distant yet familiar. 

(But of course, it’s not really the voice of the masses — the algorithm inherently prioritizes the writing patterns of those who have published most often, letting them dominate over underrepresented groups and writing styles.)

When speaking as itself, the chatbot sounds neutral, unanimated, optimistic, but not especially enthusiastic to be talking with you. It often opts for lists sandwiched between a clear introduction and conclusion. 

When asked to comment on controversial or debatable issues, it resists taking a strong stance, instead describing tradeoffs or multiple points of view, though it has also been reported to have liberal-slanting political views. Its tendency to explain things in a flat, monotone style can make it sound a little condescending, yet it also apologizes when corrected — it seems to want to be helpful.

But something critical is missing from its voice: a certain sense of connection. At its core, writing is about creating intimacy between writer and reader. It’s a relational act, not a one-sided performance, and its power is in the exchange of ideas. It’s the closest we can get to inhabiting the mind of another human, the closest to escaping our own egos. 

“So what?” is the common refrain of writing teachers. “Why should your reader care?” A key way that good writing achieves connection is by creating stakes, or engaging the reader by showing them why your ideas matter. 

Fiction creates stakes through establishing consequences — what does the protagonist stand to lose in their journey? Argumentative writing creates stakes by establishing significance — what are the implications for the individual or society? The stakes don’t need to be high, but for the writing to be engaging, they should be urgent and viscerally apparent.

Stakes are the parts that reach up off the page and out into the world to connect with the reader, to shift their interior state, to make us want to keep reading. Without emotional stakes, even virtuosic texts can feel difficult, off-putting or cold; the emotional payoff is low relative to the energy they take to read. 

We appreciate literary prowess, but engaging the reader matters more — we seem to want more than just spectacle from good writing. As readers, we need to feel like the writer is paying attention to us, trying to connect. ChatGPT cannot build a real connection with its reader — it can only imitate one.

Reading ChatGPT’s writing feels uncanny because there’s no driver at the wheel, no real connection being built. While the machine can articulate stakes, it is indifferent to them; it doesn’t care if we care, and somehow that diminishes its power. Its writing tends not to move us emotionally; at best, it evokes a sense of muted awe akin to watching a trained dog shake a hand: Hey, look what it can do.

Narrative writing tends to become more relatable and engaging as it gains specificity — but ChatGPT is a collective voice, not a specific one, and so even as the algorithm improves, we will likely continue to find its writing emotionally inadequate. Perhaps the ineffable spark of good writing and the spark of a romantic connection are related — both involve a certain energy exchange, a sense of connection across individual minds, a balance of surprise and familiarity.

ChatGPT gets closest to achieving a sense of spark when asked to juxtapose incongruous topics and styles. When commanded to compose a sonnet about McDonald’s, for instance, it writes:

In golden realms where cravings find their bliss,

Where hunger’s fire meets flavor’s sweetest kiss,

There stands a haven with arches of gold,

McDonald’s, where delights and dreams unfold. 

The writing isn’t Pulitzer-worthy, but it has a certain energy that perhaps stems from the surprise of seeing how the program tackled the prompt’s challenge — you can’t help but feel like ChatGPT is in on the joke. It’s almost as if a personality starts to form — a little cheeky, willing to embarrass itself to make us laugh.

“You silly humans,” the chatbot seems to be saying. “Using the greatest technology of our generation to create funny memes. But okay, I’ll play along.” These moments convey a sense of energy that makes it hard for me to believe I’m not chatting with a sentient being. Perhaps having a strong voice simply means writing in a way that makes you seem alive.

If this poem were written by a human, its voice probably wouldn’t have the same strength — it might feel cheesy and oddly reverential of the fast-food chain. Once we know ChatGPT generated the poem, however, its quality improves — we get the feeling the technology is unwittingly commenting on our world, illuminating the categories we use to understand it. 

The McDonald’s sonnet isn’t interesting as a poem — it’s interesting as the output of an algorithm programmed with knowledge of our writing and our world. Perhaps AI-generated writing has the potential to be interesting or meaningful in contexts where the chatbot’s lack of awareness and intentionality matters; when the fact that the machine is not sentient amplifies the impact of its output; when the writing is, in some sense, about AI-generated writing. 

But AI-generated writing about AI-generated writing is a narrow niche and there are limits to how long we’ll find it compelling.

Ethics Of Plagiarism

We tend to believe that good writing is original and thus advise writers to avoid clichés — phrases used so often we’ve come to see them as unoriginal and thoughtless. Clichés spring to mind too easily, careening along well-paved neural pathways, whereas original phrases must be pulled from the quicksand of our brains with significant effort. 

We scorn clichés not because they’re bad descriptions — indeed, the reason they linger is probably because they’re pretty decent — but because their familiarity is off-putting, a sign of writerly laziness. Even when ChatGPT doesn’t use clichés, its writing still echoes them; there’s usually the sense that there might be a fresher, more original way to say things.

Good writing, we believe, not only avoids phrases taken from the general consciousness, but also avoids language taken from individual writers unless acknowledged by quotation marks, and it credits others for their ideas with citations. 

We expect students to read widely and build arguments that use others’ texts as support for their own. But to maintain so-called “academic integrity,” they must do this using fresh language and draw explicit distinctions between their own ideas and others’ — with the exception of information that is considered general knowledge. But what is general knowledge in a world where virtually any information is freely accessible online? 

ChatGPT, in a sense, plagiarizes our voices as it parrots the writing it was trained on. It tends not to cite the specific sources it synthesizes to craft its phrases, and when it does, they are unreliable — the MLA Style Center website cautions writers to “vet” any secondary sources that appear in AI-generated text, as the programs have the occasional tendency to “hallucinate” false sources and provide information of questionable accuracy. Given the opacity of the AI’s sources, a student who tries to pass off AI-generated text as their own may be inadvertently performing a multi-dimensional transgression, plagiarizing an AI that itself is plagiarizing others.  

The ethics of training AI on copyrighted materials are murky, too. Platforms like Reddit are pushing back against AI developers’ use of their content, and Sarah Silverman and other authors recently sued OpenAI for electronically ingesting illegally uploaded versions of their books from the internet to use as training data for ChatGPT. The Writers Guild of America, on strike since May, seeks to regulate the use of AI, both by preventing human-authored scripts from becoming AI training data and limiting AI tools in the writer’s room. 

But if generative AI becomes as widely adopted as the Google search engine, will authors still want to opt out of contributing to it, or will serving as a model for the algorithm become a way to amplify their own literary influence, an honor akin to being ranked at the top of a Google search result? Should we work to protect the right to be excluded from AI training data, or the right to be included in it? 

So far, universities mostly seem to categorize the unacknowledged use of AI as traditional plagiarism and continue to treat it as unacceptable, with students receiving Fs or even suspensions if caught. While software has been developed to detect AI-generated text, ChatGPT’s chameleonic potency makes the accuracy of these programs questionable.

Their relevance is debatable, too — what’s the value we’re trying to preserve by differentiating human writing from AI? Is it really plagiarism not to cite an AI-generated phrase? Is plagiarism still the crime we think it is?

Perhaps the ethics of using generative AI depend, again, on intention. An anonymous professor recently queried The New York Times Magazine’s “The Ethicist” advice column about whether it was ethical to use ChatGPT to generate administrative reports and proposals. Kwame Anthony Appiah’s response differentiated between writing that aspires to be original versus that which does not. He authorized the letter writer to use the tool with the rationale that “many administrative documents, though they may have signatories, aren’t thought of as original compositions,” pointing out that these texts often use templates as a starting point anyway. 

But this distinction troubles me: How can we tell what kinds of writing are meant to be original and which are not? What exactly does originality mean? More practically, why do so many workplaces ask us to produce such unoriginal texts, and what kind of value do those texts create?

Writing As Thought

Historically, we have viewed plagiarism as an egregious offense. At most universities, students caught plagiarizing receive Fs or even suspensions, and outside academia, the act can result in book recalls and career-ruining embarrassment. 

Why do we consider it so disgraceful? Because we believe writing is more than simply a sequence of words — it’s synonymous with thought. To steal others’ language is to take not just their words but also their ideas, the essence of who they are.

Our tendency to conflate writing with thought is why text-generating AI programs like ChatGPT give off the impression of sentience while image-generating programs do not, and why large language models are the primary targets of recent petitions to slow and regulate AI advancement. If ChatGPT can write coherently, our intuition tells us, surely it can think — and if it can think, how can we possibly maintain control over it? 

I like to say that I’m not teaching my students how to write — I’m teaching them how to think; how to be observant; how to question the systems around them; how to interpret and build meaning; how to relate to others; how to understand and differentiate themselves; how to become agents of change. But ChatGPT, by producing competent writing with apparent thoughtlessness, threatens the idea that critical thinking is the core of good writing.

With its startling ability to regenerate responses by paraphrasing the same ideas in new words ad infinitum, it mocks the weight we put on paraphrasing to avoid plagiarism. We task students with summarizing texts in their own words to demonstrate their understanding of the material — but ChatGPT shows us that it’s possible to explain others’ ideas without understanding them; to build arguments from their content without metacognition.

Its revelation is upending how we tend to think writing works: First, you come up with an idea. Second, you find the words to articulate it. But ChatGPT inverts this process. It begins with the words and builds its arguments and narratives based on language patterns, letting its ideas emerge from the text it uses to produce them.

When I write, I imagine reaching up to a higher plane to access my thoughts and assign words to them, yet the words I find are approximate and never capture the ideas as fully as they appear in my mind. This is why writing feels so hard to me: Its labor involves grasping for a perfect translation and always falling short. But ChatGPT reminds us that language is a lens, and our thoughts and perceptions are almost certainly shaped in some way by its conventions and metaphors. It shows us that writing influences our thoughts even as we use it to describe them. 

We tend to view writing as hyper-personal, a conduit for our unique thoughts. But ChatGPT, through its own training, reminds us that we learn to write through imitation, the same way we learn to smile or eat or walk. Children grow up speaking with the accent of their peers, not their parents; in the same way, writing is a networked, communal act, inseparable from others’ writing.

We write in conversation with what we read, and good writing balances our own words with others’. We summarize their ideas, using them as springboards and support for our arguments. We take language from others, too, and not just as quotations: The English language is a colonial artifact that swallows up other languages. It’s full of stolen words and idioms and familiar, tired phrases — things we say because others say them.

As AI technology progresses, we will need to reconsider our conceptions of authorship and intellectual property. If you command unique text into existence by inputting a prompt into ChatGPT, have you authored that text? If ChatGPT edits or rewrites your work, is it still yours? If not, what kind of attribution would be valuable? Does the premise of intellectual property still hold water in a world with generative AI? Where’s the line between a “fair use” iteration of someone else’s text — a creative remake of a Shakespeare play, for instance — and more problematic appropriation? 

Large language models challenge our understanding of originality and ask us to reexamine what value it adds to good writing. Is an original thought in the kernel of the idea or its phrasing? What makes a phrase original — the novelty of a word sequence, the context of its use or the readers’ perception of it? Are there any new ideas, or just new ways of saying them? Can AI generate original ideas by remixing old phrases? How original should good writing be, regardless of whether the author is human, AI or a combination?

Costs Of Optimization

It’s an incongruous time to be alive. We’re watching technology evolve in a more sophisticated way than we can understand or even track, and yet, our environmental, political and economic systems feel on the verge of collapse. We now live in a world where a free online program can generate convincing human-like prose more quickly than any human can read or even think. 

Meanwhile, about one in 10 people worldwide — 773 million adults, mostly women — are illiterate. More than a third of the world’s population, or 2.9 billion people, have not been on the internet in the past three months, if ever. Let’s not forget that optimization tools mostly impact the productivity of those with access to them.

Even if writing quality isn’t optimized, these tools save us time and effort — and the savings are significant. While immersed in the process of writing, you might experience a “flow” state in which the task of assigning words to thoughts becomes so absorbing that your sense of time passing falls away — but meaningful hours still get lost when we spend them reflecting and writing instead of accumulating lived experiences. Life is short and writing time has a real cost: The hours I took to compose this piece could have been spent with family or friends, volunteering or working on any number of other projects or hobbies.

I’ve heard writer-parents say having a baby is equivalent to losing two books’ worth of time. And time is worth money: Full-time writing is a privilege that few can afford, with most writers stealing scraps of time between day jobs. 

Optimizing the effort involved with writing is no small thing, either. The process requires an unparalleled level of focus, and for many it ushers in feelings of inadequacy and self-doubt. We take writing failures personally because we see writing as thought, so failing to express ourselves well in writing can sting more than other forms of expression. As a result, many end up terrified of the blank page, and AI becomes a tempting corrective.

Automation promises to accelerate time-consuming tasks and make us more productive. But when it comes to writing, do we need more productivity? If chatbots become widely adopted, who’s going to read all that text? 

Anyone who’s published knows that readership is a rare gift. Reading is work — valuable work — but like writing, it requires exertion and takes time away from other tasks. Many of us already feel saturated with content; we consume so much information through screens that our daily attention spans feel fragile and limited. There’s a certain respect we hold for writers who are careful not to publish too much, who honor their readers enough to self-censor and share only what’s really worth our attention. 

And what will become of our own writing after reading so much AI-authored prose? Will we begin to write more like ChatGPT in a linguistic mise en abyme? Will we lose the sense that reading and writing offer a solution to loneliness, the chance to connect deeply with another human’s inner world, given the growing uncertainty about whether a human is even present behind a given text?

Perhaps something meaningful is lost when we use AI to reduce the time and effort spent writing. Writing well takes practice, and I see the most significant progression in students who spend the most time writing, reworking draft after draft. To “essay” is to try — perhaps good writing is about trying, about process as much as outcome. 

If AI becomes commonplace, perhaps we will wind up most admiring those writers who proclaim to do the hard work of generating their own prose from scratch, slowly and painfully. Like organic produce or designer handbags, perhaps human-authored text will someday carry labels certifying its authenticity. Or maybe our preference for the real over the artificial will simply fade as differentiating between the two becomes increasingly difficult.

But even if a reader can’t be sure whether a text had AI support, the writer knows, and producing unassisted writing can feel deeply gratifying. We run marathons and climb mountains for the sake of it; because they’re hard. Maybe the parts of writing that feel so burdensome — the effort to think deeply, to sit still with our thoughts, to articulate them and revise them until they say exactly what we want, until we figure out what we’re trying to say at all — are the parts that we value when we praise good writing. 

Perhaps the time spent writing matters as much as having written; there is a vague sense of being, in the moment of writing, the most authentic version of yourself.

Writing To Differentiate Ourselves

What makes good writing in a world with generative AI? Perhaps writing classes of the future will lean into the subtle ways in which human writing surpasses AI-generated writing and challenge students to write better than the machine. 

Perhaps they will teach students to be AI curators and remixers, imparting the prompt engineering skills they need to leverage the technology most effectively, in preparation for the kind of sparkless, functional writing they will produce post-graduation — contracts, reports, meeting minutes, instructional manuals. 

Perhaps the college essay will be retired in favor of other assignments that demonstrate knowledge, critical thinking and argumentation skills — speeches, hands-on activities or multimedia creations. It seems likely we will continue to teach students to read widely and study textual patterns and conventions closely — the same way we train AI to write.

But perhaps ChatGPT also shows us that at a certain point, reading has diminishing returns. Maybe we also need to be trained on other kinds of data in order to write well, data that comes from being alive in the world over time, from accumulating enough experience to differentiate our own voice from others. 

Writing courses are different from those in other disciplines — they’re not so much about transferring knowledge or conveying conceptual frameworks. Instead, they aim to create a space in which students can practice differentiating themselves.

Teaching writing usually involves one-on-one meetings, a structure that allows me to build relationships with students. Through reading their writing and talking about their ideas, I watch them begin to fill in the templates of their personalities and experiment with specificity, trying out various attitudes, disciplines, hobbies, relationships, styles and habits, and in the process, becoming themselves. They hone their craft not only by learning about writing conventions but also by learning about who they are and what they think, by figuring out how their perspectives align and differ from others’.

Generative AI complicates this mission, but it doesn’t terminate it. The division between the words and ideas that belong to us versus others has always been more ambiguous than we’d like to think, and ChatGPT blurs that line further. Even if we use AI to make writing feel easier, we still need to do the hard, lifelong work of becoming ourselves.

To write well, you need the specificity of perspective that comes from communicating critically with others over an extended time. AI might make writing faster, but figuring out who we are in relation to others cannot be accelerated. 

We may see writing as equal to thought, but it is also synonymous with power. Allowing AI to write for us gives away our power and the opportunity to assert control over the way we represent ourselves to the world.

As we continue to debate whether using AI violates university policies on academic integrity, I am reminded that the word “integrity” means not just honesty and moral uprightness, but also wholeness, an integration of parts. Education is about recognizing the limits of our own subjective impressions, inhabiting others’ perspectives, and aligning them with our own. Perhaps writing with integrity means integrating our selves with others through the act of writing to form a coherent whole.

The tension between the individual and the collective, between novelty and familiarity, drives the arc of our lives. We are conceived from the text of others’ DNA and emerge with our own combinations. We grow up learning to imitate those who raise us, then rebel from them as we battle to find ourselves amid the influences of society. We use writing to differentiate ourselves, to respond to others, weaving their words with our own, synthesizing their ideas, adding new ones and exploring where we align and diverge.

Beyond mirroring elements of our voices, maybe AI also mirrors the tensions we feel between ourselves and others. Perhaps large language models, if we interact with them critically, will open new frames through which to explore the balance between the ways we conform and the ways we break free, adding depth to the mission of self-discovery that defines our lives.

The post What AI Teaches Us About Good Writing appeared first on NOEMA.

]]>
The Illusion Of AI’s Existential Risk https://www.noemamag.com/the-illusion-of-ais-existential-risk Tue, 18 Jul 2023 16:25:37 +0000 https://www.noemamag.com/the-illusion-of-ais-existential-risk The post The Illusion Of AI’s Existential Risk appeared first on NOEMA.

]]>
It is a well-known Hollywood plotline: the rise of superintelligent AI threatens human extinction. So much so that the release of publicly available AI, like ChatGPT, has led to a frenzy of concern. On May 30, the Center for AI Safety, a San Francisco-based research group, released a succinct statement signed by some of the field’s top experts, stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But focusing on the possibility of a rogue superintelligence killing off the human species may itself be harmful. It could distract regulators, the public and other AI researchers from work that mitigates more pressing risks, such as mass surveillance, disinformation and manipulation, military misuse of AI and the inadequacy of our current economic paradigm in a world where AI plays an increasingly prominent role. Refocusing on these present concerns can align the goals of multiple stakeholders and serve to contravene longer-term existential risks.

There are all sorts of ways in which AI systems could accidentally cause or be implicated in the death of many, potentially even millions, of people. For example, if AI were incorporated into autonomous nuclear strike technology, unexpected behavior on the part of the AI could lead to drastic consequences. However, these scenarios don’t need to involve superintelligent AI; in fact, they are more likely to occur with flawed, not-so-intelligent AI systems. (For example, the doomsday machine in Dr. Strangelove was as simple as could be: “a clearly defined set of circumstances, under which the bombs are to be exploded, is programmed into a tape memory bank.”) Mitigating problems with flawed AI is already the focus of a great deal of AI research; we hope and expect that this work will continue and receive more of the attention it deserves.

Still, a discussion about rogue superintelligent AI could be useful in at least one way: It draws the attention of policymakers and the general public to AI safety — though the worry remains that using such an emotive issue in this way may backfire.

That’s because extinction from a rogue AI is an extremely unlikely scenario that depends on dubious assumptions about the long-term evolution of life, intelligence, technology and society. It is also an unlikely scenario because of the many physical limits and constraints a superintelligent AI system would need to overcome before it could “go rogue” in such a way. There are multiple natural checkpoints where researchers can help mitigate existential AI risk by addressing tangible and pressing challenges without explicitly making existential risk a global priority.

Our foremost concern should be preventing massive, unnecessary suffering of the sort that we know is possible given existing and soon-to-exist technologies. Rigorous studies of real and present AI-induced harms have been published, and potential solutions have been proposed.

For example, facial recognition technology can be used for tracking individuals and limiting basic freedoms, and generative image technology can be used to create false images or videos of events that never happened. To address these issues, calls to action have been made, including the Montreal Declaration on Responsible AI and the World Economic Forum’s Presidio Recommendations on Responsible Generative AI.

We can and should forecast AI developments in the near term, and deliberate about their potential harms. But as we forecast farther into the future of this rapidly advancing field, the unknowns grow exponentially, which makes planning around such forecasts impractical. It is more constructive to highlight real challenges and debate proposed solutions rather than steer public discourse toward hypothetical existential risks.

Reality Check

What would actually have to happen for the prospect of extinction by a rogue AI to change from being a purely hypothetical threat to a realistic threat that deserves to be a global priority?

Harm and even massive death from misuse of (non-superintelligent) AI is a real possibility and extinction via superintelligent rogue AI is not an impossibility. We believe the latter is an unlikely prospect, though, for reasons that will become clear in examining the potential paths to extinction by a rogue, superintelligent AI.

To do so, we must first operate under the assumption that superintelligence is possible, even though that is far from a consensus view in the AI community. Even defining “superintelligence” is a fraught exercise, since the idea that human intelligence can be fully quantified in terms of performance on a suite of tasks seems overly reductive; there are many different forms of intelligence after all.

It seems safe to say that current AI is not superintelligent, although it has already surpassed human performance at many tasks, and is likely to do so at many more in the near future. Today’s AI models are very impressive, and arguably they possess a form of intelligence and understanding of the world. They are also easily fooled, “hallucinate” falsehoods, and sometimes fail to make critical logical inductions, such as causal inferences.

Still, for the sake of argument, let’s suppose that the impressive speed at which AI is advancing addresses these shortcomings and, at some point in the future, results in the emergence of a general superintelligence, e.g. an AI that is generally better than humans at almost any cognitive task.

Even then, we highlight a number of checkpoints that exist along any potential path to extinction from rogue AI. These checkpoints are red flags that would help identify when the hypothetical risk becomes more pressing and may need to be prioritized.

Requirements For Speciocide

Discussions about AI’s existential risk often suggest that a superintelligent AI would cause our extinction because more intelligent species “naturally” cause the extinction of less intelligent species.

It is true that in Earth’s history, there are examples of one species causing the extinction of another, less intelligent species; extinctions caused by humans are most often cited. (We are, in fact, unaware of any nonhuman example.)

However, superior intelligence is not the key determinant in such events; there have been many instances of less intelligent species causing the extinction of more intelligent ones. For example, in the Late Devonian, the rapid diversification of plants and the changes to the atmosphere that they induced are believed to have been a cause of one of Earth’s mass extinctions, resulting in the loss of three-quarters of all species, many of which were likely more intelligent than plants.

More broadly, interspecies extinction is not a result of some competitive battle for dominance between two species. The idea of species forming a hierarchy or “Great Chain of Being” is inaccurate; in reality, relationships between species are complex and form a web or graph of mutual interdependence with no “top” or “bottom.” When biologists talk about “dominance” in animal interactions, they usually apply definitions that focus on relationships between individuals of the same species.

Characterizations of evolution as being about interspecies competition and selfishness are a misrepresentation of what evolutionary biology tells us and may be rooted in our own unique phylogenetic history as primates — and patriarchal assumptions. In general, mutualism and cooperation between species are very likely to emerge from the pressures of natural selection.

What we know about extinction events tells us that they are generally caused by changes to the environment, and when they are a result of one species’ impact on another, extinction is induced in one of three ways: competition for resources, hunting and over-consumption or altering the climate or their ecological niche such that resulting environmental conditions lead to their demise. None of these three cases apply to AI as it stands.

AI is not competing for resources with human beings. Rather, we provide AI systems with their resources, from energy and raw materials to computer chips and network infrastructure. Without human inputs, AI systems are incapable of maintaining themselves.

If mining, global shipping, and trade of precious metals, building and maintenance of power plants, chip-building factories, data center construction, and internet cable-laying were all fully automated — including all of the logistics and supply chains involved — then perhaps a superintelligent AI could decide that humans are superfluous or a drain on resources, and decide to kill us.

For now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI’s existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition. Moreover, the path to a fully automated economy — if that is the goal — will be long, with each major step serving as a natural checkpoint for human intervention.

Such automation would require major advances in hardware as well as software. But robotics is not developing at a pace that’s anywhere close to AI’s — and it is unlikely to, since AI’s accelerated progress is tied to the digital world, where computational power grows exponentially, copying is nearly instantaneous, and optimization is automated.

A scenario where a superintelligent AI decides that humans are a drain on resources and should be eliminated, rather than a key source of its support, depends on technologies and economic structures (e.g. completely automated production cycles, from raw material extraction to advanced manufacturing) that don’t exist and are unlikely to exist for the foreseeable future. 

AI cannot physically hunt us. A superintelligent AI could, in theory, kill large numbers of human beings if it had autonomous control over weapons of mass destruction. However, this scenario also provides humans with natural checkpoints.

If the world’s governments are actively building autonomous weapons systems with mass destruction or bio-warfare capabilities, we should indeed be ringing alarm bells and doing everything we can to stop them. Such a scenario would be dangerous with or without superintelligent AI; arguably, using autonomous AI with limited intelligence for military applications could be just as concerning.

AI’s impact on the climate is up to us. Environmentalists have raised concerns about AI’s carbon footprint in an era of ever-larger models that can take months to train in power-hungry data centers. These concerns must be taken seriously, given that accelerating climate change is a real civilizational — perhaps even existential — risk. We must rise to the challenge of developing clean power, or risk catastrophe, regardless of AI.

However, the carbon emissions of AI training and inference today are minuscule compared to the computing sector as a whole, let alone the more carbon-intensive activities of our industrial civilization, such as construction and transportation, which together account for more than a third of our total carbon emissions.

While the use of AI may continue to grow dramatically over the coming years, its energy efficiency will also likely continue to improve dramatically, making it unclear whether AI will ever be among the most significant carbon emitters on a global scale.

It is also worth noting that the current best approach to developing generalist AI is “pre-training” large foundation models like GPT-4 or PaLM, which are not specific to any one task but can be quickly adapted to a variety of uses. While pre-training may be energy-intensive, it need only be done once, replacing the narrow, per-task training required by previous generations of AI.

The emerging generation of general-purpose, multimodal AI will be capable not only of modeling human language, but many other complex systems; such AI will likely play an important role in clean power generation, climate analysis and climate change mitigation. More to the point, any expansion of AI infrastructure, or effectively AI’s energy footprint, is another checkpoint under human control. After all, data centers do not build themselves.

One could argue that a superintelligent AI system could manipulate humans into building power plants or deploying weapons on its behalf. That is, they need not do it themselves. However, the most obvious approach to addressing this concern lies in focusing on the real and present dangers of AI social engineering (today, often at the behest of human scammers), or mitigating the risk of humans falling prey to coherent-sounding hallucinations.

Just as we train people and develop software tools to combat phishing, we should invest in AI literacy and technology to combat manipulation and misinformation by AI systems. In contrast, while predicting the manipulation tactics of superintelligent AI might be intellectually interesting — and could suggest avenues for further research — it doesn’t offer concrete mitigation strategies or regulatory steps.

In sum, AI acting on its own cannot induce human extinction in any of the ways that extinctions have happened in the past. Appeals to the competitive nature of evolution or previous instances of a more intelligent species causing the extinction of a less intelligent species reflect a common mischaracterization of evolution by natural selection.

The Problem Of Doomsday Speculation

If potential existential risks from a rogue superintelligence are so bad, don’t we have a duty to future generations to address this possibility, no matter how unlikely?

This question is akin to an AI version of Pascal’s wager: the potential consequences of not believing in God are so bad — eternal damnation — that it’s just more rational to believe in God regardless of whether God really exists. Pascal’s wager ignores the fact that we should be considering probabilities in addition to potential outcomes. Yes, going to hell, like rogue-AI-induced extinction, is terrible, but if it is a very unlikely outcome, then it could be worse to focus our efforts on preparing for it if that leads us to make choices we otherwise wouldn’t.

For example, if one accepted Pascal’s wager at face value, it would be more logical to devote one’s entire existence to religion and become a monk focused on getting into heaven than it would be to concern oneself with Earthly things like community, family and politics. 

Is it really all that different to accept this wager for AI? Unlike the concept of hell, it is scientifically possible that superintelligent AI could emerge and cause human extinction. But, as with Pascal’s wager, it’s important to question one’s priorities. If we really think that superintelligent AI presents a plausible existential risk, shouldn’t we simply stop all AI research right now? Why not preemptively bomb data centers and outlaw GPUs?

Even a more moderate version of rogue AI existential risk concerns might lead us to the conclusion that advanced AI research should be tightly controlled by governments like research into nuclear weaponry. Do we really want any of these outcomes?

We suspect that most AI researchers would say “no.” If so, they don’t accept the AI version of Pascal’s wager — at some level, they recognize that AI-induced extinction is actually a distant likelihood, much like being sent to hell by a vengeful God. They may also recognize that there are indeed checkpoints for human intervention, which means that unlike going to hell, we will know in advance when AI existential risk is on the path to becoming a more credible concern. That is perhaps why most AI researchers are still working in this field and why they likely don’t want it to be regulated the way the nuclear industry is nor have their data centers destroyed.

There are sensible approaches to mitigating existential risk that don’t involve nuclear-level regulations or pseudo-religious fervor. However, human beings and their institutions have finite resources. Governments only pass a certain number of laws each year and cannot tackle every problem at once. Academics have limited bandwidth and cannot consider all potential risks to humanity at once. Funding necessarily has to be directed to those problems in society that we identify as priorities.

Life involves trade-offs like deciding which problems must be dealt with immediately and which can sit on the back burner. For example, a known existential risk that our species could face is a large meteor or asteroid strike. It has happened before. Yet relatively little money and time are being invested in preventing such a catastrophe.

Instead of investing heavily in meteor deflection technology or colonizing other planets, we have decided to concentrate on other challenges, such as the transition to clean energy. Why? Because a meteor strike is a relatively low-probability event and making these efforts a major global priority would divert resources from other, more pressing problems, like climate change.

Pascal’s wager, in both its original and AI flavors, is designed to end any reasoned debate by assigning an “infinite cost” to an unlikely outcome (where “cost” here refers to a measure of how bad the consequences are). In a utilitarian analysis, wherein importance is calculated by multiplying the cost of an outcome by its probability, an infinite cost times any probability other than zero is still infinity. As such, an existential risk can appear more important than any other potential risk that doesn’t involve the total annihilation of our species, despite being low-probability.
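
The arithmetic behind that objection is easy to reproduce. In the sketch below, written in plain Python with invented placeholder probabilities and costs (they are not estimates from this essay), any nonzero probability multiplied by an infinite cost is still infinite, so the extinction scenario automatically outranks every finite harm, however well documented.

```python
# Expected cost = probability x cost. All figures below are illustrative only.
risks = {
    "rogue-AI extinction": (1e-9, float("inf")),   # vanishingly unlikely, "infinite" cost
    "AI-driven disinformation": (0.5, 1_000_000),  # plausible, large but finite cost
}

for name, (probability, cost) in risks.items():
    print(f"{name}: expected cost = {probability * cost}")

# The first line prints 'inf': a naive utilitarian ranking puts the hypothetical
# extinction scenario above every documented, finite harm, whatever its probability.
```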

Yet this doesn’t reflect the way we prioritize in real life. Are we at the point where extra funding, research and regulation directed toward, for example, the impact of AI on labor, are less important than work on existential risk? Given the diversity of viewpoints on the answer, utilitarian calculations involving infinities aren’t convincing enough to prioritize AI-induced extinction. 

Making AI existential risk a global priority — a term that suggests treating it as one of society’s highest priorities — necessarily implies that we will divert attention and resources from current AI safety concerns, such as mitigating the impact of AI on workers, cybersecurity, privacy, biased decision-making systems and the misuse of AI by authoritarian governments.

All of these risks have been well documented by the AI community, and they are existing risks, not hypothetical ones. In addition, making AI-induced extinction a global priority seems likely to distract our attention from other more pressing matters outside of AI, such as climate change, nuclear war, pandemics or migration crises. 

To be clear, we are not saying that research associated with potential AI existential risk should stop. Some effort in this direction will likely yield immediate benefits. For example, work examining how to imbue AI systems with a sense of ethics is likely beneficial in the short term as are efforts to detect manipulative behaviors that can emerge spontaneously without an engineer’s intent.

AI systems that lack ethics and are capable of human manipulation can cause many potential bad outcomes, including breakdowns in our social fabric and democracy; these risks may not be existential, but they are certainly bad enough.

We can — and must — fund research to understand and prevent such outcomes, but we don’t need to invoke the specter of human extinction or superintelligence to motivate this kind of work. Hence, we are arguing only that existential risk from superintelligent AI does not warrant being a global priority, in line with climate change, nuclear war, and pandemic prevention. We agree that some research into low-probability extinction events is worthwhile, but it should not be prioritized over many other real and present risks humanity faces.

Those calling for AI-induced extinction to be a priority are also calling for other more immediate AI risks to be a priority, so why not simply agree that all of it must be a priority? In addition to finite resources, humans and their institutions have finite attention. Finite attention may in fact be a hallmark of human intelligence and a core component of the inductive biases that help us to understand the world. People also tend to take cues from each other about what to attend to, leading to a collective focus of attention that can easily be seen in public discourse. 

Regulatory bodies and academics intent on making AI beneficial to humanity will, by nature, focus their attention on a subset of potential risks related to AI. If we are designing regulations and solutions with superintelligent AI existential risk in mind, they may not be well-suited to addressing other crucial societal concerns, and we may not spend enough time on developing strategies to mitigate those other risks.

One may counter that it should be possible to design regulations that reduce the potential for AI-induced extinction while also attending to some of the immediate, high-probability AI risks. In some ways, this may be so. For example, we can likely all agree that autonomous AI systems should not be involved in the chain of command for nuclear weapons. But given that arguments about rogue superintelligence focus on hypothesized future AI capabilities as well as a futuristic fully automated economy, regulations to mitigate existential risk necessarily focus on future, hypothetical problems, rather than present, existing problems.

For instance, regulations to limit open source release of AI models or datasets used to train them make sense if the goal is to prevent the potential emergence of an autonomous networked AI beyond human control. However, such regulations may end up handicapping other regulatory processes for promoting transparency in AI systems or preventing monopolies. Similarly, if we make it a requirement for researchers to answer questionnaires about how their work may further existential risk, that may prevent them from focusing on more pressing questions about whether their work is reproducible, or whether models reinforce and amplify existing social biases.

A further example: when AI systems model users’ physical, mental, or emotional states, and especially when models can generate language, audio or video that passes the Turing Test (e.g. can pass as human), a number of issues and avenues for potential abuse arise. Some people may conclude that AI is equivalent to a person or somehow omniscient; in fact, focusing on the ultimate danger of extinction by superintelligent AI could easily feed such beliefs.

Most AI researchers would say a discussion about AI personhood is premature, but should it become a real point of discussion, the ethical, legal and economic implications of such a consideration are vast, and are probably not best framed in terms of existential risk. Neither is superintelligence required to pass the Turing Test, as there exist systems today that can do so over the course of a meaningful social interaction, like a phone call. Hence, if our goal is to begin addressing the risks of AI-powered social manipulation, we should tackle the real, existing problem, rather than hypothesizing about existential risk or superintelligent AI.

Our attention is finite, and there is an asymmetry between existential risk and other AI-associated harms, such that prioritizing existential risk may impair our ability to mitigate known risks. The converse is not true.

Bridging The Divide

Another concerning aspect of the current public discussion of AI risks is the growing polarization between “AI ethics” and “AI safety” researchers. The Center for AI Safety’s statement as well as a recent letter from the Future of Life Institute calling for a pause on experiments with giant AI models are conspicuously missing signatures — and as a consequence, buy-in and input — from leaders in the field of AI ethics.

At the same time, many in the AI ethics community appear to broadly critique or dismiss progress in AI generally, preventing a balanced discussion of the benefits that such advances could engender for society. The schism seems odd, given that both communities of researchers want to reduce the potential risks associated with AI and ensure the technology benefits humanity.

Siloing researchers into ideological camps appears to be contributing to a lack of diversity and balance in conversations around the risk of AI. History provides many examples of failures and catastrophes that might have been avoided if a diversity of viewpoints had been considered, or more experts consulted. We have an opportunity to learn from past mistakes and ensure that AI research — especially on imminent and long-term threats — benefits from civil intellectual exchange and viewpoint diversity.

One of the less obvious costs of a polarized dialogue has been the marked absence — or marginalization — of voices that might not fall neatly into either the “safety” or “ethics” camp. For example, indigenous perspectives are rarely incorporated into AI as it stands, but they could help us to develop AI systems that differently model information about the world and forms of cognition, prompting us to think about AI in ways that move beyond the rigid binaries of the human and nonhuman. The continued entrenchment of AI research factions in their separate echo chambers is likely to increase the potential harms of AI and reduce its potential benefits.

Why It Matters

The majority of researchers raising alarms about AI existential risk are likely motivated by real concerns and a sincere desire to mitigate AI-related risks in general. They simply have not considered the unintended consequences of their public declarations.

It is naive to assume that we can publicly raise alarms about superintelligent rogue AI that could kill off the human species without distracting researchers and politicians from other more pressing matters in AI ethics and safety. The nature of superintelligent AI existential risk as a concern is that it is so severe in theory that it could have distorting effects on public understanding, AI research funding, corporate priorities and government regulation.

As it stands, superintelligent autonomous AI does not present a clear and present existential risk to humans. AI could cause real harm, but superintelligence is neither necessary nor sufficient for that to be the case. There are some hypothetical paths by which a superintelligent AI could cause human extinction in the future, but these are speculative and go well beyond the current state of science, technology or our planet’s physical economy.

Despite the recent impressive advances in AI, the real risks posed by such systems are, for the foreseeable future, related to concerns like mass surveillance, economic disruption through automation of creative and administrative tasks, the concentration of wealth and power, the creation of biased models, the use of poorly designed systems for critical roles and — perhaps foremost — humans misusing AI models to manipulate other humans. These are the issues that should be our focus. We need to place greater value on AI safety and ethics research, research to improve our models, regulations to prevent inappropriate deployment of AI and regulations to promote transparency in AI development.

Focusing on these real-world problems — problems that are with us now — is key to ensuring that the AI of our future is one that is ethical and safe. In essence, by examining what’s more probable, we may very well prevent the improbable — an AI-induced extinction event — from ever happening.

The World Needs A Global AI Observatory
https://www.noemamag.com/the-world-needs-a-global-ai-observatory | Thu, 29 Jun 2023

After years of neglecting the development of AI, governments are desperately trying to catch up and work out how to regulate it. At least four different groups are attempting to steer the arguments over how to establish governance priorities. 

The major corporations, not surprisingly, wish to take control of the agenda, promising that agreements between them can prevent the worst abuses. While publicly calling for regulation, they naturally want to minimise any restrictions that might impede their plans and are working on proposals that would just apply to the main incumbents. 

The second group, the leading technologists, have little to say that is practical. Though Americans apparently support a pause in the development of LLMs by a margin of roughly five to one, the technologists have few (if any) ideas about how such a thing might actually be implemented; so far, they have failed to seriously engage with the practical dilemmas of governance.

The third group consists of governments, which have at least moved beyond rhetoric. The European Union has worked on detailed laws that will categorize AI according to risk levels and also require LLMs to disclose their nature, distinguish deep fakes from real material, block illegal content and require copyrighted material used for training to be identified. Most current models are set to fail these tests. China, meanwhile, has introduced strict rules, for example on deep fakes, and has created a potentially powerful regulator in the Cyberspace Administration of China. The U.K., however, is continuing to hope that existing regulators can cope without any new laws or institutions.

Finally, there are transnational gatherings and bodies, where there has been much vague hand-wringing and a striking lack of real proposals. 

“A Global AI Observatory would provide reliable data, models and interpretation to guide policy and broader decision-making about AI.”

Future historians will wonder why so many powerful institutions and intelligent commentators have so dismally failed to generate plausible options. Inevitably, most commentary tries to squeeze the problem into familiar frameworks, whether seeing it as a problem of human or civil rights, copyright or competition law, privacy and data sovereignty, policing and security, or innovation-driven economic growth, with professional bodies wanting to emphasize training and accreditation. None have yet risen to the scale of the challenge of managing a truly general-purpose technology that is already affecting many areas of daily life. 

And although AI has slowly become more politically visible — whether in the form of marches on the streets of London by students in 2020 or the crisis faced by the Netherlands government that same year thanks to a scandal over welfare payments, or the numerous and increasing examples of bias and distortion in algorithms being used to make often vital decisions — the world of politics is still struggling to frame its response.

So, what can be done? The landscape of global AI governance is bound to be quite complex, with many types of risk and opportunity, many domains and many possible governance responses. Recognizing the inherent complexity of a general-purpose technology is the starting point for action. One-dimensional ideas or solutions are bound to be inadequate.

This table captures a few of the dimensions. Imagine these as axes of a three-dimensional cube with many hundreds of cells, each of which might require a different governance response: 

Harm & threat           | Domain    | Response type
Misinformation          | Media     | Legal liability
Bias                    | Politics  | Transparency, explainability, provenance of data
Disruption              | Health    | Standards, guard rails, safety
Disaster                | War       | Bans
Economic impoverishment | Finance   | Sandboxes, anticipatory regulation methods
Monopoly                | Trade     | Licensing (e.g. foundational models)
Abuse                   | Education | Soft law, norms, voluntary codes
Distrust                | Policing  | Data/knowledge, public education
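To make the cube concrete, here is a minimal sketch in Python that enumerates the cells implied by the axes above. The axis values are lifted from the table; the enumeration is purely illustrative, not a proposed taxonomy.

```python
from itertools import product

# Illustrative axis values taken from the table above; a real taxonomy
# would be longer and more carefully defined.
harms = ["misinformation", "bias", "disruption", "disaster",
         "economic impoverishment", "monopoly", "abuse", "distrust"]
domains = ["media", "politics", "health", "war",
           "finance", "trade", "education", "policing"]
responses = ["legal liability", "transparency requirements", "standards",
             "bans", "sandboxes", "licensing", "soft law", "public education"]

# Every (harm, domain, response) triple is one "cell" that might call for
# its own governance judgment.
cells = list(product(harms, domains, responses))
print(len(cells))   # 8 x 8 x 8 = 512 cells
print(cells[0])     # ('misinformation', 'media', 'legal liability')
```

Even with only eight values per axis, the lattice already contains 512 cells; richer taxonomies multiply the count further.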

Within a few years, we are likely to have an equally complex lattice of responses, ranging from standards to monitoring capacities, regulations at different levels, legal norms, anti-trust measures and more. My expectation is that the world will create many different types of AI regulators, often with broad powers (since attempting to prescribe in detail won’t work against the pace of change), and often with a remit to discuss and explain the dilemmas to the public.

The great paradox of a field founded on data is that so little is known about what’s happening in AI — and what might lie ahead. No institutions exist to advise the world, assessing and analyzing both the risks and the opportunities.

To address this gap and illuminate a plausible step that the world could take now as a necessary condition for more serious regulation in the future, I have been working with colleagues at the Massachusetts Institute of Technology, the University of Oxford, the Collective Intelligence Project, Metagov and the Cooperative AI Foundation to design what we call a Global AI Observatory (GAIO) to provide the necessary facts and analysis to support decision-making. 

The world already has a model for this: the Intergovernmental Panel on Climate Change (IPCC). Set up in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information and pooled judgment of potential scenarios to guide the development of climate policies. Over the last few decades, many new institutions have emerged at a global level that focus on data and knowledge to support better decision-making — from biodiversity to conservation — but none exist around digital technologies.

The idea of setting up a similar body to the IPCC for AI that would provide a reliable basis of data, models and interpretation to guide policy and broader decision-making about AI has been in play for several years. But now the world may be ready thanks to greater awareness of both the risks and opportunities around AI.

A GAIO would have to be quite different from the IPCC in some respects, having to work far faster and in more iterative ways. But ideally, like the IPCC, it would work closely with governments to guide action.

Quite a few organizations collect valuable AI-related metrics. Some national governments track developments within their borders, there are businesses pulling together industry data, and organizations like the OECD’s Artificial Intelligence Policy Observatory map what is happening with national AI policies and trends. Yet much about AI remains opaque, often deliberately. It is impossible to sensibly regulate what governments don’t understand. GAIO could fill this gap through six main areas of activity. 

“The great paradox of a field founded on data is that so little is known about what’s happening in AI — and what might lie ahead.”

First, it could set up a global, standardized incident reporting database concentrating on critical interactions between AI systems and the real world. One pressing example is bio-risk, where there is an obvious danger of AI being used to create dangerous pathogens. We need better ways to monitor such incidents. Similarly, examples of misuse of algorithms — such as Rotterdam’s recent issues over welfare payments — would be mapped and documented. A shared database of incidents and risks would pull together the relevant facts about applications, their impacts and metadata. Standardized incident reports are a basic starting point for better global governance and could reduce risks of miscommunication and arms races over AI.
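To give a sense of what a standardized incident report might contain, here is a minimal sketch of a single record. The field names and example values are hypothetical; no such schema has been agreed.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AIIncidentReport:
    """Hypothetical fields a shared global incident database might standardize."""
    system_name: str        # the AI application involved
    deployer: str           # organization operating the system
    domain: str             # e.g. "welfare payments", "biosecurity"
    harm_type: str          # e.g. "biased risk scoring"
    date_observed: date
    people_affected: int    # rough estimate of scale
    description: str        # what happened, in plain language
    metadata: dict = field(default_factory=dict)  # model version, audit links, etc.

# Purely illustrative entry, loosely inspired by the welfare-payment
# example mentioned above; every value here is made up.
report = AIIncidentReport(
    system_name="fraud risk-scoring algorithm",
    deployer="municipal welfare agency",
    domain="welfare payments",
    harm_type="biased risk scoring",
    date_observed=date(2023, 1, 1),
    people_affected=10_000,
    description="Algorithm disproportionately flagged certain groups for fraud checks.",
)
print(asdict(report)["harm_type"])  # biased risk scoring
```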

Second, the GAIO could organize a registry of crucial AI systems — again, a basic precondition for more effective governance. It would prioritize the AI applications with the largest social and economic impacts — the ones with the biggest numbers of people affected, person-hours of interaction and the highest stakes. It would ideally also set rules for providing access to models to allow for scrutiny. Singapore already has a registry of AI systems and the U.K. government is considering something similar, but at some point, similar approaches need to become global.

Third, the GAIO would bring together a shared body of data and analysis of the key facts about AI: spending, geography, key fields, uses, applications. There are many sources for these, but no one has brought them together in easily accessible forms, and much about investment remains opaque.

Fourth, the GAIO would bring together global knowledge about the impacts of AI on particular fields through working groups covering topics such as labor markets, education, media and healthcare. These groups would gather data and organize interpretation and forecasting, for example on the potential effects of LLMs on jobs and skills, which is becoming a crucial question across many countries. The GAIO would aim to gather data on both the positive and negative impacts of AI, ranging from the economic value created by AI products to the potentially negative effects of AI-enabled social media on mental health and political polarization.

Fifth, the GAIO could offer options for regulation and policy for national governments and perhaps also legislative assistance, providing model laws and rules that could be adapted to different contexts.

Lastly, the GAIO would orchestrate global debate through an annual report on the state of AI that analyzes key issues, patterns that arise and choices governments and international organizations need to consider. As with the IPCC, this could include a rolling program of predictions and scenarios, with a particular emphasis on technologies that could go live, or come to market, in the next few years, building on existing efforts such as the AI Index produced by Stanford University.

“Shared knowledge and analysis are surely the preconditions for nations to decide their own priorities.”

To do its work the GAIO would need to innovate, learning from examples like the IPCC and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services — but going further, including through the use of new collective intelligence methods to bring together inputs from thousands of scientists and citizens, which is essential in tracking emergent capabilities in a fast-moving and complex field. In addition, it could introduce whistleblowing methods similar to the U.S. government’s generous incentives for people to report on harmful or illegal actions.

To succeed, the GAIO would have to be legitimate, just as the IPCC has had to win legitimacy over the last few decades. Some of that legitimacy can come from the support of governments and some from the endorsement of scientists. But much of it will come from the quality of what it does and its ability to maintain a sharp focus on facts and analysis more than prescription, which would be left in the hands of governments. It would ideally also have formal links to other bodies that have a clear role in this space, like the International Telecommunication Union, the Institute of Electrical and Electronics Engineers, UNESCO and the International Science Council.

The AI community and businesses using AI tend to be suspicious of government involvement, often viewing it solely as a source of restrictions. But the age of self-governance is now over. What’s proposed here is an organization that would exist partly for governments but with the primary work done by scientists, drawing on successful attempts to govern many other technologies, from human fertilization and cloning to biological and nuclear weapons.

In recent years, the U.N. system has struggled to cope with the rising influence of digital technologies. It has created many committees and panels, often with grand titles, but generally with little effect. The greatest risk now is that there will be multiple unconnected efforts, none of which achieve sufficient traction. The media and politicians have been easily distracted by wild claims of existential risk, and few feel confident to challenge the major corporations, especially when the corporations threaten to cut their citizens off from the benefits of OpenAI or Google. 

So, legitimating a new body will not be easy. The GAIO will need to convince key players from the U.S., China, the U.K., the EU and India, among others, that it will fill a vital gap, and will need to persuade the major businesses that their attempts at controlling the agenda, without any pooling of global knowledge and assessment, are unlikely to survive for long. The fundamental case for its creation is that no country will benefit from out-of-control AI, just as no country benefits from out-of-control pathogens.

How nations respond is bound to vary. China, for example, recently proposed a ban on LLMs with “any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.” The U.S. is likely to want maximum freedom. 

But shared knowledge and analysis are surely the preconditions for nations to decide their own priorities. Unmanaged artificial intelligence threatens our ability to think, act and thrive, potentially making it impossible to distinguish truth from lies. Pooling knowledge in intelligent ways is the precondition for better harnessing the benefits of artificial intelligence and avoiding its many dangers.

With thanks to the group who collaborated with me on the first version of the GAIO proposal: Divya Siddarth and Saffron Huang from the Collective Intelligence Project; Thomas Malone at M.I.T.; Joshua Tan at the Metagovernance Project; and Lewis Hammond from Cooperative AI.

Approaching The Infopocalypse
https://www.noemamag.com/approaching-the-infopocalypse | Fri, 23 Jun 2023

Without institutions and practices that can establish and preserve the credibility of information, there is no solid ground for democratic discourse. What we will see instead is an “arms race of ploy and counterploy” in which the whole notion of objectivity is a casualty of the battle of truths, as Daniel Dennett, the philosopher of consciousness, has put it.

Indeed, we are already seeing all that is solid melting into information we no longer know if we can trust. As another philosopher, Byung-Chul Han, observed in an interview with Noema, democracy requires a common narrative of binding values, ideals and shared convictions. But “the informatization of reality leads to its atomization — separated spheres of what is thought to be true. … Bits of information provide neither meaning nor orientation. They do not congeal into a narrative. They are purely additive. From a certain point onward, they no longer inform — they deform.”

Today, he argues, democracy has given way to “infocracy” as peer-to-peer connectivity “redirects the flows of communication. Information is spread without forming a public sphere. It is produced in private spaces and distributed to private spaces. The web does not create a public.”

Writing in Noema, Renée DiResta of the Stanford Internet Observatory points out that the elite gatekeepers of yesterday’s mass media were often castigated for “manufacturing consent” among a “phantom” public by leaving too many voices out. What may be worse is that the structural fragmentation of today’s digital media ecosystem is manufacturing a level of dissensus detrimental to the possibility of arriving at consensually agreed truths necessary to hold any society together.

DiResta brilliantly exposes the dynamic behind this splintering at scale. She shows how the incentive for siloed social networks to monetize attention has empowered a new kind of distributed propaganda crafted to fit niche audiences living in their own reality.

Niche Propaganda Thrives On Distrust

“Propaganda,” says DiResta, “is information with an agenda, delivered to susceptible audiences to serve the objectives of the creator. Anyone so inclined can set up an account and target an audience, producing spin to fit a preferred ideological agenda. … Rather than persuading a mass audience to align with a nationally oriented hegemonic point of view … the niche propagandists activate and shape the perception of niche audiences. The propaganda of today entrenches fragmented publics in divergent factional realities, with increasingly little bridging the gaps.” 

As DiResta sees it, the new propagandists thrive on the trope of being excluded and persecuted to attract audiences of the alienated who believe they have “exited the Matrix” of the mainstream media, government and Big Tech conspiring to silence the people.

“Sustaining attention in a highly competitive market,” DiResta argues, “practically requires that niche propaganda be hyper-adversarial, as often as possible. The rhetorical style is easily recognizable: They are lying to you, while I have your best interests at heart.”

These “media-of-one,” she continues, “are incentivized to increase the fracturing of the public and perpetuate the crisis of trust, in order to ensure that their niche audience continues to pay them.” Their success has propelled Davids into Goliaths of influence that are eclipsing the old channels of information.

Infopocalypse

As these new niche propagandists overshadow any bridging media, Aviv Ovadya of Harvard’s Berkman Klein Center fears we are headed into “a catastrophic failure of the marketplace of ideas” with “no one believing anything or everyone believing lies.” He calls this “the infopocalypse.”

As Dennett and Han have understood, democracy cannot survive this failure of the marketplace of ideas because it disables the formation of any shared ground where competing propositions can be tested against each other in the full gaze of the body politic as a whole.

What can be done? The cat of distributed social networks is out of the bag; we can never revert to a media ecosystem where custodians of perception edit out voices they don’t want to hear in order to manufacture phantom consent. The point of challenge must be where information meets the political space.

As we have written in Noema, new mediating institutions, such as citizens’ assemblies, that encourage and enable civil discourse and consensus formation at the same virtual scale as social networks, are more necessary than ever because the forces of fragmentation have never been greater. Mending the breach of distrust between the public and institutions of self-government in the digital age can only happen by absorbing the wired activation of civil society into governance through integrating connectivity with common platforms for deliberation. 

Just as republics have historically sustained themselves by creating countervailing institutions to check power when too much of it is concentrated in one place, so too such checks are needed in the digital age when power is so distributed that the public sphere itself is disempowered.

The New Media Goliaths
https://www.noemamag.com/the-new-media-goliaths | Thu, 01 Jun 2023

One of the more remarkable artifacts of late-stage social media is the indelible presence of a particular character: the persecution profiteer. They are nearly unavoidable on Twitter: massive accounts with hundreds of thousands to millions of followers, beloved by the recommendation engine and often heavily monetized across multiple platforms, where they rail against the corporate media, Big Tech and elites. Sometimes, the elites have supposedly silenced them; sometimes, they’ve supposedly oppressed you — perhaps both. But either way, manipulation is supposedly everywhere, and they are supposedly getting to the bottom of it. 

Many of these polemicists rely on a thinly veiled subtext: They are scrappy truth-tellers, citizen-journalist Davids, exposing the propaganda machine of the Goliaths. That subtext may have been true in last century’s media landscape, when independent media fought for audience scraps left by hardy media behemoths with unassailable gatekeeping power. But that all changed with the collapse of mass media’s revenue model and the rise of a new elite: the media-of-one. 

The transition was enabled by tech but realized by entrepreneurs. Platforms like Substack, Patreon and OnlyFans offered infrastructure and monetization services to a galaxy of independent creators — writers, podcasters and artists — while taking a cut of their revenue. Many of these creators adopted the mantle of media through self-declaration and branding, redefining the term and the industry. Many were very talented. More importantly, however, they understood that creating content for a niche — connecting with a very specific online audience segment — offered a path to attention, revenue and clout. In the context of political content in particular, the media-of-one creators offered their readers an editorial page, staffed with one voice and absent the rest of the newspaper. 

The rise of a profitable niche media ecosystem with a reach commensurate with mass media has been a boon for creators and consumers alike. YouTube, Instagram and TikTok have enabled sponsorships and ad-revenue sharing for quite some time — spawning a generation of influencers — but patronage opened additional paths to success. A tech blogger can start a podcast about Web3 with no infrastructural outlay, reaching their audience in a new medium. A Substack newsletter devoted to political history can amass thousands of subscribers, charge $5 a month, and deliver a salary of up to seven figures for its author. Pop culture pundits can earn a living producing content on Patreon, and web-cam adult performers can do the same on OnlyFans. Even Twitter has launched subscriptions.
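The arithmetic behind that seven-figure claim is easy to sketch. The figures below are illustrative, assuming the platform keeps roughly 10% of each subscription and ignoring payment-processing fees and taxes.

```python
def annual_newsletter_income(subscribers: int, monthly_price: float,
                             platform_cut: float = 0.10) -> float:
    """Gross annual income to the author after the platform's share."""
    return subscribers * monthly_price * 12 * (1 - platform_cut)

# A $5/month newsletter with 20,000 paying subscribers clears seven figures.
print(annual_newsletter_income(20_000, 5.00))  # 1080000.0
```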

Whatever the kink — from nudes to recipes to conspiracy theories — consumers can find their niche, sponsor it and share its output. This ecosystem has given rise to people with millions of followers, who shape the culture and determine what the public talks about each day.  

Well, their public, anyway. 

The Rise Of Niche Propaganda

Like the media, the public has increasingly fragmented. The internet enabled the flourishing of a plethora of online subcultures and communities: an archipelago of bespoke and targetable realities. Some of the most visible are defined by their declining trust in mass media and institutions. Recognizing the opportunity, a proliferation of media-of-one outlets have spun up to serve them.

In fact, the intersection of a burgeoning niche media ecosystem and a factionalized public has transformed precisely the type of content that so concerns the persecution profiteers: propaganda. Propaganda is information with an agenda, delivered to susceptible audiences to serve the objectives of the creator. Anyone so inclined can set up an account and target an audience, producing spin to fit a preferred ideological agenda. Those who achieve a degree of success are often increasingly cozy with politicians and billionaire elites who hold the levers of power and help advance shared agendas. In fact, the niche propagandists increasingly have an advantage over the Goliaths they rail against. They innately understand the modern communication ecosystem on which narratives travel and know how to leverage highly participatory, activist social media fandoms to distribute their messages; institutions and legacy media typically do not. 

Although the mechanics of who can spread propaganda, and how, have shifted significantly over the last two decades, public perception of the phenomenon has not. People discussing concerns about propaganda on social media frequently reference the idea of a powerful cabal composed of government, media and institutional authorities, manipulating the public into acquiescing to an elite-driven agenda. This misperception comes in large part from popular understanding of a theory presented by Noam Chomsky and Edward Herman in their 1988 book, “Manufacturing Consent: The Political Economy of the Mass Media.”

“Manufacturing Consent” proposed a rather insidious process by which even a free press, such as that of the United States, filters the information that reaches the public by way of self-censorship and selective framing. Even without the overt state control of media present in authoritarian regimes, Chomsky and Herman argued, American media elites are influenced by access, power and money as they decide what is newsworthy — and thus, determine what reaches the public. Chomsky and Herman identified five factors, “five filters” — ownership, advertising, sourcing, catching flak, and fear — that comprised a system of incentives that shaped media output. 

Media “ownership” (the first filter) was expensive, requiring licenses and distribution technology — and so, the ecosystem was owned by a small cadre of the wealthy who often had other financial and political interests that colored coverage. Second, advertising meant that media was funded by ad dollars, which incentivized it to attract mainstream audiences that advertisers wanted and to avoid topics — say, critiques of the pharmaceutical industry — that might alienate them. Third, “sourcing” — picking experts to feature — let media elevate some perspectives while gatekeeping others. Fourth, fear of catching “flak” motivated outlets to avoid diverging from approved narratives, which might spark lawsuits or boycotts. And finally, “fear” highlighted the media’s capacity to cast people in the role of “worthy” or “unworthy” victims based on ideology. 

Throughout the 20th century, Chomsky and Herman argued, these incentives converged to create a hegemonic media that presented a filtered picture of reality. Media’s self-interest directly conflicted with the public interest — a problem for a democratic society that relied on the media to become informed. 

But legacy media is now only half the story, and the Goliaths are no longer so neatly distinguished. Technology reduced costs and eliminated license requirements, while platform users themselves became distributors via the Like and Share buttons. Personalized ad targeting enabled inclined individuals to amass large yet niche audiences who shared their viewpoints. The new elites, many of whom have millions of followers, are equally capable of “manufacturing consent,” masquerading as virtuous truth-tellers even as they, too, filter their output in accordance with their incentives.

However, something significant has changed: Rather than persuading a mass audience to align with a nationally oriented hegemonic point of view — Chomsky’s concern in the 1980s — the niche propagandists activate and shape the perception of niche audiences. The propaganda of today entrenches fragmented publics in divergent factional realities, with increasingly little bridging the gaps. 

“Positioning of niche media as a de facto wholesome antithesis to the ‘mainstream propaganda machine’ — Davids fighting Goliaths — is a marketing ploy.”

From Five Filters To Four Fire Emojis

As technology evolved and media and the public splintered, the five filters mutated. A different system of incentives drives the niche media Goliaths — we might call it the “four fire emoji” model of propaganda, in homage to Substack’s description of criteria it used to identify writers most likely to find success on its platform. 🔥🔥🔥🔥

In its early days of operation, Substack, which takes 10% of each subscription, reached out to media personalities and writers from traditional outlets, offering them an advance to start independent newsletters. To assess who might be a good investment, the company ranked writers from one to four fire emojis, depending on their social media engagement. Someone with a large, highly engaged following was more likely to parlay that attention into success on Substack. There is no algorithmic curation and there are no ads; each post by the author of a newsletter is sent to the inbox of all subscribers. Substack describes its platform as a “new economic engine for culture,” arguing that authors might be less motivated to replicate the polarization of social media if they are paid directly for their work.

But the four fire emoji rubric inadvertently lays bare the existential drive of niche media: the need to capture attention above all else, as technology has driven the barrier to entry toward zero and the market is flooded with strivers. Getting attention on social media often involves seizing attention, through sensationalism and moral outrage. Niche media must convert that attention into patronage. A passionate and loyal fandom is critical to success because the audience facilitates virality, which delivers further attention, which can be parlayed into clout and money.

There is little incentive to appeal to everyone. In a world where attention is scarce, the political media-of-one entrepreneurs, in particular, are incentivized to filter what they cover and to present their thoughts in a way that galvanizes the support of those who will boost them — humans and algorithms alike. They are incentivized to divide the world into worthy and unworthy victims. 

In other words, they are incentivized to become propagandists. And many have. 

“It seems likely that at least some of the audience believes that they have escaped propaganda and exited the Matrix, without realizing that they are simply marinating in a different flavor.”

Consider a remarkable viral story from January 2023. Right-wing commentator Steven Crowder published a video accusing a major conservative news outlet (later revealed to be The Daily Wire) of offering him a repressive contract — a “slave contract,” as he put it, that would penalize him if the content he produced was deemed ineligible to monetize by major platforms like YouTube. “I believe that many of those in charge in the right-leaning media are actually at odds with what’s best for you,” he told his nearly 6 million YouTube subscribers. Audiences following along on Twitter assigned the scandal a hashtag: #BigCon. 

Underlying the drama was classic subtext: Crowder, the David, pitted against conservative media Goliaths. And yet, the contract Crowder derided as indentured servitude would have paid him $50 million.

Sustaining attention in a highly competitive market practically requires that niche propaganda be hyper-adversarial, as often as possible. The rhetorical style is easily recognizable: They are lying to you, while I have your best interests at heart. 

As it turns out, perpetual aggrievement at elites and the corporate profiteering media can be quite lucrative. On Substack, pseudoscience peddler Joseph Mercola touts his “Censored Library” to tens of thousands of paid subscribers at $5/month, revealing “must-read information” that the medical establishment purportedly hides from the public. Several prominent vaccine skeptics — who regularly post about how censored they are — are also high on the Substack leaderboard and in the tens-of-thousands-of-paid-subscribers club.

Matt Taibbi, a longtime journalist who’s also a lead Substack writer, devotes many posts to exposing imaginary cabals for an audience that grew significantly after billionaire Elon Musk gave him access to company emails and other internal documents. His successful newsletter solicited additional contributors: “Freelancers Wanted: Help Knock Out the Mainstream Propaganda Machine.” The patrons of particular bespoke realities reward the writers with page views and subscriber dollars; prominent members of political parties cite the work or move it through the broader partisan media ecosystem.

“The manufacture of consent is thriving within each niche.”

It is an objectively good thing that the five filter model is increasingly obsolete. Reducing the barriers to ownership, in particular, enabled millions of voices to enter the market and speak to the public, and that is an unambiguously good thing. But the positioning of niche media as a de facto wholesome antithesis to the “mainstream propaganda machine” — Davids fighting Goliaths — is a marketing ploy. The four fire emoji model simply incentivizes a more factional, niche propaganda. 

Since the model relies on patronage, rather than advertising, the new propagandists are incentivized to tell their audiences what they want to hear. They are incentivized to increase the fracturing of the public and perpetuate the crisis of trust, in order to ensure that their niche audience continues to pay them, rather than one of their nearest neighbors (or, God forbid, a mainstream outlet). Subscribers don’t have unlimited funds; they will pick a handful of creators to support, and the rest will struggle. 

As attention and trust have fragmented, “sourcing” has also reoriented to ensure that writers feature people who are approved within the bespoke reality they target; for example, there are several different universes of COVID experts at this point. “Flak” is now a veritable gift: Rather than being afraid of it, the patronage propagandists are incentivized to court it. Attacks from ideological outsiders are a boon: “Subscribe to help us fight back!” So much of the media-of-one content is defined by what it is in opposition to — otherwise, it loses the interest of its audience. Partisan outlets have long played the fear game, as Chomsky pointed out in the 1980s, encouraging hatred of the other side — but now, the “unworthy victim” is your neighbor, who may have only moderately different political leanings.

The Effect: Lost Consensus, Endless Hostility

The devolution of propaganda into niches has deep and troubling implications for democratic society and social cohesion. It was Walter Lippmann, a journalist and early scholar of propaganda, who coined the phrase “the manufacture of consent” of the governed in 1922, using it to describe a process by which leaders and experts worked alongside media to inform the public about topics they did not have the time or capacity to understand. The premise was paternalistic at best.

However, Lippmann also had reservations about the extent to which “the public” existed; the idea of an omnicompetent, informed citizenry powering functional democracy was an illusion, he believed, and the “public” a phantom. People, Lippmann wrote, “live in the same world, but they think and feel in different ones.” Propaganda was manipulative, even damaging and sinister, Lippmann thought, but he also believed that the manufacture of consent was to some extent necessary for democratic governance, in order to bridge divides that might otherwise render democracy dysfunctional. 

Lippmann’s intellectual rival on the topics of propaganda, the public and democracy was the eminent philosopher John Dewey. Unlike Lippmann, Dewey believed “the public” did exist. It was complicated, it was chaotic — but it was no phantom. Dewey also rightly bristled at the idea of a chosen few wielding propaganda to shape public opinion; he saw it as an affront to true democracy. Instead, Dewey saw the press — when operating at its best — as a tool for informing and connecting the public, enabling people to construct a shared reality together.       

Though at odds in many respects, both Lippmann and Dewey acknowledged the challenges of a fractured public. The two men saw a dissonant public as both a natural state and as a barrier to a functioning, safe and prosperous society. Though they differed greatly in their proposed approaches, they agreed on the need to create harmony from that dissonance.     

One hundred years later, both approaches seem like an impossibility. It is unclear what entities, or media, can bridge a fragmented, polarized, distrustful public. The incentives are driving niche media in the opposite direction.

“Perhaps by highlighting the new incentives that shape the media-of-one ecosystem, we may reduce the public’s susceptibility to the propaganda it produces.”

The propagandists of today are not incentivized to create the overarching hegemonic national narrative that Chomsky and Herman feared. Rather, their incentives drive them to reinforce their faction’s beliefs, often at the expense of others. Delegitimization of outside voices is a core component of their messaging: The “mainstream” media is in cahoots with the government and Big Tech to silence the people, while the media-of-one are independent free-thinkers, a disadvantaged political subclass finally given access to a megaphone … though in many cases, they have larger audiences and far larger incomes. It seems likely that at least some of the audience believes that they have escaped propaganda and exited the Matrix, without realizing that they are simply marinating in a different flavor.

We should not glorify the era of a consolidated handful of media properties translating respectable institutional thinking for the masses — consolidated narrative control enables lies and deception. But rather than entering an age of “global public squares” full of deliberative discourse and constructive conversation, we now have gladiatorial arenas in which participants in niche realities do battle. Our increasingly prominent media-of-one can’t risk losing the attention game in the weeds of nuance. We have a proliferation of irreconcilable understandings of the world and no way of bridging them. The internet didn’t eliminate the human predilection for authority figures or informed interpretations of facts and narratives — it just democratized the ability to position oneself in the role. The manufacture of consent is thriving within each niche. 

“Manufacturing Consent” ended with an optimistic take: that what was then a burgeoning cable media ecosystem would lead to more channels with varying perspectives, a recognition that truly independent and non-corporate media does exist and that it would find ways to be heard. But Chomsky and Herman also cautioned that if the public wants a news media that serves its interests rather than the interests of the powerful, it must go find it. Propaganda systems are demonstrably effective precisely because breaking free of such a filtered lens requires work. Perhaps by articulating to today’s public how the system has shifted and highlighting the new incentives that shape the media-of-one ecosystem, we may reduce the public’s susceptibility to the propaganda it produces.

The illustration above was first published in FORESIGHT Climate & Energy’s Efficiency issue.

What AI Means For Animals
https://www.noemamag.com/what-ai-means-for-animals | Tue, 18 Apr 2023

Credits

Peter Singer, a professor of bioethics at Princeton University, was the 2021 recipient of the Berggruen Prize for Philosophy and Culture. His books include “Animal Liberation,” “Practical Ethics” and “The Life You Can Save.”

Tse Yip Fai has been the China strategy consultant for Mercy For Animals and is currently a research assistant for Singer’s project on the ethics of artificial intelligence concerning nonhuman animals.

The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals, emitting sounds to interact with them — giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies or catching and separating them.
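As a rough sketch of the kind of model being described, the toy example below fits an ordinary least-squares regression relating physical measurements to a growth-rate outcome. The data, the parameter names and the “true” relationship are all invented for illustration; this does not reproduce any vendor’s system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic records: body temperature (°C), weight (kg), feed intake (kg/day).
temperature = rng.normal(39.0, 0.5, n)
weight = rng.normal(80.0, 10.0, n)
feed = rng.normal(2.5, 0.3, n)

# Invented relationship for the toy data: growth falls with fever,
# rises with feed intake.
growth = 0.8 - 0.3 * (temperature - 39.0) + 0.2 * feed + rng.normal(0, 0.05, n)

# Ordinary least squares: how do the measured parameters relate to growth?
X = np.column_stack([np.ones(n), temperature, weight, feed])
coefficients, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(dict(zip(["intercept", "temperature", "weight", "feed"],
               coefficients.round(3))))
```

Swap the growth outcome for mortality or profit per animal and the same fitting step yields the profitability-oriented models the paragraph describes.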

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences. When diseases are less frequent, more predictable and more controllable, factory farms can crowd more animals into confined spaces, thus increasing their profits. In fact, several AI companies openly advertise being able to pack more animals into a given space as a benefit of installing their systems.

“Factory farming is morally indefensible and needs to be replaced as soon as possible by less cruel and more efficient ways of producing food — not supported further by AI.”

In addition, when factory farms become more efficient, the price of animal products falls, leading to more demand for meat and more animals raised in factory farms. This will also make it harder for plant-based analogues and cultivated meat — meat produced from animal cells grown in bioreactors — to eventually replace factory farmed products, which could increase the scale and extend the lifetime of factory farming. This is a moral atrocity because of what it does to animals, quite apart from its disastrous consequences for the environment.

Each year, factory farming brings into existence, rears and kills more than 70 billion birds and mammals and nearly 100 billion finfish. These animals live in crowded conditions without any consideration for their welfare beyond what is essential for profitability. Male piglets often have their testicles ripped out and their sensitive tails cut off without anesthesia. To prevent the dominant chickens pecking the weaker birds to death in unnaturally crowded conditions, the ends of their beaks — a sensitive part of their anatomy, filled with nerve endings — are often cut off with a hot blade, without anesthesia. Male chicks of the laying breeds have no commercial value and are ground up or suffocated immediately after birth.

We could go on. But clearly, factory farming is morally indefensible and needs to be replaced as soon as possible by less cruel and more efficient ways of producing food — not supported further by AI.

History Repeating Itself

We can better understand this dynamic by looking at an earlier farming innovation that was touted as benefiting animals but ultimately enabled their mistreatment. In the 1930s, Bayer advertised an antibacterial called Prontosil to livestock producers in Britain, and its early success attracted other companies and countries to follow suit. In 1948, Merck & Co. obtained an official license in the U.S. to add sulfaquinoxaline to poultry feed, and the practice of lacing feed with antibiotics soon spread to Japan, China and the Soviet Union.

Antimicrobial drugs (AMDs) quickly became popular due to their ability not just to cure diseases, but to prevent them as well. The preventative use of AMDs lowered the fatality rate of farmed animals and reduced costs and risks for animal-raising operations. They were even found to make animals grow faster — some suggested that the drugs reduced the total amount of energy the animals used for fighting pathogens.

It might have seemed that these drugs helped both humans and animals. But some producers realized that the drugs made it possible to raise more animals in the same amount of space. “Historically, contagious diseases had limited how many animals could be held in any single flock or herd,” Wessels, a farm that’s been around since the 1920s, reports on its website. “Too many animals, and the entire group could be wiped out by disease. Antibiotics changed all of that. Quickly, the size of poultry, swine, dairy and cattle feedlots rose dramatically and per-unit production costs fell.” This allowed animals to be kept in ever more crowded and filthy situations, unsuited for their social and behavioral needs. High profits drove more farms to adopt these practices, and the use of AMDs is now standard in factory farming across the globe.

It is fair to say that AMDs were a big part of the reason humans shifted from small-scale animal raising operations to factory farming. That negative consequence outweighed, by far, all the short-term welfare benefits AMDs had initially provided for the animals. In a similar way, the involvement of AI in factory farming is likely not only to make factory farming worse for animals but also to make it harder to replace with less cruel, more environmentally friendly, safer ways of producing protein. To adapt the Spanish philosopher George Santayana’s famous saying: Those who cannot remember our past mistreatment of animals are condemned to repeat it.

(Illustration by Noah Campeau for Noema Magazine)
A Modern Case Study

China’s fish farming industry, to take one example, is primed to be revolutionized by AI. The world’s first treatise on how to raise fish in non-natural systems, Master Taozhu’s “Book of Fish Raising” (養魚經), was written in China more than 2,400 years ago. It discusses the raising of carp, covering details such as pond design, breeding methods and expected profits. It was so forward-looking — in the technological sense, not the ethical — that the book still describes the basic principles of how most people in China, and perhaps in other developing Asian countries, raise their fish.

The ongoing use of older technologies leaves fish farms such as these vulnerable to fluctuating temperatures, poor water quality and diseases. Until recently, even if the operators invested in and knew how to use modern monitoring equipment, they still needed to check it frequently, day and night. And people within the aquaculture industry in Asia frequently cite the difficulty of passing along their techniques to the next generation. As a result, those who raise fish often face high mortality rates in their ponds.

AI would change all that. The equipment, which may be connected to a central computer or cloud server, can train image recognition algorithms with machine-learning techniques to identify parasites, monitor the health of the fish, detect whether they are under- or overfed, and automatically stop, decrease or increase feeding. Diseases can not only be identified but even predicted, reducing human error and saving the operators time. AI can also address the issue of passing on expertise by learning from human operators who are proven to outperform others in the industry and by capturing as data what these outperformers do but cannot explain verbally.
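A minimal sketch of the kind of automated feedback loop being described, with a stub standing in for a trained image-recognition model; the thresholds, labels and function names are all hypothetical.

```python
def classify_frame(frame) -> dict:
    """Stand-in for a trained vision model watching the pond."""
    return {"uneaten_feed_ratio": 0.35, "parasite_suspected": False}

def adjust_feeder(current_rate: float, observation: dict) -> float:
    """Raise or lower the feed rate based on what the model reports."""
    if observation["parasite_suspected"]:
        return 0.0                    # stop feeding and alert the operator
    if observation["uneaten_feed_ratio"] > 0.25:
        return current_rate * 0.8     # pellets going uneaten: feed less
    if observation["uneaten_feed_ratio"] < 0.05:
        return current_rate * 1.2     # everything eaten: feed a bit more
    return current_rate

rate = 10.0  # kg of feed per hour, illustrative
rate = adjust_feeder(rate, classify_frame(frame=None))
print(rate)  # 8.0
```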

However, these AI improvements aren’t going to change the basic business model of factory farming: pack in as many fish as possible, with as little food as possible, with no regard for their wellbeing. These AI improvements won’t prevent farmers from killing fish in a torturous way — cutting, descaling and skinning them while they are still alive, which is common practice. Instead, the AI simply increases their efficiency and profits.

An Ethical Approach

What, then, should the AI industry do? First, AI companies, scientists and institutions should refrain from supporting or participating in AI projects that directly help the factory farming industry. Instead, they might consider opportunities to use AI to assist the development of plant-based meat and cultivated meat, both of which have huge potential to reduce or even replace the demand for factory farmed products and both of which are already benefiting from AI. 

For example, NotCo, a plant-based alternative company, uses machine-learning natural language processing to find combinations of plant ingredients that are likely to match the tastes of animal products. Equinom, a foodtech company that only serves plant-based products, uses AI to find the right variety of seed to grow the crops used for plant-based products, so that they will have the desired taste, texture and nutrition profile. On the cultivated meat side, Animal Alternative Technologies uses AI to model how parameters such as nutrients, flow rates, pH and temperature affect efficiency and production costs.

“The AI industry will hire exactly zero employees who are nonhuman animals, so they will have no representatives if humans do not stand up to represent them.”

Some may object that we cannot expect AI companies to forgo the economic benefits of involvement in a large and entirely legal industry — but it is not clear that this involvement will benefit those companies in the long run. Factory farming already has a negative image in many countries, and both the animal movement and the environmental movement are strongly opposed to it. Many wealthy investors — and some governments — are putting substantial sums into trying to replace factory farming. Temasek, which is 100% owned by the Singaporean government, invested in plant-based companies and a platform to support alternative proteins. It also invested in Upside Foods, a cultivated meat company, together with the government-owned Abu Dhabi Growth Fund. The Danish government has said that it will invest over $100 million into a new Plantefonden, or Plant Fund. Some famous investors, such as Li Ka-shing, Bill Gates, Prince Khaled bin Alwaleed Al Saud and Natalie Portman, are putting their money into alternatives to meat, too.

Plant-based and cultivated meats will not only be more ethical, more sustainable, more environmentally friendly and better for public health — they also have the potential to be more economical. And if factory farming is, like the tobacco and coal industries, an industry “against the tide of history,” involvement with it could hurt the image of large tech companies and reduce their other business opportunities. 

Concerns about the ethics of AI have already given rise to initiatives with names like “AI for humanity,” “AI for humans” and “AI for social good.” We would love to see decision-makers expand this thinking to be more inclusive, with initiatives such as “AI for animals” or, better still, “AI for all sentient beings.” More groups should follow the lead of the Montreal Declaration for a Responsible Development of AI, which states: “The development and use of artificial intelligence systems must permit the growth of the well-being of all sentient beings.”

There is an urgent need to expand AI ethics so that it considers nonhuman life. And it is not only AI companies and AI scientists who are responsible for what is happening. There are several other human stakeholders, including philosophers working in AI ethics, NGOs, policymakers and lawmakers. Consumers should also understand and take responsibility for the consequences of their consumption choices.

Animals, unlike most humans, cannot participate in the design of AI. They cannot have their own social movements or tell us that a particular AI is harming them, and they will never design AI to benefit themselves. The AI industry will hire exactly zero employees who are nonhuman animals, so they will have no representatives if humans do not stand up to represent them. We need to do it — it’s on us all.
