The Sounds Of Invisible Worlds
More than 400 years ago in the small Dutch town of Middelburg, a father-and-son team stumbled on an invention that would one day change history, but which they dismissed as a dud. By tinkering with glass lenses, Hans and Zacharias Janssen invented the microscope. Yet this was not by design.

The Janssens were leaders in a new and highly lucrative industry: making reading glasses. In their quest for the perfect pair of spectacles, a highly sought-after luxury item, the Janssens discovered that they could magnify objects by aligning two lenses in a cylindrical tube. They were astounded to find that combining two lenses magnifies much more than any one lens does on its own. But the view was blurry and the device too clunky for their clients, so they put their quirky discovery aside.

The Janssens’ magnifying machine lay mostly unnoticed for nearly a hundred years before someone put it to use. Antonie van Leeuwenhoek, a Dutch fabric merchant with a grade school education, first built a few homemade microscopes with a mundane goal: checking the quality of the expensive fabrics he had purchased from overseas. But Van Leeuwenhoek soon turned his attention to the world around him, pointing his microscope at well water, mold, bees, lice, yeast, blood cells, human breast milk (his wife’s) and sperm (his own). Everywhere he looked, his microscopes revealed a strange new realm of beings living in every nook and cranny of our world, unseen by the unaided eye.

Van Leeuwenhoek initially (and wisely) kept his discoveries secret for fear of ridicule. When he eventually revealed what he had seen, polite Dutch society disdained his strange proclivity for magnifying bodily substances; many simply refused to believe in the existence of “animalcules” altogether. Nevertheless, Van Leeuwenhoek penned hundreds of letters to the Royal Society in London, which — after initial suspicion and a visit from a skeptical scientific delegation — eventually accepted his findings. The humble merchant’s research papers were published alongside those of Sir Isaac Newton in the Royal Society’s learned journal.

The new world of microscopic observation soon fascinated scientists and philosophers alike. Microscopes proliferated as a form of visual prosthetics — artificial eyes that helped humanity see new things in new ways, laying the foundation for startling discoveries. The study of the microscopic world renewed interest in atomism — an early theory that the world was composed of fundamental, tiny particles — and would also eventually provide new methods for understanding contagion and disease.

As the historian of science Catherine Wilson argued in “The Invisible World,” the microscope catalyzed the Scientific Revolution. The simple device captured a complex idea: Science could reveal aspects of the natural world that were invisible to naked human perception. Spectacles merely helped us focus on the written word; the microscope enabled humans to perceive entirely new realms, extending the power of both sight and imagination.

“The microscope enabled humans to perceive entirely new realms, extending the power of both sight and imagination.”

At around the same time, another Dutch spectacle-maker was also tinkering with glass and realized that distant objects could be magnified by arranging convex and concave lenses. News quickly spread across Europe; in Italy, Galileo Galilei, then a university lecturer in mathematics, tweaked the design and turned his telescope to the stars.

Most of his contemporaries were using telescopes for military purposes — spying on enemies on land and at sea. Within a year, Galileo published descriptions of the sun, moon, stars and planets in “Sidereus Nuncius” (“Starry Messenger”), which became one of the most widely circulated scientific tracts of the age. Despite the Catholic Church’s persecution of Galileo, his discoveries sparked the abandonment of the idea that the Earth was at the center of the universe and laid the groundwork for profound challenges to the foundations of science, philosophy and politics.

In Europe, the science of optics had deep cultural roots: In classical Greek philosophy, sight was the noblest of senses, and philosophers from Plato to St. Augustine exulted in visual imagery. Even basic scientific terms in common use today reflect a preference for vision and the visible: The word “theory” derives from the Greek theoreo (“to see”), and the name later given for this era of reason and scientific triumph — the Age of Enlightenment — is a visual metaphor of light overcoming shadows.

For the Scientific Revolution, optics offered both instrumentation and insights, both machines and metaphors. As Claire Webb has argued, telescopes mediated parallel revolutions in science and philosophy from the 16th century to the present day, and they continue to reform our understanding of the universe and our sense of being in the world.

Marshall McLuhan argued years ago that the cultural, political and scientific revolutions that took place in the West were spurred not only by optics — the microscope and telescope — but also by an equally important technology: the printing press. Invented in the mid-15th century, the printing press with movable type enabled the rapid spread of print media and the standardized and automated cultural production of knowledge. As McLuhan noted, the printing press changed human behaviors and cultural habits, and also our perceptual patterns. Oral traditions receded; visual culture became ascendant. As the written word permeated our lifeworld, the importance of the spoken word — and the use of hearing as a method for exploring and understanding the world — dwindled.


Listening To The World

Senses that are not cultivated tend to atrophy. Ethnographers have long commented on the seeming deafness of Western peoples, raised in a culture obsessed with vision and the written word, whose sense of hearing is less developed than that of peoples in other cultures.

The Brazilian anthropologist Rafael José de Menezes Bastos wrote a few years ago about traveling with the Kamayurá, an Indigenous community in the Amazon rainforest. One evening, crossing Lake Ipavu in northeast Brazil by canoe, his friend Ekwa stopped rowing and went silent. When Bastos asked why they had stopped, Ekwa responded: “Can’t you hear the fish singing?” Bastos heard nothing. “Back in the village,” he wrote later, “I concluded that Ekwa had experienced some kind of hallucination, a fit of poetic inspiration or holy ecstasy, the whole event just a flight of imagination.” 

Years later, Bastos went to a bioacoustics workshop organized by scientists at the University of Santa Catarina, where he heard the sound of fish songs. Suddenly, Bastos realized, Ekwa “appeared more like a diligent ichthyologist than an inspired poet, a victim of hallucination or holy rapture.” Bastos’ ears had been closed, but Ekwa’s ears were open. Even Kamayurá children, Bastos wrote, were able to hear the sounds of planes and boats arriving well before he could.

In the Kamayurá language, the word anup (“to hear”) also evokes “to comprehend,” in a manner superior to the word tsak (“to see”), which evokes “to understand” only in a narrow analytic sense. Over-reliance on vision alone is associated with anti-social behavior, but hearing well is associated with holistic, integrated forms of perception and knowledge.

“By privileging vision over hearing, we stopped learning how to listen.”

Good listeners among the Kamayurá are often those with virtuosity in music and the verbal arts. A special accolade — maraka’ùp (“master of music”) — is given to those who, like Ekwa, are able to sense, remember, reproduce and relate the sounds of other beings. This ability arises from both innate talent and intensive training over a lifetime.

The Kamayurá’s interpretive abilities equal, and in some cases surpass, Western scientific understanding: They are able to draw precise and accurate inferences about which species or objects make certain noises, as well as where and why. This is practical and relational knowledge because the Kamayurá are in constant dialogue with the nonhuman life around them. They move through the forest listening and conversing with animals, plants and spirits, telling them that they mean no harm and asking in return to remain unharmed. Bastos argues that this acoustic-musical “world listening” — which combines the precision of Western science with practices of spiritual attunement — is a form of sacred ecology.

The ancestors of modern Westerners may once have possessed this ability, but we ceased to cultivate it many generations ago. By privileging vision over hearing, we stopped learning how to listen. But in the past decade, new generations of scientists have begun exploring the neglected world of sound — from the cosmos to individual cells — leading to some remarkable discoveries that, like the microscopists in centuries past, reveal hidden and unsuspected worlds.


Sounds Of The Cosmos

For millennia, the movements of the celestial bodies in the heavens above, from shining planets to the faintest of stars, have provided practical guidance to navigators and spiritual direction to oracles. But some signals created by the stars are invisible to the naked human eye.

Astrophysicists have developed techniques to convert data from light signals into digital audio, using pitch, duration and other properties of sound. Wanda Díaz-Merced, a blind astronomer, sonifies plasma patterns in the upper reaches of the Earth’s atmosphere and invented new methods to detect subtle signals in the presence of visual noise.

But the conversion of data into acoustic signals — called sonification — is now being used by scientists who are not vision-impaired because listening to the stars helps detect patterns that may be missed by visual representations. Sonification has led to several surprising discoveries, including the presence of lightning on Saturn and the ubiquity of micrometeoroids smashing into spacecraft.
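To make the technique concrete, here is a minimal sonification sketch in Python. It is not any observatory's actual pipeline: the frequency range, note length and test signal are all illustrative choices, and real tools map far more properties of the data onto pitch, duration and timbre.

```python
# A minimal data sonification sketch: map each data point to a pitch
# and render the sequence as audio. Illustrative only.
import wave
import numpy as np

RATE = 44100          # audio sample rate (Hz)
NOTE_SECONDS = 0.25   # duration of each data point's tone

def sonify(values, low_hz=220.0, high_hz=880.0):
    """Linearly map data values onto a frequency range and return
    one sine tone per value, concatenated into a single signal."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-12)
    freqs = low_hz + norm * (high_hz - low_hz)
    t = np.linspace(0, NOTE_SECONDS, int(RATE * NOTE_SECONDS), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# Example: a periodic signal buried in noise, the kind of pattern a
# listener can often pick out by ear more easily than by eye.
data = np.sin(np.linspace(0, 6 * np.pi, 60)) + 0.3 * np.random.randn(60)
samples = (sonify(data) * 32767).astype(np.int16)

with wave.open("sonified.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(samples.tobytes())
```

Even this crude mapping illustrates the principle: the underlying sine wave becomes a rising-and-falling melody that the ear can track through the noise.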

“Listening to the stars helps detect patterns that may be missed by visual representations.”

Even the origins of the universe can be sonified. Shortly after the Big Bang, giant waves traveled through the dense, hot matter that comprised the early universe. These waves compressed (and thus heated) certain regions while stretching (and thus cooling) others. The resulting variations can still be detected today as temperature fluctuations in the cosmic microwave background radiation — like echoes of the original shockwaves of the Big Bang.

The cosmic microwave background radiation (also called “relic radiation”) occurs at a frequency that we cannot detect with our eyes. (Humans can typically perceive light from approximately 380 to 750 nanometers in wavelength, but the electromagnetic spectrum extends well beyond this range. Cosmic background radiation is everywhere around us, but invisible to the naked eye.) When John Cramer, an astrophysicist at the University of Washington, converted these signals into sound, they resounded with a loud hum that he described as “rather like a large jet plane flying 100 feet over your house.”

In addition to scrutinizing graphs and charts, scientists are once again listening to the music of the stars. The ancients would be pleased.


Hearing The Tree Of Life

The science of bioacoustics similarly opens up a novel window into worlds of sound unheard by human ears. Across our planet, sound is a primordial form of conveying complex ecological information; a vast range of species — even those without ears — are remarkably sensitive to sound.

In the past two decades, scientists and amateurs have begun using digital recorders to record the sounds of life from the Arctic to the Amazon. Much of what they record is inaudible to humans, above or below our hearing range. The science of digital bioacoustics can thus be compared to the early days of optics: Digital recorders, like microscopes or telescopes, extend the human sense of hearing beyond the perceptual limits of our bodies.

The painstaking work of bio-acousticians has revealed that many more species make noise than we previously realized. Moreover, many of these vocally active species are capable of conveying complex information through acoustic communication.

A good example is elephant infrasound. Elephants emit powerful, very low sound waves (well below human hearing range) that travel long distances through both forest and savannah and help herds and families coordinate behavior across vast expanses of terrain. Even more surprising are the specific signals and sounds that elephants convey for certain situations, which scientists have compiled into a dictionary with thousands of sounds. African elephants, for example, have a specific signal for honeybees. They are keen listeners too, able to distinguish between humans from tribes that hunt them and those that don’t merely by listening to their voices and discerning their dialects.

“Sound is a primordial form of conveying complex ecological information; a vast range of species — even those without ears — are remarkably sensitive to sound.”

Even some voiceless creatures, it turns out, are exquisitely attuned to sound. Steve Simpson, a biologist at the University of Exeter, has demonstrated that coral larvae are able to differentiate between the sounds of healthy and degraded reefs (they prefer the former), and even between a random reef and their home reef (they prefer the latter). Yossi Yovel, a neuro-ecologist at the University of Tel Aviv, has found that flowers will respond to the sound of buzzing bees by flooding themselves with more and sweeter nectar within minutes. Even animals in the soil make noises: an underground Twitter.

The world is alive with nature’s sounds, most of which are undetectable by the naked human ear. But by combining digital bioacoustics with artificial intelligence, scientists have begun unveiling the extent of interspecies communication across the Tree of Life. Yovel’s team has trained an AI algorithm to detect minute changes in the high ultrasound emitted by plants; in one experiment with tobacco plants, the algorithm was able to detect whether the plants were dehydrated, healthy or wounded simply by listening. These high ultrasonic frequencies are well above human hearing range but audible to insects.
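To sketch the general shape of such a pipeline (not Yovel's actual system), one can reduce each recorded clip to a vector of spectral features and train an off-the-shelf classifier to separate the three conditions. Everything below, features and data alike, is a synthetic stand-in for illustration.

```python
# Sketch of acoustic condition classification: represent each ultrasonic
# clip as a feature vector, then train a standard classifier. The
# features and data here are synthetic placeholders, not real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
LABELS = ["healthy", "dehydrated", "wounded"]

def fake_features(label, n):
    """Stand-in for spectral features (e.g. click rate, peak frequency,
    bandwidth) that would be extracted from real ultrasonic clips."""
    center = {"healthy": 0.0, "dehydrated": 1.0, "wounded": 2.0}[label]
    return rng.normal(loc=center, scale=0.7, size=(n, 8))

X = np.vstack([fake_features(label, 200) for label in LABELS])
y = np.repeat(LABELS, 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```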

Driven by the realization that nature is full of interspecies communication, interdisciplinary research teams — computer scientists, biologists, linguists — are now attempting to use AI and digital bioacoustics to develop tools for translation. This raises profound ethical issues. When do we have the right to gather nonhuman acoustic data? Who should have access to that data? Hunters? Fishers? And what are the risks or potential harms, given the biases inevitably embedded in AI algorithms?

Equally, this raises profound philosophical questions. Does the existence of complex communication in other species challenge the supposed uniqueness of language as a solely human capacity? And do these discoveries create new possibilities for enabling a political voice for nonhumans, who might influence environmental governance or, more broadly, human conduct?


Sonifying Cells

If sound is universal and primordial in the cosmos, life might somehow be attuned to acoustic communication, and living phenomena — from diseases to brain activity to symbiotic relationships — could be translated and understood through acoustics.

Scientists are sonifying data in innovative ways — for example, biophysicists have used sonification as a teaching tool about protein folding. In medicine, neuroscientists are using sound to hone diagnoses of Alzheimer’s, and sonification has been used to help doctors detect subtle yet important variations in heart rate and electrocardiogram signals that are barely visible on screen. Sonification techniques have also been used by doctors searching brain waves for irregularities, revealing signals of cognitive impairment and early signs of epileptic seizures in children that might otherwise go unnoticed.

The benefits go beyond the ability to detect small signs that might otherwise be overlooked. Our field of vision is limited to approximately 180 degrees, and the eyes can close. But we can hear sounds from 360 degrees, and the ears are always listening. Visual data requires the attentive gaze of a healthcare provider. But audio data can be projected, calling for rapid attention. Sonification is both subtle and insistently immediate.

In addition to its diagnostic utility, sound is important for our health in another way: It is regenerative. For example, music benefits older adults with dementia, a well-documented phenomenon that illustrates how our neurons respond to songs and rhythms long after memory and cognition fade. And the mammalian responsiveness to sound goes well beyond mere relaxation. A team of medical scientists based in St. Louis recently completed a study in which they targeted low-frequency sound waves at specific parts of the brains of mice, inducing a hibernation-like state. They recreated the same effect in rats — animals that don’t normally hibernate.

Since many mammals naturally hibernate, and since most — including humans — share similar brain structures, does this mean that humans might possess a latent capacity for hibernation? Perhaps one day, sound could be a useful tool to keep space voyagers in a state of suspended animation as they travel between worlds.

“Sonic data and technologies expand our perceptual and conceptual horizons.”

The Sonics Revolution

Centuries ago, before the invention of the microscope, no one had any idea that there was a microbial world full of strange life. Before the invention of the telescope, humanity was able to perceive only the smallest glimpses of celestial bodies. No one could foresee the discovery of DNA and the ability to manipulate the code of life, nor the development of space travel and the ability to visualize distant regions of deep space. Optics decentered humanity within the solar system, within the cosmos.

Sonics is the optics of the 21st century. Like the microscope, sonic technologies function as a scientific prosthetic — as they extend our sense of hearing, they expand our perceptual and conceptual horizons. We are encountering new soundscapes throughout the cosmos and across the Tree of Life, learning about the universality of acoustic meaning-making and the primordial sensitivity of living organisms to sound. Sonics decenters humanity within the Tree of Life, while connecting us more closely to the cosmos.

In the 16th and 17th centuries, the early days of optics, the term “revolution” was used to refer to celestial bodies revolving, implying cyclical movements rather than linear progress, a restoration of alignment rather than a break with the past. The notion of revolution as a violent rupture and rejection of the preceding scientific, political and economic order came to prominence only in the 18th century.

Sonics is a revolution in the newer sense, but it might also be a revolution in the older sense too, a circling back to something we had once known but have since largely ignored or forgotten: the primordial importance of sound to life, to the universe, to our bodies. The world is sounding all around us. Choosing to listen, and developing new data sonification techniques enabled by advanced technologies, will enable us to sense the cosmos, the Earth and our own selves anew.

Listening To The Creatures Of The World
On a chilly morning in January 1952, Alan Baldridge witnessed a murder. Sailing off the coast of California in pursuit of a pod of migrating whales, he heard screams in the distance. The pod abruptly vanished. Scanning the horizon, he spotted a large gray whale “spy-hopping,” swimming vertically and raising its head above the surface. Baldridge, a marine biologist at Stanford, decided to investigate; drawing closer, he saw seven orcas singing hunting cries, circling a small gray whale calf. As its mother watched nearby, the orcas began devouring the lips, tongue and throat of the dead baby. 

Baldridge’s story inspired a controversial research agenda. Soon after his encounter, the Navy began using orca sounds in an attempt to control cetaceans. Their hypothesis: Whales could decode information from sound, a contrarian claim in an era when most researchers believed that animal noise was devoid of meaning. One of the Navy’s first experiments involved sailing a catamaran off the coast of San Diego, playing recorded orca screams to gray whales swimming south on their annual migration. The results were “spectacular”: The whales whirled around and fled north or hid deep in nearby kelp beds, slowly popping their heads above the surface to search for predators. When they finally resumed swimming south, the whales were in stealth mode: sneaking past, with little of their bodies showing above the surface, their breathing scarcely audible.

The Navy’s next experiment was in Alaska, where a local fish and game official named John Vania was at war with beluga whales along the Kvichak River, home to the largest red salmon run in the world. While bears and eagles feasted on the shore, belugas would surf the mighty tide up the muddy brown estuary, feeding on the endless conveyor belt of salmon swimming toward the sea. After the fishermen complained that belugas were eating too many fish, Vania tried chasing the whales with motorboats, blaring rock music, even throwing small charges of explosives — all in vain. 

But when he pumped the Navy’s orca recordings through jerry-rigged underwater speakers, every single beluga immediately turned and fled. On the Alaska coast, some tides are strong enough to fling large boulders into the forest, but the belugas would battle even the strongest tide surge in order to escape. And although they responded to hunting screams, the belugas appeared most frightened of orca “clicks,” as if a warning was encoded in the staccato sounds.

At the time, industrial whaling and dolphin hunting were still permitted. Whalers killed tens of thousands of bowhead, sperm and right whales annually, their oil and other parts rendered into lubricant, perfume and lipstick. Canadian government officials mounted a .50-caliber Browning machine gun on a promontory north of Vancouver with the sole aim of slaughtering orcas, which were viewed as pests by local fishermen — although they never fired the gun. In pursuit of tuna, fishermen killed an estimated 6 million dolphins in the Eastern Pacific in a few short decades following the Second World War. 

“Nonhumans were once believed to be largely deaf and mute, but now we are realizing that in nature, silence is an illusion.”

But a ragtag organization named Greenpeace was beginning to send protestors on small inflatable boats into the northern Pacific and Alaskan waters to protect whales from bullets and harpoons. Their efforts, caught on camera, inspired public outrage. A global movement to save the whales led to a commercial moratorium on industrial whaling, and to “dolphin safe” fishing legislation. 

Now no longer permitted to kill cetaceans, fishers began using acoustic deterrents; the devices were often mandated by national governments on fishing boats, fish farms and even fishing nets. A truce of sorts was declared between cetaceans and humanity. 

The apparent benefits were short-lived. Acoustic deterrence creates damaging side effects, including hearing impairment. In the underwater world, where sound travels faster than it does through air, cetaceans use echolocation (also called biosonar) to “see” the world through sound. Human noise pollution renders cetaceans and other marine organisms near deaf and blind, unable to echolocate, communicate or find prey. 

When the din of motors and seismic blasts is added to acoustic deterrence devices, cetaceans can find themselves caught in a blinding acoustic fog, unable to detect approaching ships. Marine traffic accidents are now a primary cause of whale deaths. 

Although no longer using bullets and bombs, humans are still killing cetaceans by the tens of thousands every year. Could digital technologies provide a solution?

Digital Whales 

The Santa Barbara Channel — through which the world’s largest whales, on one of the world’s longest migrations, move past some of the busiest ports in the world — is a global nexus for marine roadkill. No better place, then, for developing a Waze for whales.

Whale Safe is the creation of scientists at UC Santa Barbara and is funded by Marc Benioff, who was apparently inspired to create Salesforce while swimming with dolphins off the coast of Hawaii. An AI-powered monitoring system, Whale Safe creates virtual whale lanes to enable safe passage for cetaceans and prevent ship strikes in near real-time.

The system incorporates five digital technologies: an underwater acoustic monitoring system that detects whale calls; AI algorithms that detect and identify blue, humpback or fin whales in near real-time; oceanographic modeling combining satellite and digital buoy data with ocean circulation models and animal tags; whale sighting data reported by citizen scientists, mariners and whale-watchers using mobile apps; and locational data from ships’ Automatic Identification System transponders (AIS, a mandatory global tracking system that enables precise monitoring of ships’ locations at all times).

The output: a whale presence rating overlaid on a map, similar to a weather report, which is relayed in near real-time to ship captains, who can decide to slow down or leave the area altogether. The Whale Safe team also tracks ships to see if they are complying with slow-speed zones and publishes public report cards tracking compliance, naming and shaming ships that fail to comply. 
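The fusion step can be pictured as a weighted blend of evidence streams. The sketch below is a toy illustration only: Whale Safe's actual scoring method is not reproduced here, and every weight and threshold is invented.

```python
# Toy illustration of fusing evidence streams into a whale presence
# rating. The weights and thresholds are invented for illustration and
# are not Whale Safe's actual scoring method.
from dataclasses import dataclass

@dataclass
class Evidence:
    acoustic_detection: float  # AI call-classifier confidence, 0-1
    habitat_model: float       # oceanographic model probability, 0-1
    recent_sightings: int      # confirmed sightings in the last 24 hours

def presence_rating(e: Evidence) -> str:
    # Weighted blend of the three streams; sightings saturate at three.
    score = (0.4 * e.acoustic_detection
             + 0.4 * e.habitat_model
             + 0.2 * min(e.recent_sightings, 3) / 3)
    if score >= 0.6:
        return "HIGH: request slowdown in the whale lane"
    if score >= 0.3:
        return "MEDIUM: advise caution"
    return "LOW"

print(presence_rating(Evidence(0.9, 0.7, 2)))  # -> HIGH: request slowdown ...
```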

“Simply by singing, a whale can turn aside a container ship: a digitally mediated decentering of the human.”

Scientists are also developing infrared thermal imaging cameras to mount on the bows of ships to detect whales — and whale strikes. Killing cetaceans used to happen out of sight, but with dashcams mounted on ships, whale sightings could be automatically reported and whale deaths automatically recorded. Initial studies confirm a reduction in whale strikes by at least half.

Similar digital whale protection systems have also been implemented on the east coast of North America, where aquatic drones now roam the Atlantic, searching for endangered right whales in the Gulf of St. Lawrence, through which over 100 million metric tons of cargo move each year. The whales’ location is pinpointed using digital bioacoustics, their trajectory forecast using AI algorithms trained on datasets of whale movements, and the information is conveyed to ships’ captains and fishing boats, who face stiff fines of several hundred thousand dollars if they fail to slow down and leave the area.

Digitally enabled whale lanes now trump shipping lanes in some places: Only a few decades ago, North Atlantic right whales were hunted to the brink of extinction, but today, the fewer than 400 that remain are empowered to control the movements of thousands of ships in a region that is home to 45 million people.

Scientists are now advocating for similar systems to be created around the world. Some are proposing an ambitious agenda: a digitally enabled global network of Marine Protected Areas (MPAs) whose boundaries change position as endangered species migrate through the oceans, and which literally “follow the fish.” Endangered tuna off the coast of Australia and turtles off the coast of Hawaii are now being protected by an array of digital tracking devices — sensors, acoustic drones, satellites — which feed data to machine learning algorithms in order to precisely forecast the location of endangered species. Mobile protected areas provide a flexible, responsive web of protection — a digital cloak of inviolability in a changing sea. 

In an era of rapid global warming-induced changes in the world’s oceans, in which many marine species are becoming climate refugees, policymakers are now debating how we might apply these systems at the planetary scale. This near real-time, mobile and potentially spatially ubiquitous form of ocean governance relies on digital hardware that collects data from various sources (like nano-satellites, aerial and underwater drones, environmental sensor networks, digital bioacoustics and marine tags), combined with machine learning algorithms, computer vision and ecological informatics.

“The world resonates with nature’s sounds, which human ears cannot detect. But our computers can.”

But its most novel aspect is its agility: responsive, near real-time adaptation to environmental variability, species mobility and disturbance dynamics. It’s a fitting governance model for an increasingly unpredictable world of environmental hazards and extreme events, and a timely response to the new global commitment, reached at the U.N.’s global biodiversity conference in late 2022, to protect at least 30% of Earth’s land and water by 2030. 

These digitally enabled ocean conservation schemes benefit humans as well as the whales. When ships slow down, they not only reduce whale strikes but also release fewer pollutants and emit less carbon dioxide. Moreover, whales’ nutrient-rich waste acts like a fertilizer for phytoplankton, which sequester enormous amounts of carbon. IMF economists have estimated the value of the ecosystem services provided by each individual whale (of the largest species) at over $2 million and called for a new global program of economic incentives to return whale populations to pre-industrial whaling levels as a “nature-based solution” to climate change.

Perhaps the most novel aspect of these digital ocean governance schemes is their inclusion of nonhumans into decision-making. Simply by singing, a whale can turn aside a container ship: a digitally mediated decentering of the human. 

Marine navigation becomes a matter of interspecies cooperation, as whales influence and constrain human action by controlling the decisions and movements of ship captains and fishers. Nonhumans, enabled by digital computation, are being enrolled in ocean governance, in stark contrast to the way that humans treated these species only a few decades ago, a grounded example of what Dipesh Chakrabarty calls the extension of “ideas of politics and justice to the nonhuman”: multispecies environmental regulation.

Wiring Gaia

Whale Safe illustrates the remarkable change underway in planetary environmental governance. Confronted with accelerating biodiversity loss, scientists and conservationists are adapting digital tools to achieve conservation goals. Digital environmental monitoring and decision-making platforms are operational on every continent, in every major biome on Earth. Repurposed cellphones, hidden high in the tree canopy in tropical forests, are surveilling illegal loggers. Anti-terrorism software is being used to help predict and prevent poaching. Artificial intelligence algorithms use facial recognition to identify individual animals — from zebras to whale sharks — helping to track members of endangered species.

At some point in the past 24 hours, a flock of nano-satellites called Doves flew over your head: the first system able to image the entire surface of the Earth every day. Its developers — a team of ex-NASA engineers — are building a search engine for the entire surface of the planet that will operate in near-real time. One day soon, you will be able to search the surface of the Earth just like you search the web for images or text. Satellites are also being used to identify greenhouse gas emissions like methane; NGOs are publishing “name and shame” lists of the world’s biggest climate polluters. 

These technologies are akin to those used in “smart cities” but articulated across a much wider range of ecosystems and land use types: a “Digital Earth,” monitored by systems of satellites and sensors that are increasingly instrumented, interconnected and intelligent. Digital Earth networks undertake a form of nested planetary computation, incorporating not only climate but also living beings, both biotic and abiotic elements of Gaia.

Digital Earth technologies have several implications for environmental governance. First, environmental data is becoming super-abundant rather than scarce. Second, environmental data is becoming ubiquitous: automated sensors, satellites and drones collect data continuously, even in remote places that humans find difficult to access, sensing and managing the environment everywhere, all the time. This creates time-space compression (governance is temporally and spatially ubiquitous) and time-space agility (governance is spatially and temporally dynamic). 

“Digital Earth networks undertake a form of nested planetary computation, incorporating not only climate but also living beings, both biotic and abiotic elements of Gaia.”

Third and most powerful of all: Rather than responding to environmental crises after they occur, digital technologies enable near-real time responses and may even predict hazards and catastrophes before they happen. Environmental governance can thus be preventive rather than reactive, and environmental criminals will find it harder to hide. Crowdsourcing and citizen science can be used to involve the public in conservation efforts; sites such as Zooniverse herald a resurgence of public engagement in science akin to that of the Victorian era. Although the incursions of Big Tech into this space are cause for concern, the engagement of thousands of not-for-profit conservation groups in this agenda raises the likelihood that digital environmental governance might evolve to be more inclusive, enabling new patterns of subsidiarity and solidarity.

Digital Earth also has the potential to be a multispecies affair, enrolling what Achille Mbembe refers to as “le vivant” (the living, enlivened, lively world) into planetary governance. Digital technologies may allow nonhumans to participate as active subjects in environmental management. Planetary computation, in other words, is not merely a set of tools for monitoring and manipulating the planet, but also a potential means of extending political voice to nonhumans, akin to what Isabelle Stengers terms “cosmopolitics.”

Planetary computation and planetary governance are thus not merely extensions of the old engineering mantra of “command and control.” Instead, they offer us a new paradigm: “communicate and cooperate,” which extends a form of voice to nonhumans, who become active subjects co-participating in environmental regulation, rather than passive objects. The environmental becomes inescapably political, but the political is not solely human. Digital Earth technologies offer the possibility of creating what Bruno Latour once called the “Parliament of Things”: a digitally enabled Parliament of Earthlings.

Can The Planet Speak?

Humans might choose to listen to nonhumans, but do they have anything meaningful to say? Here, too, Digital Earth technologies offer insights. In the past two decades, digital bio- and eco-acoustic networks have been deployed from the Arctic to the Amazon, recording and decoding nonhuman sounds — many of which occur beyond human hearing range in the high infrasound or low ultrasound. The proliferation of these digital listening systems reveals that much more meaningful information is encoded in acoustic communication within and between species than humans suspected.  

Many species that scientists once thought to be mute or relatively vocally inactive actually make sound. To give just one example, researchers have recorded over 50 fish and turtle species — once thought to be voiceless — making hundreds of different sounds, revealing complex coordination behaviors, evidence of parental care and a remarkable ability of embryos of at least one turtle species to time the moment of their collective birth through vocal communication. Peacocks emit loud, low infrasound during their mating dances. Elephants use similar frequencies to communicate across long distances, seemingly telepathically.

Nearly every species to which scientists have listened makes some form of sound. Nonhumans were once believed to be largely deaf and mute, but now we are realizing that in nature, silence is an illusion. 

What else are we learning through digitally mediated listening? There is much more ecologically complex information contained in nonhuman vocalizations than we realized. Elephants have specific signals for different threats such as honeybees and humans, and their vocalizations even distinguish between humans from different tribes; researchers are now building an elephant dictionary with thousands of sounds. Honeybees also have hundreds of distinct sounds; although we have only deciphered a few, we know that there are specific sounds in honeybee language — which is spatial and vibrational as well as acoustic — with specific meanings, such as a “stop” signal and a begging signal. Queens even have their own distinct vocabulary.

Joining this biophony is a resounding geophonic chorus from the planet itself: the low frequencies of volcanoes and hurricanes, calving glaciers and earthquakes, ringing the atmosphere like a quiet bell. The world resonates with nature’s sounds, which human ears cannot detect. But our computers can.

“Digital technologies enable near-real time responses and may even predict hazards and catastrophes before they happen.”

Digital listening is also revealing that interspecies communication is much more widespread than scientists previously understood. Moths can detect and even jam bat sonar. When buzzing bees approach flowers, the flowers flood themselves with nectar within minutes. Plants can detect the sound of specific insect predators and distinguish threatening from non-threatening insects with astonishing precision. Corn, tomato and tobacco plants emit high-pitched ultrasound that we can’t hear, but insects likely can; in one experiment, researchers trained an AI algorithm on the distinct sounds emitted by healthy, dehydrated and wounded plants — and the algorithm was soon able to diagnose the plants’ condition, simply by listening. Although these ultrasounds are beyond human hearing range, we know that some insects can hear them. Could other creatures be listening to the plants and detecting their state of health?

Sound is only one modality of nonhuman communication; other species use many mechanisms — from the gestural to the biochemical to the electrostatic — to communicate information and even emotions. Digital technologies can detect these multimodal forms of information, whether from forests or honeybees. As humans deploy digital technologies to enhance our ability to monitor and decode this information, we create the potential for a new type of environmental governance, and a new type of multispecies politics. 

Complex communication is ubiquitous in nature, and thus many nonhumans could be said to possess a form of political “voice.” Modern humans have been hard of hearing, yet Digital Earth technologies offer new ways of listening to nonhuman preferences. This is by no means novel (Indigenous traditions offer powerful ways of nonhuman listening) nor neutral (digital technologies can be misused and abused). But with caveats and safeguards, digital technologies offer humanity a powerful new window into the nonhuman world. 

Think of planetary computation as one means of eavesdropping on multispecies conversations, in which nonhumans can use digital technologies to convey information, influence human action and thus express a grounded form of voice. How might nonhuman preferences be incorporated into our decision-making frameworks, into new forms of Earthly politics? To begin formulating an answer, we’ll have to listen more closely to our nonhuman kin.

How To Speak Honeybee
The waggle dance of the western honeybee (Apis mellifera), in which bees waggle their abdomen from side to side while repeatedly walking in an intricate figure-of-eight pattern, has been observed since antiquity, but the person who finally unlocked the secret of its meaning was an iconoclastic Austrian researcher named Karl von Frisch. The breakthrough initially earned Frisch a great deal of scorn from other mid-20th-century scientists, but also eventually won him the Nobel Prize.

Young Karl was known to skip school to spend time with a menagerie of over 100 animals, only nine of which were mammals. His most beloved companion was a small Brazilian parakeet named Tschocki, who was constantly by Frisch’s side, sitting on his lap or on his shoulder and even sleeping next to his bed. Together with Tschocki, Frisch spent hours out in nature, simply watching. As he later reflected: “I discovered that miraculous worlds may reveal themselves to a patient observer where the casual passerby sees nothing at all.”

Frisch began studying bees in 1912. He had a hunch that ran counter to prevailing wisdom: The bees’ waggle dance was a form of language. In pursuing this hypothesis, he was contesting two core assumptions of Western science and philosophy: that only humans have complex forms of language, and that insects were incapable of complex communication given their tiny brains.

Human verbal language is largely based on the noises we make with our vocal cords and mouths, the expressions we make with our faces, and the way we hold and move our bodies. In contrast, bee language is mostly spatial and vibrational. Its syntax is based on something very different from human language: the type, frequency, angle and amplitude of vibrations made by the bees’ bodies, including their abdomens and wings, as they move through space. 

By buzzing and quivering, leaning and turning, bees communicate remarkably accurate information. Once a scout bee has found a good food source, she returns to the hive to inform her sisters. During the waggle dance, the bee moves in a figure eight pattern: a straight line while beating wings, and then a circular return without wing beating. We know now that the resulting pattern, which can be observed visually, encodes the direction to the food source relative to the sun’s position in the sky; the length of the dance is related to the distance the bees must travel. 

“Miraculous worlds may reveal themselves to a patient observer where the casual passerby sees nothing at all.”
— Karl von Frisch

Frisch decided on an ambitious experimental design: tracking thousands of individual bees in order to analyze the correlation between their dances and specific food sources. At the time, this seemed impossible, given that hive populations average somewhere between 10,000 and 40,000 bees. But Frisch, through painstaking attention to detail and near endless amounts of patience, was able to prove his hypothesis: As a lead bee dancer waggles, she orients her body relative to gravity and the position of the sun. By making subtle variations in the length, speed and intensity of her dance, she is able to give precise instructions about the direction, distance and quality of the nectar source. In so doing, she teaches other bees in the hive, who use the information they have learned from the waggle dance to fly to a nectar source they have never before visited.

Frisch’s research progressively proved the astonishing accuracy of the bees’ communication system. In one of his most famous experiments, he trained his bees to navigate to a hidden food source several miles away, across a lake and around a mountain. This was an astounding feat, given that he had shown the site only once, to a single bee. In another experiment, he demonstrated that different hives have slightly different dancing patterns. Bees appeared to learn these patterns from their hive mates. In essence, honeybee dance language has dialects, just like human communities.

Frisch himself was so amazed by his findings that he initially kept them secret. Contradicting prevailing scientific views, his findings demonstrated that honeybees possessed learning, memory and the ability to share information through symbolic communication, a form of abstract language. As he wrote to a confidante in 1946: “If you now think I’m crazy, you’d be wrong. But I could certainly understand it.” 

Frisch was right to worry. When he finally went public, many scientists dismissed his research and argued that insects with such tiny brains were incapable of complex communication. The American biologist Adrian Wenner challenged Frisch’s theory, arguing that bees locate food solely by odor, a claim that was subsequently proved wrong, although odors are important signals for bees. Eventually, Frisch’s results were definitively and independently validated, and he was awarded the Nobel Prize in 1973. The prize committee concluded its nomination statement by referring to the “shameless vanity” of Homo sapiens that refused to recognize bees’ extraordinary capacities.

Frisch referred to honeybee dances as a “magic well”: The more he studied them, the more complex they turned out to be. Every species, Frisch argued, has its own magic well. Humans have verbal language. Whales have echolocation, which endows them with the ability to visualize their entire environment via sound. Honeybees have spatial, embodied language: We now recognize some of the subtle differences in their body movements and vibrations, which include waggling, knocking, stridulating, stroking, jerking, grasping, piping, trembling and antennation, to name just a few. 

The bees’ dance is still considered by many scientists to be the most complex symbolic system that humans have decoded to date in the animal world. Although many scientists initially asserted that the waggle dance should be referred to merely as communication, Frisch insisted on using the term language: Through a system of signs, bees exchange information, coordinate complex behavior and form social groupings.


Honeybee researchers following in Frisch’s footsteps have probed the magic well even more deeply. Bees make many other types of signals through nuanced movements, communicating through sounds and vibrations largely either inaudible to or indecipherable by humans. Moreover, by using computer software that automates the decoding of bee vibrations and sounds — vibroacoustics, as the field is known — researchers are now using algorithms to analyze bee signals. Their discoveries are as incredible as Frisch’s first breakthroughs. 

Although it has been known for centuries that queens have their own vocabulary (including tooting and quacking sounds), researchers have found new worker bee signals, such as a hush (or stop) signal that can be tuned to specific types of threats and a whooping danger signal that can be elicited by a gentle knocking of the hive. Worker bees also make piping, begging and shaking signals that direct collective and individual behavior. 

“Honeybees exhibit sophisticated forms of democratic decision-making.”

Bees have excellent eyesight and are capable (after minimal training) of distinguishing between Monet and Picasso paintings. They can differentiate not only between flowers and landscapes but even human faces, demonstrating a remarkable capacity for processing complex visual information. In two breakthrough experiments in 2016 and 2017, researchers demonstrated that bees are capable of social learning and cultural transmission (a first in Western science for invertebrates): When trained to pull a string to receive a sugar reward (a novel task), bees taught the new skill to their hive mates, demonstrating that bees can learn from observing other bees, and that these learned skills can be shared and become part of the culture of the colony. 

A dark side of bee social life has also been uncovered: While honeybees are generally collaborative, accurate and efficient, they are also capable of error, robbery, cheating and social parasitism. They might even have emotions, exhibiting both pessimism and dopamine-induced mood swings that are analogous to human highs and lows. 

As one researcher cautiously noted in a landmark study of a newly identified bee signal: “Communication in honeybees turns out to be vastly more sophisticated than originally imagined. Research is revealing … a collective intelligence that … makes one pause to ask whether these creatures may be more than just simple, reflexive, unthinking automata.”


Perhaps the most remarkable research is that of Cornell bee scientist Thomas Seeley, who has demonstrated that honeybee language extends beyond foraging behavior. For several decades, Seeley focused his research on bee swarming. Swarming is the way honeybee colonies naturally reproduce; a single colony splits into two or more distinct colonies, and one group flies off to find a new home. How, Seeley wondered, did the colony decide on their preferred site? 

When Seeley first decided to focus on swarming, scientists knew very little about the phenomenon. The fastest bees in a swarm fly over 20 miles per hour, usually moving in a straight line toward their target regardless of the fields, water bodies, buildings, hills or forests in their way. There is no way a human can keep up with the swarm, much less keep track of several thousand individual bees to figure out which ones, if any, are guiding the rest. Seeley was interested in how the bees decided which home to select — a high-stakes decision, given that splitting the hive could cause the queen to be lost, and choosing an inappropriate site could lead to the death of the hive.

In the mid-2000s, Seeley convinced a computer engineer who was intrigued by the similarities between bee swarms and driverless cars to install a high-powered video camera at Seeley’s research site on Appledore Island, off the coast of Maine. Their goal was to create an algorithm that could automatically identify and track some 10,000 speeding bees at once. 

“Communication with bees is an ancient human skill.”

After two painstaking years, the algorithm finally worked: Powered by high-speed digital cameras and novel techniques in computer vision, it could identify each individual bee from the video footage and analyze its unique frenzied flight pattern. The algorithm revealed patterns undetectable to the human eye; decoding the diversity, density and interactions in these patterns led Seeley to label the swarm as a “cognitive entity.” 

Perhaps Seeley’s most startling finding was that, in choosing a new home, honeybees exhibit sophisticated forms of democratic decision-making, including collective fact-finding, vigorous debate, consensus building, quorum and a complex stop signal enabling cross-inhibition, which prevents an impasse being reached. A bee swarm, in other words, is a remarkably effective democratic decision-making body in motion, which bears resemblance to some processes in the human brain and human society. Seeley went so far as to claim that the collective interactions of individual bees were strikingly similar to the interactions between our individual neurons when collectively arriving at a decision. 
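A bare-bones simulation conveys the flavor of this quorum-based process. It omits the stop signal and much else, and the site qualities, recruitment probabilities and quorum threshold below are invented for illustration.

```python
# Bare-bones model of swarm house-hunting in the spirit of Seeley's
# findings: scouts dance for candidate sites with vigor proportional to
# site quality, recruiting the uncommitted, until one site reaches a
# quorum. All parameters are invented; the real stop signal is omitted.
import random

SITES = {"hollow_oak": 0.9, "wall_cavity": 0.6, "old_box": 0.4}  # quality, 0-1
QUORUM = 30
scouts = {site: 1 for site in SITES}  # one scout has found each site
uncommitted = 97

while max(scouts.values()) < QUORUM and uncommitted > 0:
    for site, quality in SITES.items():
        # Each dancing scout recruits a follower with probability
        # proportional to the advertised site quality.
        recruits = min(uncommitted,
                       sum(random.random() < quality * 0.5
                           for _ in range(scouts[site])))
        scouts[site] += recruits
        uncommitted -= recruits

winner = max(scouts, key=scouts.get)
print(f"the swarm commits to {winner}: {scouts}")  # usually hollow_oak
```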

Seeley’s findings bolstered the case of those who argued in favor of referring to honeybee communication as language. And by demonstrating that the “hive mind” was more than mere metaphor, Seeley also stimulated advances in swarm intelligence in robotics and engineering. Seeley’s research, predicated on digital technology (computer vision and machine learning), eventually came full circle: His findings inspired computer scientists at Georgia Tech to create the Honey Bee algorithm, which is now an integral part of cloud computing. In internet hosting centers (analogous to hives), it optimizes the allocation of servers (foraging bees) among jobs (nectar sources), thereby helping to deal with sudden spikes in demand and preventing long queues. In 2016, Seeley and his collaborators were awarded the Golden Goose Award, which recognizes apparently esoteric research that later proves to be extremely valuable.
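In heavily condensed form, the idea works something like the sketch below: each job queue "advertises" its recent profitability on a shared dance floor, and servers reassign themselves in proportion to those adverts, the way foragers follow waggle dances. This is an illustrative reconstruction, not the published Georgia Tech algorithm.

```python
# Bee-inspired server allocation, heavily condensed and illustrative:
# profitable queues (nectar sources) recruit servers (foragers) via a
# shared "dance floor" of adverts, so capacity follows demand.
import random

def reallocate(servers, profits, follow_prob=0.3):
    """servers: dict queue -> number of servers currently assigned.
    profits: dict queue -> recent revenue per server (the advert)."""
    new = dict(servers)
    queues = list(profits)
    weights = [profits[q] for q in queues]
    for queue, count in servers.items():
        for _ in range(count):
            if random.random() < follow_prob:  # server checks the dance floor
                new[queue] -= 1
                # Follow an advert with probability proportional to profit.
                new[random.choices(queues, weights=weights)[0]] += 1
    return new

servers = {"video": 6, "search": 2, "checkout": 2}
profits = {"video": 1.0, "search": 0.5, "checkout": 4.0}  # checkout is spiking
for _ in range(5):
    servers = reallocate(servers, profits)
print(servers)  # servers drift toward the most profitable queue
```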

Dancing Honeybee Robots

Thanks to Frisch and his successors, researchers have long known that bees react differently to distinct vibration patterns that act like signals. In the past few years, the combination of computer vision with miniaturized accelerometers (ultrasensitive versions of the motion-detecting sensors in your cell phone) has enabled scientists to decode the specific subtle vibration signals made by living organisms — vibrations that are vital to their communication but largely undetected by humans. Indeed, these technological advances have made it possible to analyze bees’ communication and activity over their entire lifespan.

The next breakthrough — bridging what engineers call the “reality gap” between robots and living bees — is the creation of robots that accurately mimic these vibration patterns. Tim Landgraf, a professor of mathematics and computer science in Berlin, has devoted himself to this task for the past decade. Much of his research has focused on automating identification of individual bees and tracking their movements using computer vision and machine learning. One experiment analyzed around three million images taken over three days and tracked the trajectories of every single member of a honeybee hive — with only a 2% error rate.

Landgraf’s most innovative work involves creating robotic devices to communicate with honeybees in their own language. Working together with colleagues in the Free University of Berlin’s Center for Machine Learning and Robotics, Landgraf built a simple robot, which they christened RoboBee. Early prototypes “sucked,” as Landgraf put it: The bees would attack them, biting, stinging and dragging them out of the hive. 

“Honeybee dance language has dialects, just like human communities.”

The seventh prototype was the breakthrough. A statistically significant number of bees would follow the RoboBee’s dance and then fly to the specific location that Landgraf had coded into his honeybee robots. He had created, in essence, a bio-digital equivalent to Google Translate for bees. 

Sometimes his robots’ commands are successful with the bees and sometimes they are not, and Landgraf still isn’t sure why. His current hypothesis is that a separate, prior signal needs to be issued first, like a handshake before a conversation can begin. His robotic bees may sometimes be emitting this signal merely by chance, and in those cases the bees in the hive will listen. Or perhaps a separate vibrational signal from a different device is also needed; one such tool, recently invented by Cornell bee researcher Phoebe Koenig, accurately mimics the “shaking” signal that bees use to activate behavior.

One day, he hopes, the RoboBees will be viewed as “native” by the honeybees themselves, able to issue commands and recruit bees to fly to specific locations by waggle dancing. Future robots might even learn local bee dialects, which vary with habitat. And this is only the tip of the iceberg; his work could open up the possibility of understanding how the colony itself processes and integrates different kinds of information, somewhat like a living distributed computer with thousands of tiny, interconnected brains.

Landgraf is now going beyond bee monitoring and trying to build smart hives that are two-way communication devices. Vibrational, acoustic and pheromone signals could be released to warn bee colonies about threats (such as nearby fields treated with pesticides, or approaching storms) or to guide bees to find the best food sources available. 

Honey Hunters

As groundbreaking as these innovations might sound, Landgraf is not the first to have discovered how to speak to bees using vibroacoustics. Communication with bees is, in fact, an ancient human skill. 

The earliest known vibroacoustic device, the bullroarer, is regarded as one of humanity’s oldest musical instruments. Used in ceremonies by Indigenous peoples on all continents and in the Dionysian Mysteries by the ancient Greeks, it has a lesser-known function as a bee-hunting device. A bullroarer (turndun or bribbun to Australian Indigenous communities, kalimatoto padōk to the Pomo tribe in California) is deceptively simple: A long string or sinew is attached to a thin, rectangular piece of wood, stone or bone that is rounded at the ends. The cord is given a slight initial twist, and then the bullroarer is swung around in a circle. The resulting noise, caused by air vibrating at between 90 and 150 Hz, is surprisingly loud, similar to a propeller. The effect is startling and palpable: a resonating hum in your bones, like standing within a giant swarm of bees.

Africa’s /Xam (San) use bullroarers to cause bees to swarm and to direct them to new hives at locations that are easy for humans to access. The /Xam word for bullroarer is “!goin !goin,” which literally means “to beat” — like beating a drum. The bullroarer is spun in tandem with a dance that puts the /Xam into a trance-like state through which elders call upon and guide the bees. (Modern beekeeping practice employs a simple version of this method, called tanging, to calm bees and direct them to a hive.) Long before Western science discovered vibroacoustics, the /Xam had developed a nuanced understanding of bee communication. Anthropologists speak of a “copresence” that the /Xam developed with bees, based on mimetic sound capacities.

The /Xam are not unique in their ability to communicate with bees. In parts of Africa, people searching for honey are led to beehives by a bird: the greater honeyguide (its Latin name, Indicator indicator, is a bit of a giveaway). Honey hunting is an ancient art; some of the earliest recorded rock paintings in the world show humans hunting wild bees. And the animal kingdom’s preeminent honey hunters are honeyguides. 

“Honeybees can differentiate not only between flowers and landscapes but even human faces, demonstrating a remarkable capacity for processing complex visual information.”

Honeyguides are among the few birds (and fewer still vertebrates) on the planet that eat beeswax. Rich in nutrients and energy-giving lipids, beeswax is a sought-after treat for the birds. But most honeybee nests in Africa are well hidden in tree cavities, guarded by fierce bees that can kill the birds if they come too close. Honeyguides — likely aided by their strong sense of smell — know where the bees are but can’t get at the wax. So they partner up with an animal that isn’t nearly as good at finding bees but knows how to get the wax: humans.

In hunting together, the honeyguides and honey hunters have evolved a subtle form of cooperative communication. First, the hunters make their special call, signaling that they are ready to hunt honey. In the case of the Yao hunters in the Niassa National Reserve in Mozambique, who were the focus of research led by Claire Spottiswoode at Cambridge University, this sound is something like a brrr-hmmm: a loud trill followed by a grunt. In return, the honeyguides approach and sing back to the hunters with a special chattering call. 

The birds then fly in the direction of the bees’ nest, followed by the hunters. When the birds’ chatter dwindles and they stop flying, the hunters know they are close. They scan the tree branches and hit nearby tree trunks with their axes to provoke bees into revealing the location of the nest. The hunters then make a bundle of leaves and wood and set it alight just under the nest, smoking the bees into lethargy before felling the tree with their axes and chopping open the nest. As they fill buckets to take back home, flinging away dry combs containing no honey, they expose food for the birds. The honeyguides wait patiently, flying down to feed only after the humans are gone. Before the Yao hunters depart, they gather up the wax and present it on a little bed of fresh green leaves, honoring the contribution of the birds to their hunt.

Scientists have confirmed the claims of the Boran people in northern Kenya that they can infer the distance, direction and time to the nest from the bird’s calls, perching height and flight patterns. Spottiswoode also confirmed reciprocal signaling among the Yao: When honey hunters made their special sound, the probability of being guided by a honeyguide increased from 33% to 66%, and the overall probability of finding a bees’ nest from 17% to 54%.

We might expect the ability to interpret human sounds from trained animals like falcons and dogs, and even some wild animals like dolphins, but from wild birds? The sounds exchanged between hunters and honeyguides are also not the same across Africa. They are learned from elders, passed down from one generation to the next. How do birds learn to communicate with humans? We don’t actually understand this yet, but we do know that honeyguides don’t learn cooperative hunting from their parents. Honeyguide nestlings never meet their parents, as the species is brood parasitic: Adults lay their eggs in other birds’ nests, puncturing any host eggs they find to enhance the honeyguide hatchlings’ survival rate. Then the adult honeyguides leave. Right from birth, honeyguide hatchlings are equipped with sharp, hooked beaks, which they often use to kill any unfortunate host chicks that manage to survive. 

So how do the honeyguides learn the sounds? Spottiswoode and her colleagues are combining digital technologies with traditional knowledge to find out. They have developed a customized app that enables honey hunters to collect data on their activities. Deep in the forests of the Niassa, an area the size of Denmark with few roads and no internet connectivity, Yao honey hunters are roaming the forest armed with handheld Android devices, earning income from Cambridge University as digital conservation research assistants, singing to their honeyguide companions as they search for bees.

Governing The Swarm

Proponents of smart hives argue that digital technologies offer the potential to enhance environmental protection in a partnership between humans, insects and AI-enabled robots. Smart hives could use sensors and cameras to monitor bees and provide them with information to guide crop pollination and avoid polluted sites. The same technologies might be used to harness bees to map zones too dangerous for humans to reach, or power swarm robots to support environmental conservation, or even help out with search-and-rescue missions. 

As data accumulates, a twinning effect emerges, with some beehives now also existing virtually in a digital bee world that mirrors the physical one. This may help turn the tide in our race to save not only honeybees but many other species as well. When gathering nectar, bees continuously sample from the environment, so who better to act as a sentinel for environmental risk? Bees and other insects have been successfully trained to detect a range of chemicals and pollutants. Decoding a large number of dances from a specific area could help evaluate landscapes for sustainability and conservation. It could also make pollination more efficient and provide insights into how to ward off the widespread, alarming phenomenon of colony collapse disorder. Bees could also be recruited as live bioindicators: surveying, monitoring and reporting on the landscape in a fine-grained, inexpensive way that would be impossible for humans to achieve alone. 
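Running the dance translation in the other direction suggests how such monitoring could work: decode observed dances back into map coordinates, then aggregate them. In this hypothetical sketch, the hive position, sun azimuth and distance calibration are all assumed; the binned counts stand in for a foraging map that could be compared against known polluted or pesticide-treated sites.

```python
import math

# The inverse translation: recover a foraging location from an observed
# dance, then bin many decoded dances into a coarse grid. Hive position,
# sun azimuth and the 0.75 s/km calibration are all assumptions here.

def dance_to_location(run_angle_deg, run_duration_s, sun_azimuth_deg,
                      hive_xy=(0.0, 0.0), secs_per_km=0.75):
    bearing = math.radians((run_angle_deg + sun_azimuth_deg) % 360)
    distance_m = (run_duration_s / secs_per_km) * 1000.0
    # Compass bearing is measured clockwise from north: x uses sin, y cos.
    return (hive_xy[0] + distance_m * math.sin(bearing),
            hive_xy[1] + distance_m * math.cos(bearing))

def foraging_map(dances, sun_azimuth_deg, cell_m=250.0):
    """Count decoded dances per grid cell: a rough map of where the
    colony is feeding."""
    counts = {}
    for angle, duration in dances:
        x, y = dance_to_location(angle, duration, sun_azimuth_deg)
        key = (round(x / cell_m), round(y / cell_m))
        counts[key] = counts.get(key, 0) + 1
    return counts

# Three observed dances; two point to the same foraging patch.
print(foraging_map([(300, 0.38), (305, 0.40), (90, 1.5)], sun_azimuth_deg=180))
```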

But these technologies also create opportunities to weaponize bees. Bees have a long history with the military, and recently they have become instrumental in some security objectives. In the United States, the military has been actively testing bee bio-detectors in antinarcotics, homeland security and demining operations. The mobilization of what military scientists call “six-legged soldiers” requires genetic and mechanical manipulation of the bees’ nervous systems, migration patterns and social relationships. 

The Stealthy Insect Sensor Project, for example, trains bees to extend their tongues when they detect dangerous chemicals. As Jake Kosek writes in Cultural Anthropology, once trained, individual bees can be used in military monitoring devices. Trained bees are inserted into cartridges in monitors carried by soldiers. When bees react to, say, military-grade explosives, the microchip in the monitor translates this signal into an alarm. The trained bees live for no more than a few weeks, dying within the cartridge. A replacement cartridge is shipped to the soldier, and, according to the scientist responsible for the project, “you simply slip out one bee cartridge and replace it with another.” 

“Technology-driven advances in our ability to understand how bees live and communicate should hardly be repurposed to turn them into living tools for conducting warfare.”

Mobilizing bees to detect dangerous explosives might be beneficial for military personnel, but the manipulation and casual disposal of honeybees at scale should give us pause. Technology-driven advances in our ability to understand how bees live and communicate should hardly be repurposed to turn them into living tools for conducting warfare. 

There are other ways of thinking about our relationships with bees. For traditional cultures like the /Xam and the Yao, communicating with bees is embedded in sacred ceremony. Honey is both a practical and a spiritual matter, both food and sacrament. This view is not limited to hunter-gatherers in Africa; the earliest Neolithic representations of bee goddesses from Europe are over 8,000 years old. And many of humanity’s oldest written texts celebrate bees’ divinity. Almost 3,000 years ago, the scribes of the Brihadaranyaka Upanishad, a key text in Hinduism, recorded the “Honey Doctrine” — a theory of the organic, interrelated nature of life, wherein honey personifies cosmic nourishment for the luminous ground of being: “this earth is honey for all creatures, and all creatures are honey for this earth.”

To witness biohybrid bees engaging in reciprocal (if rudimentary) interspecies communication gives me a numinous sense of awe. To witness bees being converted into disposable, militarized sensing devices gives me a sense of dread. These two choices are emblematic of humanity’s relationship with nature. Will we choose dominion or kinship?

If we choose the latter, there is likely to be a great deal more for bees to say to us and for us to say to them. And they will not be the only species with which humans engage in dialogue. 
