We estimate that there are about 10^24 neurons on earth, with roughly an order of magnitude of uncertainty. Most of these belong to insects, with significant contributions from nematodes and fish. For insects, we multiplied the apparent number of insects on earth by the number of neurons in a small insect, the fruit fly. Most other classes of animal contribute 10^22 neurons at most, and so are unlikely to change the final analysis. For nematodes, we looked at studies that provide an average number of nematodes per square meter of soil or the ocean floor, and multiplied it by the number of neurons in Caenorhabditis elegans, an average-sized nematode. Fish may also play a significant role. We neglected a few categories that probably aren’t significant, but could conceivably push the estimate up.
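The multiplications above can be sketched as a quick Fermi estimate. The input numbers below are rough, commonly cited ballparks that I’m supplying for illustration – they are not necessarily the exact figures behind the full estimate:

```python
import math

# Illustrative order-of-magnitude inputs (placeholders, not the post's exact figures):
N_INSECTS = 1e19              # apparent number of insects on earth
NEURONS_PER_FLY = 1e5         # neurons in a fruit fly, standing in for a small insect
N_NEMATODES = 1e21            # nematodes, from per-square-meter soil/seafloor densities
NEURONS_PER_C_ELEGANS = 302   # neurons in C. elegans, an average-sized nematode

insect_neurons = N_INSECTS * NEURONS_PER_FLY            # dominant term, ~1e24
nematode_neurons = N_NEMATODES * NEURONS_PER_C_ELEGANS  # ~3e23, a significant contribution

total = insect_neurons + nematode_neurons
print(f"~10^{math.log10(total):.1f} neurons")
```

With these inputs the insect term dominates and the total lands near 10^24, which is why smaller categories (at 10^22 or below) can be safely neglected.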
Using a similar but less precise process based on evolutionary history and biomass over time, we also estimate that there have been 10^33 neuron-years of work over the history of life, again with around an order of magnitude of uncertainty.
This is a graph of extinction events over the history of animal life.
There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the late Devonian extinction and the Permian-Triassic extinction. There are three other major events – the Great Oxygenation Event, End Ediacaran extinction, and the Anthropocene / Quaternary extinction.
Let’s look at four of them. The first actually occurs right before this graph starts.
I decided not to discuss the Great Oxygenation Event in the talk itself, but it’s also an example – photosynthetic cyanobacteria evolved and started pumping oxygen into the atmosphere, which after filling up oxygen sinks in rocks, flooded into the air and poisoned many of the anaerobes, leading to the “oxygen die-off” and the “rusting of the earth.” I excluded it because A) it wasn’t about multicellular life, which, let’s face it, is much more relevant and interesting, and B) I believe it happened over such a long amount of time as to be not worth considering on the same scale as the others.
(I was going to jokingly call these “animal x-risks”, but figured that might confuse people about what the point of the talk was.)
The End-Ediacaran extinction
We don’t know much about Precambrian life, but the Ediacaran period seems to have been a peaceful time.
The Ediacaran sea floor was covered in a mat of algae and bacteria, and ‘critters’ – some were definitely animals, others we’re not sure – ate or lived on the mats. There were tunneling worms, limpets, some polyps, and the sand-filled curiosities termed “vendozoans”, which may have been single enormous cells like today’s xenophyophores, with the sand giving them structural support. The fiercest animal described is a “soft limpet” that ate microbes. They don’t seem to have had predators, and this period is sometimes known as the “Garden of Ediacara”. (1)
Around 542 million years ago, something happens – the Cambrian explosion. In a geologically brief window of about 5 million years, a huge variety of animals evolves.
Molluscs, trilobites and other arthropods, a creative variety of worms eventually including the delightful Hallucigenia, and sponges exploded into the Cambrian. They’re faster and smarter than anything that’s ever existed. The peaceful Ediacaran critters are either outcompeted or gobbled up, and vanish from the fossil record. The first shelled animals indicate that predation had arrived, and that the gates of the Garden of Ediacara had closed forever.
The Late Devonian extinction
Jump forward about 170 million years – 50% of genera go extinct. Marine species suffered the most in this event, probably due to anoxia.
There’s an unexpected possible culprit – around this time, plants made a few evolutionary leaps that began the first forests. Suddenly a lot of trees were drawing carbon dioxide out of the atmosphere, leading to global cooling, and large amounts of new soil led to nutrient-rich runoff, which caused widespread marine anoxia that decimated the oceans.
We do know that there was a series of extinction events, so forests were probably only a partial cause. The longer climate trend around the extinction was global warming, so the yo-yoing temperature (general warming, plus cooling from plants) likely contributed to the extinction. (2) It’s strange to think that the land before 375 million years ago didn’t have much in the way of soil – major root structures contributed to wearing rock away. Plus, once you have some soil, and once the first trees die and contribute their nutrients, you get more soil and more plants – a positive feedback loop.
The specific trifecta of evolutions that let forests take over land: significant root structures, complex vascular systems, and seeds. Plants prior to this were small, lichen-like, and had to reproduce in water. (3)
The Permian-Triassic extinction
96% of marine species go extinct. Most of this happens in a 20,000 year window, which is nothing in geologic time. This is the largest and most sudden prehistoric extinction known.
The cause of this one was confusing for a long time. We know the earth got warmer, or maybe cooler, and that volcanoes were going off, but the timing didn’t quite match up.
Volcanoes were going off for much longer than the extinction, and it looks like die-offs were happening faster than we’d expect from increasing volcanism, or standard climate change cycles. (4) One theory points out that die-offs line up with exponential or super-exponential growth, as in, from a replicating microbe. Remember high school biology?
One theory implicates Methanosarcina, an archaeon that evolved a chemical process for turning organic carbon into methane around the same time. Remember those volcanoes? They were spewing enormous amounts of nickel – an important co-factor for that process.
(Methanosarcina appears to have gotten the gene from a cellulose-digesting bacterium – definitely a neat trick. (5) )
The theory goes that Methanosarcina picked up its new pathway, and flooded the atmosphere with methane, which raised the surface temperature of the oceans to 45 degrees Celsius and killed most life. (2)
This report is fairly recent, and it’s certainly unique, so I don’t want to claim that it’s definitely confirmed, or confirmed to the same degree that, say, the Chicxulub impact theory is. That said, at the time of this writing, the cause of the Permian-Triassic extinction is unclear, and the methanogen theory doesn’t seem to have been seriously criticized or debunked.
Quaternary and Anthropocene extinctions
Finally, I’m going to combine the Quaternary and Anthropocene events. They don’t show up on this chart because the data’s still coming in, but you know the story – maybe you’re an ice-age megafauna, or rainforest amphibian, and you are having a perfectly fine time, until these pretentious monkeys just walk out of the Rift Valley, and turn you into a steak or a corn farm.
Because of humans, since 1900, extinctions have been happening at about a thousand times the background rate.
(Looking at the original chart, you might notice that the “background” number of extinctions appears to be declining over time – what’s with that? Probably nothing cosmic – more recent species are just more likely to survive to the present day.)
Impacts from evolutionary innovation
You can probably see a common thread by now. These extinctions were caused – at least in part – by natural selection stumbling upon an unusually successful strategy. Changing external conditions, like nickel from volcanoes or other climate change, might contribute by giving an edge to a new adaptation.
In some cases, something evolved that directly outcompeted the others – biotic replacement.
In others, something evolved that changed the atmosphere.
I’m going to throw in one more – that any time a species goes extinct due to a new disease, that’s also an evolutionary innovation. Now, as far as we can tell, this is extremely rare in nature, but possible. (7)
Are humans at risk from this?
From natural risk? It seems unlikely. These events are rare and can take on the order of thousands of years or more to unfold, at which point we’d likely be able to do something about it.
That is, as far as we know – the fossil record is spotty. As far as I can tell, we were able to pin the worst of the Permian-Triassic extinction down to 20,000 years only because that’s the resolution limit of the fossil band formed at the time. It might actually have been quicker.
Even determining if an extinction has happened or not, or if the rock just happened to become less good at holding fossils, is a struggle. I liked this paper not really for the details of extinction events (I don’t think the “mass extinctions are periodic” idea is used these days), but for the nitty gritty details of how to pull detailed data out of rocks.
That said, for calibrating your understanding, it seems possible that extinctions from evolutionary innovation are more common than mass extinctions caused by asteroids (only one mass extinction has been solidly attributed to an asteroid: the Chicxulub impact that ended the reign of the dinosaurs). That’s not to say large asteroid impacts (bolides) don’t cause smaller extinctions – one source estimated the bolide:extinction ratio to be 175:1. (2)
Plus, having a brain matters, and I think I can say it’s really unlikely that a better predator (or a new kind of plant) is going to evolve without us noticing. There are some parallels here with, say, artificial intelligence risk, but I think the connection is tenuous enough that it might not be useful.
If we learn that such an event is happening, it’s not clear what we’d do – it depends on specifics.
But consider synthetic biology – the thing where we design new organisms and see what happens. As capabilities expand, should we worry about lab escapes on an existential scale? I mean, it has happened in nature.
Evolution has spent billions of years trying to design better and better replicators. And yet, evolutionary innovation catastrophes are still pretty rare.
That said, people have a couple of advantages:
We can do things on purpose. (I mean, a human working on this might not be trying to make a catastrophic geoweapon – but they might still be trying to make a really good replicator.)
We can come up with entirely new things. When natural selection innovates, every incremental step on the way to the final result has to be an improvement on what came before. It’s like trying to build a footbridge where, at every single step of construction, it has to support more weight than before. We don’t have those constraints – we can design a bridge, then build it, then have people walk across it. We can design biological systems that nobody has seen before.
This question – whether we can design organisms more effective than evolved ones – is still open, and crucial for telling us how concerned we should be about synthetic organisms in the environment.
People are concerned about synthetic biology and the risk of organisms “escaping” from a lab, industrial setting, or medical setting into the environment, and perhaps persisting or causing local damage. They just don’t seem to be worried on an existential level. I’m not sure if they should be, but it seems like the possibility is worth considering.
For instance, a company once almost released large quantities of an engineered bacterium that turned out to produce enough ethanol in soil to kill all plants in a lab microcosm. We don’t appear to have reason to think it would have outcompeted other soil biota and actually caused an existential or even a local catastrophe, but it was caught at the last minute, and the implications are clearly troubling. (9)
Natural Die-offs of Large Mammals: Implications for Conservation. I’m pretty sure I’ve seen at least a couple of other sources mention this, but can’t find them right now. I had chytridiomycosis in mind as well. This seems like an important research project and obviously has some implications for, say, biological existential risk.
The drive from Seattle to San Francisco along I-5 is a 720-mile panorama of changing biomes. Forest, farmland, and the occasional big city get very gradually drier, sparser, flatter. You pass a sign for the 45th parallel, marking equidistance between the equator and the North Pole. Then the road clogs with semis chugging their way up big craggy hills, up and up, and then you switch your foot from the gas to the brake and drop down the hills into more swathes of farmland, and more intense desert, with only the very occasional tiny town to get gas and bottles of cold water. Eventually, amid the dry hills, you see the first alien tower of a palm tree, and you know the desert is going to break soon.
Of course, I like the narrative arc on the drive back even better. Leaving Berkeley in the morning, you hit the desert in its element – bright and dry – without being too hot. That comes later, amid the rows and rows of fruit and nut trees, which turn into the mountains again, and into the land on the side of the mountains, now dominated by lower bushy produce crops and acres of flat grain land. You pass a sign for Linn County, the Grass Seed Capital of the US. Finally, well into dusk, you hit the Washington border, and the first rain you’ve seen on the entire trip starts falling right on cue. Then you meet some friends in your old college town for a quick sandwich and tomato soup at 11:30 PM, and everything is set right with the world, letting you arrive back home by an exhausted but satisfied 1:30 AM.
I like this drive for giving a city kid a slice of agriculture. I’ve written about the temporal scale of developments in agriculture, but the spatial scale is just as incredible. About 50% of land in the US is agricultural. Growing the calorie-dense organisms that end up on my plate, or fueling someone’s car, or exported onto someone else’s plate, or someone else’s feedbag, is the result of an extraordinary amount of work and effort.
I talked about the plants – there’s trees for fruit and nuts, vines, grain, corn, a million kinds of produce. I only assume this gets more impressive when you go south from San Francisco. (In recent memory, I’ve only visited as far south as Palo Alto, and was shocked to discover a lemon tree. With lemons on it! In December! Who knew? Probably a lot of you.)
There’s also animals – aside from a half dozen alpacas and a few dozen horses, you spot many sheep and many, many cows from the highway. The cattle ranches were quite pretty and spacious – I wonder if this is luck, or if there’s some kind of effort to put the most attractive ranches close to the highway. Apparently there are actual feedlots along I-5 if you keep going south. I certainly didn’t notice any happy chicken farms along the way.
And then there are the bees.
Bees are humanity’s most numerous domesticated animal. You don’t see them, per se, since they are, well, bees. What you can see are the hives – stacks of white boxes like lost dresser drawers congregating in fields. Each box contains the life’s work of a colony of about 19,200 bees.
The boxes look like this. The bees look like this.
Bees are enormously complicated and fascinating insects. They live in the densely packed hives described above, receiving chemical instructions from one breeding queen, and eusocially supporting her eggs, which become the next generation of the hive. In the morning, individual bees leave the hive, fly around, and search for pollen sources, which they shove into pouches on their legs. On returning, if they’ve located a juicy pollen source, they describe it to other bees using an intricate physical code known as the waggle dance.
What images of this don’t clearly show is that in normal circumstances, this is done inside the hive, under complete darkness, surrounded by other bees who follow it with their antennae.
The gathered pollen is used to sustain the existing bees, and, of course, create honey – the sugar-rich substance that feeds the young bee larvae and the hive through winter. Each “drawer” of the modern Langstroth beehive – seen above – contains ten wooden frames, each filled in by the bees with a wax comb dripping with honey. At harvesting time, each frame is removed from the hive, the carefully placed wax caps covering each honey-filled comb are broken off, and the honey is extracted via centrifuge. (More on the harvesting practice.)
Each beehive makes about 25 pounds of harvestable honey in a season, and each pound of honey represents 55,000 miles flown by bees. Given the immense amount of animal labor put into this food, I want to investigate the claim that purchasing honey is a good thing from an animal welfare perspective.
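Taking the two per-hive figures above at face value, the implied flight distance per hive per season is simple arithmetic (a back-of-envelope check on the post’s own numbers, not an independently sourced figure):

```python
# Figures stated above, taken at face value:
HONEY_LBS_PER_HIVE = 25       # harvestable honey per hive per season
MILES_PER_LB_HONEY = 55_000   # bee-miles flown per pound of honey

miles_per_hive = HONEY_LBS_PER_HIVE * MILES_PER_LB_HONEY
print(f"{miles_per_hive:,} bee-miles per hive per season")  # 1,375,000
```

That’s on the order of a million miles of bee flight embodied in each hive’s seasonal harvest.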
I’m not about to say that people who care about animal welfare should be fine eating honey because bees don’t have moral worth, because I suspect that’s not true. I suspect that bees can and do suffer, and at the very least, that we should consider that they might. The capacity to suffer is evolutionary – it’s an incentive to flee from danger, learn from mistakes, and keep yourself safe when damaged. Bees have a large capacity to learn, remember, and exhibit altered behavior when distressed.
Like other social insects, however, bees also do a few things that contraindicate suffering in most senses, like voluntarily stinging invaders in a way that tears out some internal organs and leaves them at high risk of death. In addition, insects possibly don’t feel pain at the site of an injury (though I’m not sure how well studied this is over all insects) (more details). They may feel some kind of negative affect distinct from typical human pain. In any case, it seems like bee welfare is possibly important, and since there are 344,000,000,000,000 of them under our direct care, I’m inclined to err on the side of “being nice to them” lest we ignore an ongoing moral catastrophe just because we didn’t think we had incontrovertible proof at the time.
This is harder than it sounds, because of the almonds.
The beehives I saw on I-5 don’t live there full-time. They’re there because of migratory beekeepers, who load hives onto trucks and drive them all over the country to different fields of different crops. As we were all told in 3rd grade, bees are important pollinators, and while the fields of old were pollinated by a mix of wild insects and individually managed hives, like other animal agriculture, the bees of today are managed on an industrial scale.
(We passed at least one truck that was mostly covered with a tarp, but had distinctive white boxes visible in the corners. I’m pretty sure that truck was full of bees.)
60-75% of the US’s commercial hives congregate around Valentine’s Day in the middle of California to pollinate almonds. When we say bees are important pollinators, one instance of this is that almonds are entirely dependent on bees – every single almond is the result of an almond tree flower pollinated by a bee. California grows 82% of the world’s almonds.
Honey bees are also responsible for:
90% of apple, avocado, blueberry, cranberry, asparagus, broccoli, carrot, cauliflower, onion, vegetable seed, legume seed, rapeseed, and sunflower pollination.
80%+ of cherry, kiwifruit, macadamia nut, celery, and cucumber pollination.
70%+ of grapefruit, cantaloupe, and honeydew pollination.
60%+ of pear, plum, apricot, watermelon, and alfalfa seed and hay (a major food source for cattle) pollination.
40%+ of tangerine, nectarine, and peach pollination.
5-40% of pollination for quite a few other crops.
Our agricultural system – and by extension, the food you eat – is in huge part powered by those 344 trillion bees. Much of this bee power is provided by migratory beekeepers. In total, beekeepers in the US make about 30% of their money from honey, and 70% from renting out their bees for pollination.
Sidenote: All of the honey bees kept in the US are one species. (There are also 3000 wild bee species, as well as wild honey bees.) So we’re putting all of our faith in them. If you haven’t been living under a rock for the last decade, you may have heard of colony collapse disorder, which I’d wager is the kind of thing that becomes both more likely and more catastrophic when your system is built on an overburdened monoculture.
Does this mean you should actively eat honey? I really don’t know enough about the economics to say one way or the other. If you’re averse to using animal products, I don’t believe you’re obligated to eat honey – there are many delicious products that do what honey does, from plain sugar to maple syrup to agave to vegan honey.
But if you don’t eat honey and tell other people not to eat honey, I imagine you’re doing that because of a belief that this will lead to fewer bees being brought into existence and used by humans. And if you believe it’s better to have fewer bees used by humans, I’m very curious what you think they’ll be replaced with. What if you want to reduce the amount of honeybee suffering involved in your diet, or in agriculture in general?
One thing people have thought of is encouraging pollination by wild bees and other insects. When you think about the volume of honeybees you’d need to replace, though, you start to encounter real ethical questions about the welfare of those wild bees. Living in the wild as an insect is plausibly pretty nasty. (I don’t have evidence either way on whether honey bees or wild bees have better lives – but if you care about honey bees at all, it bears considering that this would mean replacing a huge number of honey bees with other life forms, and the fact that those would live on their own in hedges next to a field, rather than in a wooden hive, doesn’t automatically mean they’d be happier.)
You could eat crops that aren’t mostly pollinated by honeybees. This page lists some – a lot of vegetables make the list. Grains, cereals, and grasses also tend to be wind-pollinated.
Beekeeping seems like it might be better than increasing the number of wild pollinators, but migratory beekeeping as a practice reduces bee lifespans, and increases stress markers and parasites compared to stationary hives. Reducing the amount of travel modern hives do might be helpful. Maybe we could just stop growing almonds?
(Although that still leaves us with the problem of apples, asparagus, avocados, blueberries, broccoli, carrots, cauliflower, cranberries, onions, rapeseed, sunflowers, vegetable seeds, legume seeds…)
It also seems completely possible to raise beehives that are only used for pollination and not honey. This still requires animal labor and more individual bees, but the bees would have less stressful lives.
Stygiomedusa gigantea was discovered around Antarctica, and has been spotted about once per year over the past century. It’s one meter in diameter, and its tentacles are up to 10 meters long. It’s apparently sometimes known as the “guardian of the underworld”. Image from a Monterey Bay Aquarium Research Institute ROV.
The Sentience Politics research agenda (plus supplementary documents for my pieces) is here.
Sentience Politics describes itself: “Sentience Politics is an antispeciesist political think tank. We advocate for a society in which the interests of all sentient beings are considered, regardless of their species membership, and we rigorously analyze the evidence to assess and pursue the most effective ways to help all sentient beings. Among other activities, we organize political initiatives, publish scientific policy papers, and host conferences to bring forward-thinking minds together to address the major sources of suffering in the world.” I think their work is valuable and recommend checking them out.
You may not be aware that I have an about page. If you want to commission me to do some research for you, or have suggestions for future posts, let me know.
If anyone has suggestions for ways to make an online dichotomous key, let me know. (Workflowy has been suggested, but I don’t think it’s flexible enough to make a nice-looking large dichotomous key with a lot of options.)
I’m planning on looking through old posts and updating them factually, or at least adding a disclaimer on top to reflect any information I no longer suspect is accurate.
Sometimes, the more I know about a topic, the less skeptical I am about new things in that field. I’m expecting them to be weird.
One category is deep sea animals. I’ve been learning about them for a long time, and when I started, nearly anything could blow my mind. I’d look up sources all the time because they all sounded fake. Even finding a source, I’d be skeptical. There’s no reason for anyone to photoshop that many pictures of that sea slug, sure, but on the other hand, LOOK AT IT.
Nowadays, I’ve seen even more deep sea critters, and I’m much less skeptical. I think you could make up basically any wild thing and I’d believe it. You could say: “NOAA discovered a fish with two tails that only mates on Thursdays.” Or “National Geographic wrote about this deep-sea worm that’s as smart as a dog and fears death.” And I’d be like “yeah, that seems reasonable, I buy it.”
Here’s a test. Five of these animals are real, and three are made up.
A jellyfish that resembles a three-meter-diameter circular bedsheet
A worm that, as an adult, has no DNA.
A worm that branches as it ages, leaving it with one head but hundreds of butts.
A worm with the body plan of a squid.
A sponge evolved to live inside of fish gills.
A sea slug that lives over a huge geographic region, but only in a specific two-meter wide range of depth.
A copepod that’s totally transparent from some angles, and bright blue from others.
A shrimp that shuts its claws so fast it creates a mini sonic boom.
(Answers at bottom of page. Control-F “answers” to jump there.)
Of course, I’m only expecting to be surprised about information in a certain sphere. If you told me that someone found a fish that had a working combustion engine, or spoke German, I’d call bullshit – because those things are clearly outside the realm of zoology.
Still, there’s stuff like this. WHY ARE YOU.
Some other categories where I have this:
Modern American politics
Florida Man stories
Head injury symptoms/aftermath
Places extremophiles live
Note that these aren’t cases where I tend to underapply skepticism – these are cases where, most of the time, not being skeptical works. If people were making up fake Florida Man stories, I’d have to start being skeptical again, but until then, I can rely on reality being stranger than I expect.
What’s the deal? Well, a telling instance of the phenomenon, for me, is archaeal viruses.
Some of these viruses are stable and active in 95° C water.
This archaeal virus is shaped like a wine bottle.
This one is shaped like a lemon.
This one appears to have evolved independently and shares no genes with other viruses.
This one builds seven-sided pyramids on the surfaces of cells it infects.
These are really surprising to me because I know a little bit about viruses. If you know next to nothing about viruses, a lemon-shaped virus probably isn’t that mind-blowing. Cells are sphere-shaped, right? A lemon shape isn’t that far from a sphere shape. The ubiquitous spaceship-shaped T4 is more likely to blow your mind.
Similarly, if you were a planet-hopping space alien first visiting earth, and your alien buddy told you about the giant garbage-bag shaped jellyfish, that probably wouldn’t be mind-blowing – for all you know, everything on earth looks like that. All information in that category is new to you, and you don’t have enough context for it to seem weird yet.
At the same time, if I studied archaeal viruses intensely, I’d probably get a sense of the diversity in the field. Some strange stuff like the seven-sided pyramids would still come along as it’s discovered, but most new information would fit into my models.
This suggests that for certain fields, there’s going to be some amount of familiarity where I’m surprised by all sorts of things, but on the tail ends, I either don’t know enough to be surprised – or already know everything that might surprise me. In the middle, I have just enough of a reference class that it frequently gets broken – and I end up concluding that everything is weird.
Whether viruses are alive or not is a silly question. Here’s why.
(I make a handful of specific claims here that I expect are not universally agreed upon. In the spirit of tagging claims and also as a TL;DR, I’ll list them.)
Whether things are alive or not is a categorization issue.
The criterion that living organisms must be made of cells is a bad one, even excluding viruses.
Some viruses process energy.
A virus alone may not process energy, but a virus-infected cell does, and meets all criteria for life.
Viruses are not an edge case in biology, they’re central to it.
The current criteria for life seem to be specifically set up to exclude viruses.
What does it mean to be alive?
Whether viruses are alive is a semantic issue. It isn’t a question about reality, in the same way that “how many viruses are there?” or “do viruses have RNA?” are questions about reality. It’s a definitional question, and whether they fall in the territory of “alive” or not depends on where you draw the borders.
Fortunately, scientists tentatively use a standard set of borders. It’s not exactly set in stone, but it’s a starting point. In intro biology in college, I learned the following 7 characteristics (here, copied from Wikipedia)*:
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells — the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Adaptation: the ability to change over time in response to the environment. This ability is fundamental to the process of evolution and is determined by the organism’s heredity, diet, and external factors.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
The simple answer
Viruses meet all of the criteria for living things, except 2) and maybe 3).
The complicated answer
For the complicated answer, let’s go a level deeper.
Simply put, criterion 2) states that living things must be made of cells.
Criterion 3) states that living things must metabolize chemical energy in order to power their processes.
Are viruses made of cells?
Okay, here’s what I’ve got. I think 2) is a bad criterion. I think that criteria for living things should not be restricted to earth*, and therefore not restricted to our phylogenetic history. Cells are a popular structure on earth, but if we go to space and find large friendly aliens that are made of proteins, reproduce, evolve, and have languages, we’re not going to call them “non-living” just because they run on something other than cells. Even if the definition is useful up until that point, we’d change it after we found those aliens – suggesting it wasn’t a good criterion in the first place.
(Could large aliens not be made out of cells? Difficult to say – multicellularity has been a really, really popular strategy here on earth, having evolved convergently at least 25 times. But cells as we know them only evolved once or twice. Also, it’s not clear to what degree convergent evolution applies to things outside of our particular evolutionary history, because n=1.)
So no, viruses don’t meet criterion 2), although the importance of criterion 2) is debatable.
Do viruses process energy?
What about criterion 3)? Do viruses process energy? Kind of.
Let’s unpack “processing energy.” Converting one kind of chemical energy to another is pretty generic. In bacteria and eukaryotes, what does that look like?
Go ahead. Enlarge it. Look around. Contemplate going into biochemistry. Here’s where it starts to get complicated.
One of the major energy sources in cells is converting adenosine triphosphate (ATP) into adenosine diphosphate (ADP). This transformation powers so many cellular processes in all different organisms that it’s called the currency of life.
Bacteriophage T4 encodes an ATP→ADP-powered motor. It’s used during the virus’ reproduction, to package DNA inside nascent virus heads.
Some viruses of marine cyanobacteria encode various parts of the electron transport chain, the series of motors that pump protons across membranes and create a gradient that results in the synthesis of ATP. They encode these as a sort of improvement on the ones already present in the hosts.
Do those viruses process chemical energy? Yes. If you’re not convinced, ask yourself: Is there some other pathway you’d need to see before you consider a virus to encode a metabolism? If so, are you absolutely certain that we will never find such a virus? I don’t think I would be.
Wait, you may say. Sure, the viruses encode those and do those when infecting a host. But the viruses themselves don’t do them.
To which I would respond: A pathogenic bacterial spore is, basically, metabolically inert. If it nestles into a warm, nutrient-rich host, it blossoms into life. Our understanding of living things includes a lot of affordance for stasis.
By the same token, a virus is a spore in stasis. A virus-infected cell meets all the criteria of life.
(I think I heard this idea from Lindsay Black’s talk at the 2015 Evergreen Bacteriophage meeting, but I might be misremembering. The scientists there seemed very on-board with the idea, and they certainly have another incentive to claim that their subjects are alive, which is that studying living things sounds cooler than studying non-living things – but I think the point is still sound.)
Do we really want only some viruses to count as alive?
To summarize, cells infected by T4 or some marine cyanophages – and probably other viruses – meet all of the criteria of life.
It seems ridiculous to include only those viruses in the domain of ‘life’, and not others that don’t include those chemical processes. Viruses have phylogeny. Separating off some viruses that are alive and some that aren’t is pruning branches off of the evolutionary tree. We want a category of life that carves nature at its joints, and picking only some viruses does the opposite of that.
Wait, it gets more complicated. Some researchers have proposed giant viruses as a fourth domain of life (alongside the standard bacteria, archaea, and eukaryotes.) You’ll note that it’s giant viruses, and not all the viruses. That’s because viruses probably aren’t monophyletic. Hyperthermophilic crenarchaea phages, in addition to being a great name for your baby, share literally no genes with any other virus. Some other viruses have only extremely distant genetic similarities to others, which may have been swapped in by accident during past infections. This is not terribly surprising – we know that parasites have convergently evolved perhaps thousands of times. But it certainly complicates the issue of where to put viruses in the tree.
Viruses are not just an edge case
When people talk about the criteria of life, they tend to consider viruses as an edge case, a weird outlier. This is misleading.
Worldwide, viruses outnumber cells 10 times over. They’re not an edge case in biology – by number of individuals, or amount of ongoing evolution, they’re most of biology. And it’s rather suspicious that the standard criteria for life seem to be set up to include every DNA-containing evolving organism except for viruses. If we took out criteria 2) and 3), what else would that fold in? Maybe prions? Anything else?
Accepting that ‘life’ is a word that tries to draw out a category in reality, why do we care about that category? When we ask “is something alive?”, here are some questions we might mean instead.
Is something worth moral consideration? (Less than a bacterium, if any.)
Should biologists study something? (A biologist is much more suited to study viruses than a chemist is.)
Does something fit into the tree of life? (Yes.)
If we find something like it on another planet, should we celebrate? (Yes, especially because a parasite has to have a host nearby.)
When I think of viruses – fast-moving, promiscuous gene-swappers, picking up genes from both each other and their hosts, polyphyletic, here from the beginning – I picture a parasitic vine weaving around the tree of life. It’s not exactly an answer, but it’s a metaphor that’s closer to the truth.
* Carl Sagan’s definition of life, presented to and accepted by a committee at NASA, is “a self-sustaining chemical system capable of Darwinian evolution.” This nicer, neater definition folds in viruses, prions, and aliens. The 7-point system is the one I was taught in college, though, so I’m writing about that.
We live in a rather pleasant time in history where biotechnology is blossoming, and people in general don’t appear to be using it for weapons. If the rest of human existence can carry on like this, that would be great. In case it doesn’t, we’re going to need back-up strategies.
Here, I investigate some up-and-coming biological innovations with a lot of potential to help us out here. I kept a guiding question in mind: will biosecurity ever be a solved problem?
If today’s meat humans are ever replaced entirely with uploads or cyborg bodies, biosecurity will be solved then. Until then, it’s unclear. Parasites have existed since the dawn of life – we’re not aware of any organism that doesn’t have them. When considering engineered diseases and engineered defenses, we’ve left the billions-of-years-old arms race for a newer and faster-paced one, and we don’t know where an equilibrium will fall yet. Still, since the arrival of germ theory, our species has found a couple of broad-spectrum medicines that have significantly reduced the threat from disease: antibiotics and vaccines.
What technologies are emerging now that might fill the same role in the future?
Bacteriophage therapy
What it is: Viruses that attack and kill bacteria.
What it works against: Bacteria.
How it works: Bacteriophage are bacteria-specific viruses that have been around since, as far as we can tell, the dawn of life. They occur frequently in nature in enormous variety – it’s estimated that for every bacterium on the planet, there are 10 phages. If you get a concentrated stock of bacteriophage specific to a given bacterial strain, they will precisely target and eliminate that strain, leaving any other bacteria intact. They’re used therapeutically in humans in several countries, and are extremely safe.
Biosecurity applications: It’s hard to imagine even a cleverly engineered bacterium that’s immune to all phage. Maybe if you engineered a bacterium with novel surface proteins, it wouldn’t have phage for a short window at first, but wait a while, and I’m sure they’ll come. No bacterium in nature, as far as we’re aware, is free of phage. Phage have been doing this for a very, very long time. Phage therapy is not approved for wide use in the US, but has been established as being safe and quite effective. A small dose of phage can have powerful impacts on infection.
Current constraints: Lack of research. Very little current precedent for using phage in the US, although this may change as researchers hunt for alternatives to increasingly obsolete antibiotics.
Choosing the correct phage for therapeutics is something of an art form, and phage therapy tends to work better against some kinds of infections than others. Also, bacteria will evolve resistance to specific phages over time – but once that happens, you can just find new phages.
DRACO
What it is: DRACO (Double-stranded RNA Activated Caspase Oligomerizer), an engineered protein designed to selectively kill virus-infected cells.
What it works against: Viruses. (Specifically, double-stranded RNA, single-stranded RNA, and double-stranded DNA viruses (dsRNA, ssRNA, and dsDNA), which covers most human viruses.)
How it works: DsDNA, dsRNA, and ssRNA virus-infected cells each produce long sequences of double-stranded RNA at some point while the virus replicates. Human cells make dsRNA occasionally, but it’s quickly cleaved into handy little chunks by the enzyme Dicer. These short dsRNAs then go about regulating gene expression in the cell. (Dicer also cuts up incoming long dsRNA from viruses.)
DRACO is a fusion of several proteins that, in concert, goes a step further than Dicer. It has two crucial components:
A protein domain that recognizes and binds viral sequences on long dsRNA
A protein domain that triggers apoptosis when several DRACO molecules bind the same dsRNA
Biosecurity applications: The viral sequences it recognizes are pretty broad, and presumably, it wouldn’t be hard to generate additional recognition sequences for arbitrary sequences found in any target virus.
Current constraints: Delivering engineered proteins intracellularly is a very new technology. We don’t know how well it works in practice.
DRACO, specifically, is extremely new. It hasn’t actually been tested in humans yet, and may encounter major problems in being scaled up. Could viruses evolve a means of evading DRACO? I’m not sure it would be trivial for a virus to stop using long stretches of dsRNA. It could, however, evolve not to use targeted sequences (less concerning, since new targeting sequences could be used), inactivate some part of the protein (more concerning), or modify its RNA in some way to evade the protein. Even if resistance is unlikely to evolve on its own, it’s possible to engineer resistant viruses.
On a meta level, DRACO’s inventor made headlines when his NIH research grant ran out, and he used a Kickstarter campaign to fund his research. Lack of funding could end this research in the cradle. On a more meta level, if other institutions aren’t leaping to fund DRACO research, experts in the field may not see much potential in it.
Programmable RNA vaccines
What it is: RNA-based vaccines that are theoretically creatable from just having the genetic code of a pathogen.
What it works against: Just about anything with protein on its outside (virus, bacteria, parasite, potentially tumors.)
How it works: An RNA sequence is made that codes for some viral, bacterial, or other protein. Once the RNA is inside a cell, the cell translates it and expresses the protein. Since it’s not a standard host protein, the immune system recognizes and attacks it, effectively creating a vaccine for that molecule.
The idea for this technology has been around for 30-odd years, but the MIT team behind this work was the first to package the RNA in a branched, virus-shaped structure called a dendrimer (which can actually enter and function in the cell.)
Biosecurity applications: Sequencing a pathogen’s genome should be quite cheap and quick once you get a sample of it. An associate professor claims that vaccines could be produced “in only seven days.”
Current constraints: Very new technology. May not actually work in practice like it claims to. Might be expensive to produce a lot of it at once, like you would need for a major outbreak.
Broad-spectrum antivirals
What it is: Compounds that are especially effective at destroying viruses at some point in their replication process, and can be taken like other drugs.
What it works against: Viruses
How it works: Conventional antivirals are generally tested and targeted against specific viruses.
The class of drugs called thiazolides, particularly nitazoxanide, is effective against not only a variety of viruses, but a variety of parasites, both helminthic (worms) and protozoan (protists like Cryptosporidium and Giardia.) Thiazolides are effective against bacteria, both gram-positive and gram-negative (including tuberculosis and Clostridium difficile). And nitazoxanide is incredibly safe. This apparent wonderdrug appears to disrupt creation of new viral particles within the infected cell.
There are others, too. For instance, beta-defensin P9 is a promising peptide that appears to be active against a variety of respiratory viruses.
Biosecurity applications: Something that could treat a wide variety of viruses is a powerful tool against possible threats. It doesn’t have to be tailored for a particular virus – you can try it out and go.
Also, using a single compound drastically increases the odds that a virus will evolve resistance. In current antiviral treatments, patients are usually hit with a cocktail of antivirals with different mechanisms of action, to reduce the chance of a virus developing resistance to them.
The space for finding new antivirals seems promising, but they won’t solve viruses any more than antibiotics have solved bacterial infections – which is to say, they might help a lot, but will need careful shepherding and combination with other tactics to avoid a crisis of resistance. Viruses tend to evolve more quickly than bacteria, so resistance will happen much faster.
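The cocktail logic can be made concrete with a toy calculation. Assuming, purely for illustration, that a resistance mutation against any one drug arises in roughly 1 in 10^6 replications, and that resistance to drugs with different mechanisms requires independent mutations, the chance of a virion being born resistant to the whole cocktail is the product of the per-drug probabilities:

```python
# Toy model: probability that a single replication event yields a virus
# resistant to every drug in a cocktail at once, assuming independent
# resistance mutations. The 1e-6 rate is illustrative, not measured.
def p_multi_resistant(per_drug_rate: float, n_drugs: int) -> float:
    return per_drug_rate ** n_drugs

single = p_multi_resistant(1e-6, 1)  # one drug:    1 in a million
triple = p_multi_resistant(1e-6, 3)  # three drugs: ~1 in 10^18
```

Under these assumptions, a three-drug cocktail makes simultaneous resistance about a trillion times less likely than monotherapy, which is the whole point of combination treatment.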
Gene drives
What it is: Genetically altering organisms to spread a certain gene ridiculously fast – such as a gene that drives the species to extinction, or renders them unable to carry a certain pathogen.
What it works against: Sexually reproducing organisms, vector-borne diseases (with sexually reproducing vectors.)
Biosecurity applications: Gene drives have been in the news lately, and they’re a very exciting technology – not just for treating some of the most deadly diseases in the world. To see their applications for biosecurity, we have to look beyond standard images of viruses and bacteria. One possible class of bioweapon is a fast-reproducing animal – an insect or even a mouse, possibly genetically altered, which is released into agricultural land as a pest, then decimates food resources and causes famine.
Another is the release of pre-infected vectors. This has already been done – infamously by Japan’s Unit 731, which used hollow shells to disperse fleas carrying the bubonic plague into Chinese villages. Once you have an instance of the pest or vector, you can sequence its genome, create a genetic modification, and insert the modification along with the gene drive sequences. This can either wipe the pest out, or make it unable to carry the disease.
Current constraints: A gene drive hasn’t actually been released into the wild yet. It may be relatively easy for organisms to evolve strategies around the gene drive, or for the gene drive genes to spread somehow. Even once a single gene drive, say, for malaria, has been released, it will probably have been under deep study for safety (both directly on humans, and for not catastrophically altering the environment) in that particular case – the idea of a gene drive released on short notice is, well, a little scary. We’ve never done this before.
Additionally, there are currently a lot of objections and fears around gene drives in society, and the idea of modifying ecosystems and things that might come into contact with people isn’t popular. Due to the enormous potential good of gene drives, we need to be very careful about avoiding public backlash to them.
Finding the right modification to make an organism unable to carry a pathogen may be complicated and take quite a while.
Gene drives act on the pest’s time, not yours. Depending on the generation time of the organism, it may be quite a while before you can A) grow up enough of the modified organism to productively release, and B), wait while the organism replicates and spreads the modified gene to enough of the population to have an effect.
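Both the super-Mendelian spread and the generation-time bottleneck show up in a toy deterministic model (my own sketch, not from any gene-drive paper): with conversion efficiency c, a heterozygote transmits the drive allele with probability (1 + c)/2 instead of the Mendelian 1/2, so the allele frequency p updates each generation as p′ = p² + p(1 − p)(1 + c).

```python
# Toy gene-drive model: track drive-allele frequency across generations.
# Heterozygotes pass on the drive allele with probability (1 + c) / 2
# rather than 1/2. Parameters are illustrative, not field measurements.
def drive_frequencies(p0: float, c: float, generations: int) -> list:
    freqs = [p0]
    for _ in range(generations):
        p = freqs[-1]
        # Drive homozygotes always transmit it; heterozygotes transmit
        # it with probability (1 + c) / 2 after conversion.
        freqs.append(p * p + 2 * p * (1 - p) * (1 + c) / 2)
    return freqs

# Starting from 1% of alleles, a 90%-efficient drive takes over in
# about a dozen generations; without drive (c = 0) nothing changes.
drive = drive_frequencies(0.01, 0.9, 12)
mendel = drive_frequencies(0.01, 0.0, 12)
```

A dozen generations is fast for mosquitoes but very slow for a long-lived pest, which is exactly the "pest's time, not yours" problem above.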
Therapeutic antibodies
What it is: Concentrated stocks of antibodies similar to the ones produced in your own body, specific to a given pathogen.
What it works against: Most pathogens, some toxins, cancers.
How it works: Antibodies are proteins produced by B-cells as part of the adaptive immune system. Part of the protein attaches to a specific molecule that identifies a virus, bacterium, toxin, etc. The rest of the molecule acts as a ‘tag’ – showing other cells in the adaptive immune system that the tagged thing needs to be dealt with (lysed, phagocytosed, disposed of, etc.)
Biosecurity applications: Antibodies can be found and used therapeutically against a huge variety of things. The response is effectively the same as your body’s, reacting as though you’d been vaccinated against the toxin in question, but it can be successfully administered after exposure.
Current constraints: Currently, while therapeutic antibodies are used in a few cases like snake venom and tumors, they’re extremely expensive. Snake antivenom is taken from the blood serum of immunized animals like horses, while more finicky monoclonal therapeutics are grown in tissue culture. Raising entire animals for small amounts of serum is pricey, as are the nutrients used for tissue culture.
One possible answer is engineering bacteria or yeast to produce antibodies. These could grow antibodies faster, cheaper, and more reliably than cell culture. This is under investigation – E. coli doesn’t have the ability to glycosylate proteins correctly, but that can be added in with genetic engineering, and anyways, yeasts can already do that. The promise of cheap antibody therapy is very exciting, and more basic research in cell biology will get us there faster.
[This post has also been published on the Global Risk Research Network, a group blog for discussing risks to humanity. Take a look if you’d like more excellent articles on global catastrophic risk.]
Several times in evolutionary history, the arrival of an innovative new evolutionary strategy has led to a mass extinction followed by a restructuring of biota and new dominant life forms. This may pose an unlikely but possible global catastrophic risk in the future, in which spontaneous evolutionary strategies (like new biochemical pathways or feeding strategies) become wildly successful, and lead to extreme climate change and die-offs. This is also known as a ‘biotic replacement’ hypothesis of extinction events.
Biotic replacement in past extinctions
Is this still a possible risk?
Risk factors from climate change and synthetic biology
The shape of the risk
Identifying specific causes of mass extinction events may be difficult, especially since mass extinctions tend to be quickly followed by expansion of previously less successful species into new niches. A specific evolutionary advantage might be considered as the cause when either no other major physical disruptions (asteroids, volcanoes, etc.) were occurring, or when our record of such events doesn’t totally explain the extinctions.
1. Biotic replacement in past extinctions
There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the late Devonian extinction and the Permian-Triassic extinction. I outline these, as well as four other extinction events.
Cyanobacteria became the first microbes to produce oxygen (O2) as a waste product, and began forming colonies 200 million years before the extinction event. O2 was absorbed into dissolved iron or organic matter, and the die-off began when these naturally occurring oxygen sinks became saturated, and toxic oxygen began to fill the atmosphere.
The event was followed by die-offs, massive climate change, permanent alteration of the earth’s atmosphere, and eventually the rise of aerobic organisms.
The Ediacaran period was filled with a variety of large, autotrophic, sessile organisms of somewhat unknown heritage, known today mostly by fossil evidence. Recent evidence suggests that one explanation for their disappearance is the evolution of animals, able to move quickly and re-shape ecosystems. This resulted in the extinction of Ediacaran biota, and was followed by the Cambrian explosion in which animal life spread and diversified rapidly.
Both modern plant seeds and the modern plant vascular system developed in this period (the late Devonian). Land plants grew significantly as a result, now able to more efficiently transport water and nutrients higher – with maximum heights changing from 30 cm to 30 m. Two things would have happened as a result:
The increase in soil content produced more weathering in rocks, which released ionic nutrients into rivers. The nutrient levels would have increased plant growth and then death in oceans, resulting in mass anoxia.
Less atmospheric carbon dioxide would have cooled the planet.
In the Permian-Triassic extinction, 96% of marine species and 70% of land vertebrate species went extinct; 57% of families and 83% of genera became extinct.
One hypothesis explaining the Permian-Triassic extinction posits that an anaerobic methanogenic archaeon, Methanosarcina, developed a new metabolic pathway allowing it to metabolize acetate into methane, leading to exponential reproduction and the consumption of vast amounts of oceanic carbon. Volcanic activity around the same time would have released large amounts of nickel, a crucial but rare cofactor needed for Methanosarcina’s enzymatic pathway.
The evolution of human intelligence and human civilization has led to mass climate alteration by humans. Later adaptations in human society (i.e. agriculture, use of fossil fuels) could also be considered here, but in terms of this hypothesis, the evolution of human intelligence and civilization is the driving evolutionary innovation.
Minor extinction events
Any single species that goes extinct due to a new disease can be said to have become extinct due to another organism’s innovative adaptation. These are less well described as “biotic replacement”, because the new pathogen won’t be able to replace its extinct hosts, but the extinction was still caused by an evolutionary event. A new disease may also attack the sole or primary food source of an organism, leading to its extinction indirectly.
2. Is this still a possible risk?
It seems unlikely that all possible disruptive evolutionary strategies have already happened. Disruptive new strategies are rare – while billions of new mutations arise every day, any new gene must meet stringent criteria in order to spread: it must actually be expressed, be passed on to progeny, immediately convey a strong fitness benefit to its bearer, preserve any vital function of the old version of the gene, be supported by the organism’s other genes and environment, and its bearer must not be killed by random chance before having the chance to reproduce. For instance, an unusually efficient new metabolic pathway isn’t going to succeed if it’s in a non-reproducing cell, if its byproducts are toxic to the host organism, if its host can’t access the food required for the process, or if its host happens to be born during a drought and starves to death anyways.
Environmental conditions that make a pathway more or less likely to be ridiculously successful, meanwhile, are constantly changing. Given the rarity of ridiculously successful genes, it seems foolhardy to believe that evolution up until now has already picked all the low-hanging fruit.
How worried should we be? Probably not very. The major extinction events listed above seem to be spaced by 100-200 million years, suggesting a 1-in-100,000,000 chance of occurring in any given year. For comparison, NASA estimates that asteroids causing major extinction events strike the earth every 50-100 million years. These threats are possibly on the same order of magnitude.
(This number requires a few caveats: This is a high estimate, assuming that evolutionary advantages were a major factor in all cases. Also, an advantage that “starts” in one year may take millions of years to alter the biosphere or climate catastrophically. Once in 100 million years is also an average – there’s no reason to believe that disruptive evolutionary events, or asteroid strikes for that matter, occur on regular intervals.)
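To make the 1-in-100,000,000 figure concrete, here is the back-of-envelope arithmetic, treating these events as a Poisson process with the rough spacing claimed above (an assumption, given the caveats about irregular intervals):

```python
import math

# Back-of-envelope: if biotic-replacement extinctions are spaced
# ~100 million years apart, model them as a Poisson process and ask
# how likely one is to begin in the next century.
rate_per_year = 1 / 100_000_000
p_next_century = 1 - math.exp(-rate_per_year * 100)  # roughly one in a million
```

So even under the high-end assumption that evolutionary advantages drove all of these events, the chance of one starting in the next hundred years is on the order of one in a million.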
On a smaller scale, entire species are occasionally wiped out by a single disease. This is more likely to happen when species are already stressed or in decline. Data on how often this happens, or what fraction of extinctions are caused by a novel disease, is hard to find.
3. Risk factors from climate change and synthetic biology
Two risk factors are worth noting which may increase the odds of a biotic replacement event – climate change and synthetic biology.
Historically, a catastrophic evolutionary innovation seems to follow other massive climate disruption, as in the Permian-Triassic hypothesis, where the Methanosarcina innovation followed massive volcanic eruptions. A change in conditions may select for innovative new strategies that quickly take over and produce much more disruption than the instigating geological event.
While the specific nature of the next disruptive evolutionary innovation may be nigh-impossible to predict, this suggests that we should give more credence to environment alteration as a threat – via climate change, volcanic eruptions, or asteroids – as changing environments will select for disruptive new alleles (or resurface preserved strategies.) This means that a minor catastrophic event could snowball into a globally catastrophic or existential threat.
The other emerging source of alleles as-of-yet unseen in the environment comes from synthetic biology, as scientists are increasingly capable of combining genes from distinct organisms and designing new molecular pathways. While genes crossing between wildly different organisms is not unheard of in nature, the increased rate at which this is being done in the laboratory, and the fact that an intentional hand is selecting for viability and novelty (rather than natural selection and random chance), both imply some cause for alarm.
A synthetic organism designed for a specific purpose may disperse from its intended environment and spread widely. This is probably especially a risk for organisms using completely synthetic and novel pathways unlikely to have evolved in nature, rather than previously evolved genes – otherwise, the naturally occurring genes would probably have already seized the low-hanging evolutionary fruit and expanded into the available niches.
4. The shape of the risk
How does this risk compare to other existential risks? It is not especially likely to occur, as described in Part 2. The precise shape or cause of the risk is harder to determine than, say, an asteroid strike. Also, as opposed to asteroid strikes or nuclear wars, which have immediate catastrophic effects, evolutionary innovations involve significant time delays.
Historically, two time delays appear to be relevant:
Time for the evolution to become widespread
Presumably, this is quicker in organisms that disperse and reproduce more quickly. E.g., this could be fairly quick for an oceanic bacterium with a short generation time, but slow, as in the 180,000 years between the first appearance of modern humans and their eventual spread to the Americas.
Time between the organism’s dispersal and the induction of a catastrophe
E.g., during the Great Oxygenation Event, it took 200 million years from the evolution of cyanobacteria to when the available oxygen sinks filled up and the crisis occurred. (At least some of this time included the period required for cyanobacteria to diversify and become commonplace.)
During the Azolla event, Azolla ferns accumulated for 800,000 years, causing steady climate change. The modern threat from anthropogenic global warming is much steeper than that.
What are the actual threats to life?
Runaway climate change
The Great Oxygenation Event and the Permian-Triassic extinction hypothesis involve the dispersal of a microbe that induces rapid, extreme climate change.
Other events such as volcanoes erupting may change the environment such that a new strategy becomes especially successful, as in the Permian-Triassic extinction event.
Faster, stronger, cleverer predation
The Ediacaran extinction event and the Holocene extinction event involved the dispersal of an unprecedentedly capable predator – animals and humans, respectively.
This seems unlikely to be a current risk. The risk from runaway artificial intelligence somewhat resembles this concern.
Death from disease
Any event in which a novel disease causes a species to go extinct has a direct impact. Additionally, a disease might cause one or more major food sources to go extinct (for humans or animals.)
Globalization and global trade have increased the risk of a novel disease spreading worldwide. This also mirrors current concerns over engineered bioweapons.
5. What next?
Disruptive evolutionary innovation is problematic in that there don’t appear to be clear ways of preventing it – evolution has been indiscriminately optimizing away for billions of years, and we don’t appear to be especially able to stop it. Building civilization-sustaining infrastructure that is more robust to a variety of climate change scenarios may increase our odds of surviving such a catastrophe. Additionally, any such disruptive event is likely to happen over a long period of time, meaning that we could likely mitigate or prepare for the worst effects. However, evolutionary innovation hasn’t been explored or studied as an existential risk, and more research is needed to clarify the magnitude of the threat, or which – if any – interventions are possible or reasonable to study now.
Questions for further study:
How common are extinction events due to disruptive evolutionary innovation?
What factors make these evolution events more likely?
How often do species go extinct due to single disease outbreaks?
Can small-scale models help us improve our understanding of the likelihood of global warming inducing “runaway” scenarios involving microbial evolution?
What man-made environmental changes could potentially lead to runaway microbial evolution?
First of all: It’s usually pronounced “pree-on.” If you say “pry-on”, people will probably still know what you mean.
This is an exploratory post on what prions are, and how they work, and a lot of other things I found interesting about them.
Primer on protein folding
Proteins are strings of amino acids produced from blueprints in DNA. Proteins run your cells, catalyze reactions, and do just about every important thing in the body.
A protein’s function is determined partly by its amino acid composition, but mostly by its shape. A protein’s shape determines what other kinds of molecules it can interact with, how it’ll interact with them, and everything it can do. One of the main reasons amino acid composition is important is that it determines how proteins can fold.
One string of amino acids can be folded into different shapes, which will have different properties. (The particular shape of a specific string of amino acids is called an isoform.)
While strings of amino acids will fold themselves into some kind of shape as they’re being made, they may also be folded later – into different or more complex shapes – elsewhere in the cell.
One of the things that can refold proteins is other proteins.
A prion is a protein that folds other, similar proteins into copies of itself. These new copies are very stable and difficult to unfold.
These copies can then go on and fold more proteins into more copies.
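The “copies make more copies” dynamic is simple autocatalysis, and a toy simulation shows its signature shape: exponential growth while normal protein is plentiful, then saturation once the pool is converted. (The rates and pool sizes here are made up for illustration, not measured prion kinetics.)

```python
# Toy autocatalytic model of prion conversion: each step, misfolded
# protein (p) converts normal protein (n) at a rate proportional to
# the number of p-n encounters. All parameters are illustrative.
def prion_curve(p0: float, n0: float, k: float, steps: int) -> list:
    p, n = p0, n0
    history = [p]
    for _ in range(steps):
        converted = min(k * p * n, n)  # can't convert more than exists
        p += converted
        n -= converted
        history.append(p)
    return history

# One misfolded molecule dropped into a pool of a million normal ones:
# the misfolded count roughly doubles each step, then plateaus.
curve = prion_curve(1.0, 1e6, 1e-6, 40)
```

This long silent doubling phase is one intuition for why prion diseases can have incubation periods of years or decades before symptoms appear.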
Some prion diseases
Prion diseases in animals appear to be mostly neurological. All known mammal prions are isoforms of a single nerve protein, PrP. They can either emerge on their own when the protein misfolds in the brain, or spread as infectious agents.
Creutzfeldt-Jakob Disease affects one in one million people. (It’s also the most common modern prion disease. Prion diseases are very rare.) It comes in a variety of forms, but all have similar symptoms: depression, fatigue, dementia, hallucinations, loss of coordination, and other neurological symptoms, generally resulting in death a few months after symptoms start.
84-90% of cases are sporadic, meaning that the protein misfolds on its own. This mostly occurs in people older than 60.
10-15% of cases are familial, where a family carries a gene that makes PrP likely to misfold.
<1% of cases are iatrogenic, meaning they occur as a result of medical treatment. If medical care fucks up really badly, people might receive organ transplants from donors with CJD, or injections of growth hormone extracted from the pituitary glands of dead people, or even just surgery with tools that were once used on CJD patients, and they catch it.
(The surgical tools one is really scary. Normal autoclaves – which operate well above the threshold needed to inactivate bacteria and viruses – kill some but not all prions. And while it takes a large dose of ingested prions before you’re likely to get sick, it takes 100,000 times less when exposure is brain-to-brain. Cleaning with “benzene, alcohol and formaldehyde” still doesn’t kill prions. The World Health Organization issued prion-specific instrument cleaning procedures in 1999, towards the end of Britain’s brush with bovine spongiform encephalopathy, which include bleach or sodium hydroxide and longer autoclaving. I don’t know if these are still used outside of known epidemics.)
Mad cow disease, or bovine spongiform encephalopathy (BSE), is also a prion disease. It was transmitted between cows when they were fed feed that contained meat and bone meal, including brain matter from cows with the disease. The incubation period is between 5 and 40 years. The source molecule is essentially a cow-originated Creutzfeldt-Jakob prion, and when the prion replicates in humans, it’s probably the cause of variant Creutzfeldt-Jakob disease.
Between 1900 and 1960, the Fore people of New Guinea had an epidemic of an unknown neurodegenerative disease – mostly among women – that caused shaking, difficulty walking, loss of muscle coordination, outbursts of laughter and depression, neurological degeneration, and eventually death.
The Fore tribe practiced funerary cannibalism, and women both prepared and ate the dead, including the brains, and fed them to children and the elderly. This transmitted kuru, a prion disease with an incubation period of years. The last known sufferer of kuru died in 2005.
(The source of kuru was probably a single person with CJD. There are other tribes that practiced funerary cannibalism– I wonder if any of them also had prion epidemics from eating the brains of people who spontaneously developed CJD.)
Fatal familial insomnia is a genetic prion disease. Unlike CJD or BSE, fatal familial insomnia prions target the thalamus. If your family has it, and you inherit it, you live until middle age – onset is typically around 50 – then lose the ability to sleep, hallucinate, and die within months. There is no cure. There are more painful and equally fatal diseases, but this must be one of the scariest.
Ungulates really get the short end of the prion stick. Chronic wasting disease affects elk and deer and can run rampant in herds. Scrapie affects sheep and goats, and makes them scrape their fleece off and then die.
Prions differ from their pathogenic, self-replicating brethren – the viruses, the bacteria, the parasites – in one major way: They don’t have DNA or RNA. They don’t even have a central means of storing information.
But studies show that prions can evolve. They can’t change their amino acid composition, since they’re not involved in producing it, but they can change how their progeny fold.
This doesn’t seem surprising. The criteria for something to undergo Darwinian evolution don’t necessarily require DNA – just a self-replicator that has some level of random variation, and passes that variation down to its replicas.
Most brain prions don’t transmit, though, so it seems safe to say that the evolutionary lineages of most prions are very short – shorter than the lifespan of the host. Very contagious prions, like scrapie, have presumably jumped from host to host many times and have longer lineages.
Structure of death
All known mammal prions are isoforms of a single protein, PrP, and exist in the brain. Why?
Brain proteins are more likely to misfold than other proteins
Why? Brain proteins turn over more slowly than other proteins, and the brain is really, really central to the body’s function.
PrP is especially liable to turn into a self-replicator if misfolded.
Predictions: Other amyloid-based brain diseases are also PrP isoforms. Prions have a similar shape that makes replication happen. Maybe PrP itself self-replicates in the body under some circumstances.
The brain clears misfolded proteins less well than other body parts.
Predictions: Other waste product buildup happens in the brain. The rest of the body has some way of combating amyloids or prions.
We know of very few prions in animals (one non-mammal, the ostrich, may have them). Except in fungi. Fungi have tons of prions. Fungal prions don’t come from the same gene either – if you click through to that last link, you’ll see that the misfolds came from a variety of initial proteins that don’t appear to be related at all. Presumably, they have widely different structures.
So why are these the two prion hotbeds? Here’s what I suspect.
We know that both fungal and mammalian prions have related structures – they’re amyloids, aggregating proteins with a distinctive architecture called a cross-β-sheet. (Amyloids in general are implicated in some other diseases, and are sometimes produced intentionally as well. Spider silk has amyloids.) Beta sheets are long, sticky stretches of amino acids that stack against each other, forming large, water-insoluble clumps that are difficult for the body to clear.
To take an ad hoc survey that could loosely be called a literature review, let’s look at the Wikipedia page for amyloid-based diseases. Of those listed, four involve deposits in the brain, and four form deposits in the kidneys (runners-up include ones that deposit in a variety of organs, and ones that deposit in the eyes).
Why the kidney? Given its role as the body’s filter, it makes sense: if a protein floats in the blood, it’ll end up in the kidney, and if multiple sticky proteins circulate, they’ll end up congregating there. Wikipedia points out that people on long-term dialysis are also more likely to develop amyloidosis.
Why the brain?
The blood-brain barrier limits the reach of the immune system into the brain, where it could potentially deal with amyloids that it recognizes as foreign material. Sequestered beyond the reach of the immune system, the brain and nervous system clear loose gunk and proteins (including amyloids) via the glymphatic system, through channels formed by brain cells called astrocytes. (The glymphatic system appears to do much of its work while you’re asleep.)
[Caution: Speculation.] I suspect that this system has a lower flow-through rate than the circulatory or lymphatic systems, which are responsible for the same task on the other side of the blood-brain barrier. Fungi, including yeast, don’t seem to have robust waste-clearing systems. This might be the connection that explains how prions build up in each.
What about other multicellular organisms without circulatory systems – do prions exist for bacteria, plants, or larger fungi? I don’t think we know. I’m guessing that they exist in other animals or organisms, but since prions are made up of the same compounds as the rest of the body, it’s very difficult to find or test for one if you’re not sure what you’re looking for. [/speculation]
Some notes on infectivity
Scrapie is transmitted between sheep by cuts and ingestion, and chronic wasting disease is often transmitted by ingestion, as when a sick deer dies on ground that grows grass, which is eaten by new herbivores. They can also be aerosolized (yikes).
CJD and kuru are still infectious, but less so – you have to ingest brain matter to get them.
Meanwhile, Alzheimer’s disease might be slightly infectious – if you take brain extracts from people who died of Alzheimer’s and inject them into monkeys’ brains, the monkeys develop spongy brain tissue that suggests the prions are replicating. This technically suggests that the Alzheimer’s amyloids are infectious, even if that would never happen in nature.
What makes scrapie so much more transmissible than CJD, and CJD so much more transmissible than Alzheimer’s? I’m not sure. The shape of the prion might be relevant. Scrapie is just another misfolded form of PrP, so I’m not sure why no human prion has ever had the same effect (except that, since scrapie is a better replicator, it would only need to have happened once in sheep).
It might also be behavioral – sheep appear to shed scrapie in feces, and ungulates have more indirect contact with their own feces than other animals do (deer poop on grass, deer eat the grass, repeat).
Even though they’re just different configurations of proteins that are already in your body, the immune system can distinguish prions from normal proteins. For a while we thought this was a problem because most immune cells can’t cross the blood-brain barrier, but it turns out some can.
Finally, for most diseases, if we eliminated all of the extant disease-causing particles, the disease would go extinct – the same way that if we kill off species X and don’t store its DNA, species X goes extinct forever and never comes back. Creutzfeldt-Jakob disease is an interesting case of an infectious self-replicator where that isn’t true. Even if all CJD prions were instantly destroyed, it would re-emerge naturally in the genetic or spontaneous cases, where the brain itself misfolds proteins, and could spread iatrogenically or through ingestion.
What’s the most common animal on earth? I tried to answer this question by doing some reading. Why should we care?
Most people don’t have a good sense of the scope and scale of biodiversity and common species on the planet. Whatever you think are the most common inhabitants of earth, you’re probably wrong.
When scientists think of “successful” organisms, they tend to think of ones with great diversity: beetles, for instance, or in terms of environments, rainforests. Looking at sheer numbers of individual species is another way of doing this.
“Okay,” you say, “Why animals, and not plants or bacteria? Those are way more common.” I study bacteriophage. I know. Two reasons: First, animals have brains – don’t you want to know who’s doing the majority of the world’s thinking? Second, it’s harder to find data on non-animals, but stay tuned.
Similarly, if you’re concerned about wild animal suffering, this may give you a sense of where best to focus your concern.
Mammals don’t come anywhere near the top, but sure, they’re furry and warm and cute and also you’re one, so let’s begin here. Humans aren’t actually a bad call as far as larger organisms go – there are 7.5 billion (7,500,000,000) of us crawling around the planet, handily beating out other close competitors.
Rule 1: If you want to make an organism numerous, association with humans is a good start.
Large wild mammals are not especially common; domesticated ones do better. Cows (1.4 billion) have the largest non-human large mammal population, and sheep, pigs, and goats (~1 billion each) beat out all other competitors. Chickens, at something like 20 billion birds, blow all of them away. The curious will be interested to know that there are 50% more cats globally than dogs (600,000,000 vs 400,000,000).
So chickens are looking good so far. What about mice or rats? They’re tiny, reproduce voraciously, and also follow humans. Unfortunately, I couldn’t find good estimates on global mouse populations. Maybe there’s ten mice per human? Maybe there’s 75 billion mice. Sure. Fortunately, it doesn’t matter. Remember the grand rule of biomes:
Rule 2: Whatever’s happening in the ocean is much bigger and much wackier than anything on land.
You’ve probably never heard of the bristlemouth, genus Cyclothone, a three-inch-long deep-ocean fish with a big mouth and weird teeth. As it happens, most of the planet’s surface is deep ocean. Unspecified “ichthyologists” found by the New York Times speculate that the population is in the hundreds of trillions (> 200,000,000,000,000).
Their sheer numbers have only recently come to light – they’re found many meters deep in the water column and don’t surface at night, and the extent of their dominion has only been discovered via trawling with fine nets and the dawn of deep-sea exploration. If these “ichthyologists” can be believed, the bristlemouth is probably the most common vertebrate on earth.
Maybe you’re confused as to how there could be so many bristlemouths, since they’re relatively large compared to, say, insects. I’m not actually convinced that the trillions number is correct, but nonetheless, consider: The oceans cover about 70% of the planet’s surface, and while land animals are more or less limited to a flat surface, ocean animals can “stack” in three dimensions.
Finally, a fun fact: If a bristlemouth brain weighs as much as a goldfish brain, then:
7,500,000,000 human brains × 1,350 grams/human brain ≈ 10,000,000,000 kg
200,000,000,000,000 bristlemouth brains × 0.097 grams/bristlemouth brain ≈ 19,400,000,000 kg
Mass of human brains ≈ mass of bristlemouth brains
Draw your own conclusions.
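For the curious, the arithmetic above takes a couple of lines of Python to reproduce (the goldfish brain as a stand-in for the bristlemouth brain is, of course, an assumption):

```python
# Comparing total human brain mass to total bristlemouth brain mass.
humans = 7.5e9                 # global human population
human_brain_g = 1350.0         # grams per human brain
bristlemouths = 2e14           # speculative bristlemouth population
bristlemouth_brain_g = 0.097   # assumption: goldfish brain mass as stand-in

human_kg = humans * human_brain_g / 1000
bristlemouth_kg = bristlemouths * bristlemouth_brain_g / 1000

print(f"human brains:        {human_kg:.2e} kg")         # ~1.0e10 kg
print(f"bristlemouth brains: {bristlemouth_kg:.2e} kg")  # ~1.9e10 kg
```

Same order of magnitude, within a factor of two.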
Rule 3: Ant biologists need to get it together.
All the world’s ants are popularly said to weigh the same amount as all the world’s human beings. It takes 16 million ants to outweigh a human, and since your garden-variety ant colony has about 4,000 ants, that would be 4,000 ant colonies per person.
This sounds ridiculous, and a University of Sussex professor suggests that it is – that ants may have outweighed humans earlier in our existence, but we’ve spread too far too quickly for them to catch up. This article posits 100,000,000,000,000 (1×10^14) ants.
What’s going on here? To our instinctive brains, both of those guesses occupy a similar conceptual space as “really large numbers”, but they’re not close. They’re ten orders of magnitude apart. One of these numbers is ten billion times larger than the other. There’s one quantity of ants, or there’s ten billion times that number of ants. What?!
I have no idea. Worse yet, they’re both from the same source. The BBC can’t be a reliable news source if they don’t have a standard journalistic value for “total number of ants” that’s rough to within oh, say, five orders of magnitude.
Fortunately, we can perform a sanity check. The earth has 1.5×10^14 square meters of dry land.
1×10^24 global ants / 1.5×10^14 square meters ≈ 7,000,000,000 ants per square meter
Given that we’re not swimming in ants at every single moment, we can knock off a few zeroes and come down to 1×10^19 (10,000,000,000,000,000,000, or ten billion billion ants, at about 70,000 ants per square meter, which seems more reasonable).
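The sanity check is easy to rerun for any claimed total – divide by the land area used above and see what density falls out:

```python
# Implied ant density for each claimed global total, over Earth's dry land.
land_m2 = 1.5e14  # square meters of dry land, as above

for total_ants in (1e24, 1e14):
    density = total_ants / land_m2
    print(f"{total_ants:.0e} ants -> {density:,.1f} ants per square meter")
```

1×10^24 implies roughly seven billion ants under every square meter of land; 1×10^14 implies less than one.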
Even if the most common ant species is just 1% of all ants, where ants rank depends drastically on which value is right. Bristlemouths might outnumber them, or they might not. Dear ant researchers: work on this, but at the least, stop telling people there are 1×10^24 ants. That’s too many ants.
(While researching this, I also learned about the long and short scales – everyone uses the same “million”, but my “trillion” may not be the same as your “trillion”. While normally I try to avoid being prescriptivist about language, this is a terrible use of words and everybody should either use lots of zeroes or scientific notation from here on out. Ugh. Anyways.)
The Antarctic krill is the foundation of the Antarctic ecosystem. It feeds whales, seals, squids, fish, and everything else. 500 million tons of it exist, and Wikipedia claims it’s probably the most abundant species on the planet. Using Wikipedia’s mass value of up to 2 grams (say, 1.5 grams on average), that’s 3×10^14 (300,000,000,000,000) krill.
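The biomass-to-headcount conversion, in code (the 1.5-gram average is my guess within Wikipedia’s stated range):

```python
# Converting Antarctic krill biomass into an individual count.
biomass_tonnes = 5e8     # ~500 million metric tons of krill
grams_per_tonne = 1e6
mean_krill_g = 1.5       # assumed average mass per krill

krill = biomass_tonnes * grams_per_tonne / mean_krill_g
print(f"{krill:.1e} krill")  # ~3.3e14
```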
Rule 4: Maybe we just don’t know what’s going on.
Let’s talk about uncertainty. There are a couple other candidates. They may easily hold the title, but I don’t know because nobody has done the research. There are certainly plausible reasons to suspect any of them of holding the title, and we can use Fermi calculations for the sake of a guess, but I don’t expect these to be very accurate.
Most of the guesses above did come with specific numbers, but those aren’t necessarily completely trustworthy. Articles about ants, Antarctic krill, nematodes, and copepods have all variously claimed that their subject is the most common animal. It seems like this could happen because of the availability bias – if you’re a krill biologist, and someone asks you what the most common animal is, and you know that there are a whole lot of krill, you’re probably going to say krill.
Narrowing down a common species is also more difficult – I can attest (from work with tiny snails) that doing field identification via microscope is the worst. So presumably, most studies don’t do it, and focus on the broader picture.
Alternatively, invertebrate researchers have field-wide conspiracies in order to get more grant money. Invertebrate researchers are welcome to deny this in the comments.
Copepods
Tiny free-swimming ocean crustaceans, at the root of many food chains.
Some scientists say they form the largest animal biomass on earth.
Copepods almost certainly contribute far more to the secondary productivity of the world’s oceans, and to the global ocean carbon sink than krill, and perhaps more than all other groups of organisms together. – Wikipedia
Frustratingly, as with the nematodes, nobody seems to know what the most common copepod is.
My probable candidate:
A small cosmopolitan mid-ocean-level copepod.
Copepod expert Geoff Boxshall on Plankton Safari estimates 1.3×10^21 (1,300,000,000,000,000,000,000) copepods. If the most common species represents 1% of all copepods, that’s 1.3×10^19 of a common copepod species out there.
But I think we can do better. One study found an average of 20 zooplankton per cubic meter in the Atlantic ocean, with occasional high spikes and huge seasonal variation. If we assume that such a number is constant over all the oceans and throughout the euphotic zone (the top layer of the ocean that receives sunlight and supports photosynthesis), that adds up to at least 5.78×10^17 plankton. Since we know copepods are quite common, let’s say that 50% of the zooplankton are copepods, and that the most common species represents 1% of all copepods. That’s:
5.78×10^17 zooplankton worldwide × (50% copepods) × (1% in the most common species) = 2.89×10^15 of the most common copepod.
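Spelled out as code, with my assumed ocean area and euphotic-zone depth doing the heavy lifting:

```python
# Scaling the 20-per-cubic-meter zooplankton density over the world's
# sunlit ocean. The area and depth here are assumptions, not measured values.
ocean_area_m2 = 3.61e14   # assumed global ocean surface area
euphotic_depth_m = 80     # assumed average depth of the sunlit layer
zooplankton_per_m3 = 20   # density from the study cited above

zooplankton = ocean_area_m2 * euphotic_depth_m * zooplankton_per_m3
common_copepod = zooplankton * 0.50 * 0.01  # 50% copepods, 1% top species

print(f"{zooplankton:.2e} zooplankton")      # ~5.8e17
print(f"{common_copepod:.2e} top copepods")  # ~2.9e15
```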
Nematodes
They are ubiquitous in freshwater, marine, and terrestrial environments, where they often outnumber other animals in both individual and species counts, and are found in locations as diverse as mountains, deserts and oceanic trenches. – Wikipedia
Everyone (read: all scientists who have expressed an opinion on the matter) seems to think that nematodes are incredibly numerous. That said, Nematoda is a very broad umbrella – saying nematodes are numerous is sort of like saying there are a lot of chordates (the phylum that contains all vertebrates, plus a handful of squishy sea creatures). Bristlemouths, meanwhile, are narrowed down to a single genus of only a dozen species.
My guesses for a candidate Most Common Nematode are:
A small, free-living, deep ocean floor or mid-ocean-level species
A small parasitic nematode that inhabits cattle or bristlemouth guts.
(Why these two? My educated guess is that smaller animals tend to be more common, and that the smallest species are routinely parasites. Other small species tend to be among the more numerous free-living organisms – think mice and Pelagibacter ubique.)
My extrapolations (more details on those numbers) from a 2006 study of benthic microfauna – very small animals living on the ocean floor at various depths – suggest that there are maybe 9.03×10^19 such critters in Earth’s oceans. These include nematodes, benthic copepods, and other species. As with the copepods, let’s guess that half of these are nematodes, and that 0.1% of nematodes are in the most prolific species.
9.03×10^19 microfauna on the ocean floor × (50% nematodes) × (0.1% in the most common species) = 4.52×10^16 of a common nematode species.
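The benthic version of the estimate, with both fractions flagged as the guesses they are:

```python
# From total benthic microfauna down to one candidate nematode species.
benthic_microfauna = 9.03e19  # extrapolated from the 2006 benthic study
nematode_frac = 0.5           # guess: half of benthic microfauna are nematodes
top_species_frac = 0.001      # guess: 0.1% belong to the biggest species

common_nematode = benthic_microfauna * nematode_frac * top_species_frac
print(f"{common_nematode:.2e}")  # ~4.5e16
```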
This aligns well with another, rougher back of the envelope calculation from a different source:
Roughly 2,000 nematodes/square meter × (3.6×10^14 square meters of ocean floor) × (1% of nematodes in the most common species) ≈ 7.2×10^15 (7,200,000,000,000,000) of a common nematode species.