Category Archives: existential risk

The Germy Paradox – Filters: Hard and soft skills

This is the fourth post in a sequence of blog posts on state biological weapons programs. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: The shadow of nuclear weapons
3.3 Filters: A taboo
4.0 Conclusion: Open questions and the future of state BW   

Welcome to the second half of our series. (This is post 3.1.) I’ve established that despite extensive historical weapons programs, biological weapons haven’t really been used in a major way since WWII. We don’t ever seem to have been a “close call” away from biological warfare. Why not?

I don’t have a complete answer, but I have some pieces of one. The first piece, and a very good answer on its own, is that BW are not as cheap and deadly as commonly thought: substantial resources and expertise are needed to successfully create biological weapons. This argument is well made by Sonia Ben Ouagrham-Gormley in her book Barriers to Bioweapons, which attributes failed weapons programs to a combination of scarce hard technical skills and weak soft skills, like poor management. I’ve written a summary of the book on this blog before.

That post will act as Part 1 of this section.

References

Ben Ouagrham-Gormley, Sonia. Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development. Cornell University Press, 2014.

The Germy Paradox – The empty sky: How close did we get to BW usage?

This is the third post in a sequence of blog posts on state biological weapons programs. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: The shadow of nuclear weapons
3.3 Filters: A taboo
4.0 Conclusion: Open questions and the future of state BW   

I’ve heard a lot about “nuclear close calls.” Stanislav Petrov was, at one point, one human and one uncomfortable decision away from initiating an all-out nuclear exchange between the US and the USSR. Then that happened several dozen more times. As described in Part 1, there were quite a few large state biological weapons programs after WWII. Was a similar situation unfolding behind the scenes there, as with the nuclear near-misses? Were any biological apocalypses-that-weren’t quietly hidden in the pages of history? What were the actual usage plans for biological weapons, and how close did any of them come to deployment?

First, note that there are a few examples of state use of BW in modernity – the most significant being the Japanese BW program’s use of lethal weapons against Chinese civilians in the 1940s. More recently, the Rhodesian BW program used weapons against its own citizens on a small scale in the 1970s, and South African program operatives attempted to contaminate a refugee camp’s water supply with cholera in 1989 (but failed). There are also a few ambiguous incidents, and the possibility of unnoticed usage. These incidents and possibilities are concerning, but rare and small in scale.

The claim I assert is that since the 1940s, there have been no instances of the sort of large-scale usage of BW in international warfare that the imagination conjures upon hearing the phrase “biological weapons.” My research suggests, more tentatively, that not only were there no such uses, but there were no close calls – that even in major state programs that created and stockpiled weapons in quantity, these weapons were virtually never incorporated into war plans, prepared for usage, or, especially, used. The strategies they were intended for were generally vague and retaliatory, contingent on offensive uses of biological or nuclear weapons by others. Examining this has another bonus: understanding which biological weapons were ever made in mass quantity, or were ever seen to have particular strategic value.

Ways of classifying BW

Note two failure modes that are common in existing literature that tries to extrapolate from past bioweapons plans: first, an overly narrow reference class limited to weapons that were actually used in combat, of which there are very few; second, an overly broad class composed of every pathogen or plan that programs ever experimented with. The latter includes many plans that were impractical or undesirable and eventually abandoned. No literature thus far has focused on the middle ground between these extremes, or analyzed the strategic goals of such usages in depth.

Addressing BW under a framework of planned usage is rare. Categorizations of weapon usage are also uncommon. Most commonly found is the sorting of BW strategies by theater of war – strategic, tactical, and operational uses (Koblentz 2009, Leitenberg et al 2012). This categorization system is broadly applicable and not agent-based, and thus may apply to future weapons with novel agents as well as to past programs. On the other hand, three abstract categories are not enough to be predictive, and many usages could plausibly be described as affecting multiple theaters. Any new categorization system should factor in historical strategies and motivations as a reference class for understanding possible future uses.

No concrete intentions

To illustrate why considering usage plans is valuable, note that many sophisticated BW programs did not in fact make war plans that incorporated their weapons. In many cases, this is because early conceptual research was time-consuming and unsuccessful, and the results were considered inferior to conventional or nuclear weapons. The UK, South African, Canadian, and French programs never resulted in mass production of a finalized weapon, much less strategies for such a weapon’s use (Balmer 1997, Wheelis et al 2006). The South African and UK programs both explored unusual intended uses – assassination and fertility control (Purkitt and Burgess 2002), and area denial (Wheelis et al 2006), respectively – but no weapons were created that would fulfill them. Even the USSR program, which weaponized more agents than any other BW program in history (Leitenberg et al 2012), failed to make concrete war plans involving BW (Wheelis et al 2006). The broader intents of these programs are still interesting, but clearly the relative importance of various programs changes when considering concrete war plans.

“No first use” of BW

Many state biological weapons programs post-WWII, and their strategies for weapons usage, were heavily intertwined with nuclear policy. From 1946 on, the UK BW program was guided by a principle of “no first use” that echoed that of the country’s nuclear program – biological weapons would not be used unless the UK was first attacked with biological or nuclear weapons (Balmer 1997). This was also the original stated purpose of the US program, although its “no first use” doctrine was officially rescinded in 1956 (Wheelis et al 2006). Information uncovered by the Iraq Survey Group investigation in 2004 revealed Saddam Hussein’s intent in 1991 to use BW to retaliate against “unconventional” weapon attacks by the US (“Iraq Survey Group” 2004). The USSR was likely also interested in using its weapons after a nuclear exchange with the US (Leitenberg et al 2012). In this way, BW joined nuclear weapons as a facet of mutually-assured-destruction dynamics.

BW have long been considered “the poor man’s nuclear weapon.” Numerous studies and documents have suggested that biological weapons are cheaper per death caused than nuclear weapons (Koblentz 2004), are relatively simple to construct, and could be developed by less-wealthy states to give them the MAD protection or bargaining capability of a full-fledged nuclear power. This idea has been criticized in several ways – for instance, by Gregory Koblentz, who points out that biological attacks are not reliable in the same way that nuclear attacks are and thus do not fulfill the major goals of deterrence (Koblentz 2009), and by Sonia Ben Ouagrham-Gormley, who observes that biological weapons are in fact significantly more difficult to produce than commonly believed (Ben Ouagrham-Gormley 2014).

Nonetheless, the perception that biological weapons are an alternative to nuclear weapons is widespread, including among powerful figures like Hillary Clinton (Reuters 2011) and Bill Gates (Farmer 2017). In light of this, it is interesting to observe that all three programs for which “no first use” was a primary strategic goal – the UK, US, and Iraq programs – belonged to states that were also nuclear powers or pursuing nuclear weapons. The UK and US BW programs were indeed later abandoned in favor of developing and relying on nuclear weapons programs for defense and deterrence (Balmer 1997).

Non-lethal weapons

The popular image of BW as lethal antipersonnel weapons does not describe the whole of BW development, but it certainly describes large swathes of it. The USSR, US, and Iraq programs all mass-produced lethal weapons – for example, anthrax-filled munitions. Of these, it seems that the Iraq program was the only one that actually deployed such weapons. Saddam Hussein was particularly interested in anthrax, not just for its virulence, but for its ability to remain active in soil and potentially cause death over many years.

Despite the prominence of lethal weapons, both in actuality and in the public eye, a great deal of BW effort has been directed at non-lethal weapons. These may be incapacitating antipersonnel weapons, or weapons with non-human targets such as plants or animals, meant to attack the enemy’s agricultural system. The trade-offs between lethal and non-lethal biological weapons have not been well explored in the literature. It has been suggested that caring for incapacitated soldiers, particularly those who are sick over a long period of time, can be more burdensome on an enemy than simply burying and replacing killed soldiers (Pappas et al 2006). While strongly dependent on the pathogen and expected death rates, this is a practical reason these weapons may be preferred. Inflicting nonlethal or economic damage is also seen as more humane than lethal damage, and perhaps more politically acceptable. Political acceptability was, for instance, part of the internal US motivation for using anti-plant chemical weapons rather than human-targeted weapons in the Vietnam War (Pearson 2006). However, humanitarian or political motivations for developing biological weapons have not been well documented in research.

Some of the first modern BW production was of weapons aimed at cattle: during WWII, the US and UK collaborated on producing anthrax-infected cattle cakes, meant to be scattered over German fields to damage agriculture. Later, the US program explored agricultural weapons until 1958 (Hay 1999), and two of the twelve weapons finalized by the US program were agricultural – rice blast and wheat stem rust (Wheelis et al 2006). The USSR did not stockpile anti-agriculture weapons, but a variety of them were developed by a sub-program of the BW apparatus code-named Ecology (Koblentz 2009), and production was designed to scale quickly if needed (Alibek 1999). These were intended not for conditions of total war with the US, but for smaller wars with smaller nations (Alibek 1999).

Both the US and the USSR also invested in incapacitating weapons for use against humans. The USSR weaponized the rarely-lethal Brucella sp. and Burkholderia mallei, as well as Venezuelan equine encephalitis virus (VEEV), which causes flu-like symptoms. The US invested especially heavily here: four of the ten standardized antipersonnel weapons produced by the US BW program were not intended to be lethal – Brucella suis, Coxiella burnetii, and VEEV, as well as staphylococcal enterotoxin B (Wheelis 2006). In what may have been the closest call to major BW use since WWII, airplanes were loaded with VEEV-containing weapons during the Cuban Missile Crisis (Guillemin 2004). This was unauthorized, but nonetheless seems to have come very close to real battlefield usage.

Here’s a summary of my findings by program:

| Country | Program active period | Usage | Strategic goal for weapons | Degree of progress in creation |
| --- | --- | --- | --- | --- |
| USSR | 1925–1990 | Never used | Somewhat unclear; as second-strike capability (to ensure complete destruction); as a first-strike weapon against small countries | Multiple biological weapons created and stockpiled |
| UK | 1942–1956 | Never used | As second-strike capability | No weapons ever developed to completion |
| *USA* | 1942–1970 | Never used; nonlethal weapon loaded onto plane in one instance | Unclear; complement to conventional weapons | Multiple biological weapons created and stockpiled |
| Canada | 1940s | Never used | Unclear | No weapons ever developed to completion |
| France | 1940s | Never used | Unclear | No weapons ever developed to completion |
| **Japan** | 1940s | Used weapons in international warfare | Death, area denial, regular weapon of war | Multiple biological weapons created, stockpiled, and used |
| Israel | 1948–??? | Probably never used | Unclear; perhaps as deterrence | Unclear |
| **Rhodesia** | 1975–1979 | Used weapons against dissidents inside the country | As internal tool | Crude weapons created and used (e.g. placing cholera in food and water supplies) |
| Iraq | 1980s–1996 | Never used | As deterrence; second-strike | Multiple biological weapons created and stockpiled |
| *South Africa* | 1983–1988 | One instance of attempted use (failed) | Assassination, fertility control, possibly as regular weapon of war | No weapons ever developed to completion |

[Bold country name = BW used. Italics: BW use attempted or “almost used.”]


References

Alibek, Kenneth. “The Soviet Union’s Anti-Agricultural Biological Weapons.” Annals of the New York Academy of Sciences 894, no. 1 (December 1, 1999): 18–19. https://doi.org/10.1111/j.1749-6632.1999.tb08038.x.

Balmer, Brian. “The drift of biological weapons policy in the UK 1945–1965.” The Journal of Strategic Studies 20, no. 4 (1997): 115-145.

Farmer, Ben. “Bioterrorism Could Kill More People than Nuclear War, Bill Gates to Warn World Leaders.” The Telegraph, February 17, 2017. https://www.telegraph.co.uk/news/2017/02/17/biological-terrorism-could-kill-people-nuclear-attacks-bill/.

Guillemin, Jeanne. Biological weapons: From the invention of state-sponsored programs to contemporary bioterrorism. Columbia University Press, 2004.

Hay, Alastair. “Simulants, Stimulants and Diseases: The Evolution of the United States Biological Warfare Programme, 1945–60.” Medicine, Conflict and Survival 15, no. 3 (July 1, 1999): 198–214. https://doi.org/10.1080/13623699908409459.

“Iraq Survey Group Final Report.” Iraq Survey Group, September 30, 2004. https://www.globalsecurity.org/wmd/library/report/2004/isg-final-report/.

Koblentz, Gregory. Living Weapons. Ithaca: Cornell University Press, 2009.

Koblentz, Gregory. “Pathogens as weapons: The international security implications of biological warfare.” International security 28, no. 3 (2004): 84-122.

Leitenberg, Milton, Raymond Zilinskas, and Jens Kuhn. The Soviet Biological Weapons Program: A History. Cambridge: Harvard University Press, 2012.

Ben Ouagrham-Gormley, Sonia. Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development. Cornell University Press, 2014.

Pappas, G., P. Panagopoulou, L. Christou, and N. Akritidis. “Brucella as a Biological Weapon.” Cellular and Molecular Life Sciences 63 (2006): 2229–36.

Pearson, Alan. “Incapacitating Biochemical Weapons.” The Nonproliferation Review 13, no. 2 (July 1, 2006): 151–88. https://doi.org/10.1080/10736700601012029.

Purkitt, Helen E., and Stephen Burgess. “South Africa’s Chemical and Biological Warfare Programme: A Historical and International Perspective.” Journal of Southern African Studies 28, no. 2 (2002): 229–53.

Reuters. “Hillary Clinton Warns Terror Threat from Biological Weapons Is Growing.” The National Post, December 7, 2011. https://nationalpost.com/news/biological-weapons-threat-growing-clinton-says.

Wheelis, Mark, Lajos Rozsa, and Malcolm Dando. Deadly Cultures: Biological Weapons since 1945. Cambridge, MA: Harvard University Press, 2006.

The Germy Paradox – The empty sky: A history of state biological weapons programs

This is the second post in a sequence of blog posts on state biological weapons programs. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: The shadow of nuclear weapons
3.3 Filters: A taboo
4.0 Conclusion: Open questions and the future of state BW             

A lot of ink has been spilled describing the history of modern BW programs. Martin et al’s “Chapter 1: History of Biological Weapons: From Poisoned Darts to Intentional Epidemics,” in Medical Aspects of Biological Warfare (2018) [PDF], is a superb short summary with a focus on the agents and weapons explored by these programs. It’s far better than I could write, and relevant to the rest of this sequence. I recommend reading it.

I present a condensed timeline of state BW programs within the last century here as well:

| Country | Program active period | Notes |
| --- | --- | --- |
| USSR | 1925–1990 | Major expansion and resurgence around 1970. By far the largest BW program to have ever existed. A great deal of our information on it is gleaned from Ken Alibek’s memoir Biohazard. |
| Japan | 1940s | Weapons were extensively tested and used against the Chinese population in Manchuria. |
| USA | 1942–1970 | A fairly large and well-run program, comparable in technical competence to the USSR program despite having a tenth of the funding. Ended with the US initiating the Biological Weapons Convention, which came into effect in 1975. |
| UK | 1942–1956 | Worked closely with the US BW program. Later helped introduce the Biological Weapons Convention. |
| Canada | 1940s | Partially sponsored/encouraged by the US. |
| France | 1940s | Details are scarce. |
| Israel | 1948–??? | Details are scarce. |
| Rhodesia | 1975–1979 | Crude weapons created and used (e.g. placing cholera in food and water supplies). |
| Iraq | 1980s–1996 | Created in response to the external threat represented by Israel. The Iraq Survey Group Final Report is a comprehensive summary. |
| South Africa | 1983–1988 | Mostly controlled by one person. Focus was on weapons against internal threats and assassination, not large-scale offensive weapons against other nations. |

References

The reference mentioned in the first paragraph is: Martin, James W, George W Christopher, and Edward M Eitzen. “Chapter 1: History of Biological Weapons: From Poisoned Darts to Intentional Epidemics.” In Medical Aspects of Biological Warfare, 20. Textbooks of Military Medicine. Fort Sam Houston: Office of the Surgeon General, Borden Institute, 2018.

Iraq: See the 2004 US Iraq Survey Group Final Report. (Available here.)

Israel: See the Nuclear Threat Initiative’s summary (available here.)

Other countries’ details are either in the Martin et al piece linked above, or cross-pollinated from other sources.

The Germy Paradox: An Introduction

I’m writing a sequence of blog posts on state biological weapons programs. This is the first post. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: The shadow of nuclear weapons
3.3 Filters: A taboo
4.0 Conclusion: Open questions and the future of state BW   

This is a bit of a review of my first year of graduate school. Unintentionally, many of my projects that year revolved around a question I’ve been mulling over for a long time. I’m calling it by a terrible nickname, “the germy paradox,” as a counterpart to the question posed by Enrico Fermi on the apparent nonexistence of aliens in a vast and star-filled night sky: “If biological weapons are as cheap and deadly as everyone seems to fear, then where are they?”

There are few known state bioweapon (BW) programs, and very few since the 1970s. This is despite growing interest and advances in the life sciences, as well as a great deal of active concern over biodefense. Why don’t we see many state BW programs? Is this for concrete and technological reasons, for strategic reasons, or for reasons of social unacceptability? If some of these reasons go away, for instance as biotechnologies become cheaper and easier to weaponize (as many, e.g. Mukunda et al 2009, Schmidt 2008, believe they will), how safe will we be then?

The history of the biological weapons taboo is multifaceted and will be addressed in greater detail later in this series. But some basic background will help kick off the series and explain the question. Biological and chemical weapons are often considered in the same breath (Smith 2014), and in early history both were referred to as “poison” weapons (Cole 1998). Explicit prohibitions on the use of poison weapons in warfare began in the 1400s (Price 1995). Use of biological (or, at the time, “bacteriological”) weapons in combat was formally forbidden by the 1925 Geneva Protocol, but several state BW programs went on to proliferate during WWII and the Cold War (Zanders 2003). These programs often stated that their weapons would be used only in retaliation against a nuclear or biological attack, in a version of the popular nuclear “no first use” policy, and they rarely developed clear strategies for BW wartime usage (see Wheelis et al 2006).

States today are constrained by the Biological Weapons Convention, an international protocol banning not only usage but development of BW. It was spearheaded by the US in 1972, alongside the dismantlement of the US’s own program, and has been signed by all but a handful of countries (UN Office).

Today, no nation claims to have a BW program, nor to have used BW for large-scale attacks since WWII.


I use the language of the Fermi Paradox to structure this series. 

The first part of this series, “The Empty Sky,” describes the modern history of biological weapons. It then makes the case that we have never seen large-scale usage or near-usage of these weapons since WWII. 

The second part, “Filters”, borrows from Robin Hanson’s model of “great filters” that stand in the way of planets creating space-faring civilizations, thus creating the Fermi Paradox (Hanson, 1998). This section explores reasons we don’t see biological weapons as much as we might expect.


New posts in this sequence will come out twice a week, on Mondays and Thursdays, until it’s done. (This is to give readers time to keep up.)


References

  1. Mukunda, Gautam, Kenneth A. Oye, and Scott C. Mohr. “What rough beast? Synthetic biology, uncertainty, and the future of biosecurity.” Politics and the Life Sciences 28, no. 2 (2009): 2-26.
  2. Schmidt, Markus. “Diffusion of synthetic biology: a challenge to biosafety.” Systems and Synthetic Biology 2, no. 1-2 (2008): 1-6.
  3. Smith, Frank. American biodefense: How dangerous ideas about biological weapons shape national security. Cornell University Press, 2014.
  4. Cole, Leonard A. “The Poison Weapons Taboo: Biology, Culture, and Policy.” Politics and the Life Sciences 17 (September 1, 1998): 119–32. https://doi.org/10.1017/S0730938400012119.
  5. Price, Richard. “A Genealogy of the Chemical Weapons Taboo.” International Organization 49, no. 1 (1995): 73–103.
  6. Zanders, Jean Pascal. “International Norms Against Chemical and Biological Warfare: An Ambiguous Legacy.” Journal of Conflict and Security Law 8, no. 2 (October 1, 2003): 391–410. https://doi.org/10.1093/jcsl/8.2.391.
  7. Wheelis, Mark, Lajos Rozsa, and Malcolm Dando. Deadly Cultures: Biological Weapons since 1945. Cambridge, MA: Harvard University Press, 2006.
  8. “The Biological Weapons Convention.” The United Nations Office at Geneva. Accessed December 11, 2018. https://www.unog.ch/80256EE600585943/(httpPages)/04FBBDD6315AC720C1257180004B1B2F.
  9. Hanson, Robin. “The Great Filter – Are We Almost Past it?”. 1998. (Web Archive) https://web.archive.org/web/20100507074729/http://hanson.gmu.edu/greatfilter.html

Seattle-Oxford Petrov Day 2018

My friend Finan wrote up an account of the 2018 Seattle and Oxford Petrov Day Faux Nuclear Crisis.

Recognizing this, a couple of community members in Seattle whipped up a program to have mutually assured party destruction for Petrov Day. In the game you are told if the other party has launched and given time to retaliate if you so choose. Both parties successfully made it through several false alarms without nuking each other. It could be a testament to our general feelings of well being towards each other, and to lack of real incentive to nuke the other party–aside from protecting your own.

[…]

One of the partygoers in Seattle pressed the launch button at -1 seconds. 1 second past the end time, they were assuming the game was over and enjoying the humor of an actually quite low stakes game. The server and the Seattle computer were slightly out of sync, so although the game appeared over from the Seattle end, it was not according to the server.

“If our extinction proceeds slowly enough to allow a moment of horrified realization, the doers of the deed will likely be quite taken aback on realizing that they have actually destroyed the world. Therefore I suggest that if the Earth is destroyed, it will probably be by mistake.” – Eliezer Yudkowsky


The “Big Red Button” approach to the day has only been around for the last 3 years, but it’s seen some adoption since. This is notably the first time that the button has been pressed at a party.
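The failure described above is a classic clock-skew bug: the party rendered its countdown from a local clock while the server kept its own, slightly different, authoritative end time. Here is a minimal sketch of the dynamic – all names and numbers are hypothetical, not the actual game’s code:

```python
# Hypothetical illustration of the clock-skew bug described above.
# All names and numbers are invented; this is not the actual game's code.

GAME_END = 100.0  # nominal end time of the game, in seconds
SKEW = 2.0        # the client's clock runs 2 seconds ahead of the server's

def server_accepts(server_now: float) -> bool:
    # The server is authoritative: a press counts until ITS clock passes the end.
    return server_now <= GAME_END

def press_at_client_time(client_now: float) -> bool:
    # A press made at `client_now` arrives at a server whose clock reads earlier.
    server_now = client_now - SKEW
    return server_accepts(server_now)

# The player presses "1 second after the end" by their local display...
print(press_at_client_time(GAME_END + 1.0))  # True: the server still counts it
```

The usual fix is to trust a single clock: have clients periodically fetch the remaining time from the server and render the countdown from that, rather than from their own system time.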

Further reading:

More on Petrov Day

There Is A Button

Book review: The Doomsday Machine

This book and the movie Dr. Strangelove are my two recommendations for learning about why you should still be concerned about nuclear war. The Dr. Strangelove post is coming soon. For now, The Doomsday Machine by Daniel Ellsberg:

…is a book about designing the end of the world as we know it, chronologically through Daniel Ellsberg’s career as a nuclear war planner. It’s well written, and Ellsberg makes a compelling hero.

He’s most famous for leaking the Pentagon Papers, government documents on the Vietnam War that contributed to Nixon’s resignation. This book came out of a second set of documents, which he photocopied and intended to release after his trial for the Pentagon Papers, but which were lost in an act of nature. Early on, he describes this second planned leak as the one he fully expected to put him in jail for the rest of his life, and how he felt the loss of those documents as both a tragedy for the nation and a blessing that allowed him to spend the following decades beside his wife. It’s the kind of thing that makes you glad you’re driving alone when your audiobook is making you tear up in the desert along the Washington-Idaho border.

But all this just helps – the real meat of the book is in the systems he describes.


Let’s talk about nuclear winter real quick. (My favorite line on dates.) Ellsberg puts this at the end, which makes sense chronologically, but it’d be burying the lede for an x-risk focused blog, so let’s get it out there now:

All of our plans for the Cold War were decided before anyone knew about nuclear winter. I feel like I should capitalize that – Nuclear Winter. It’s the hypothesized event where nuclear explosions cause fires in cities that loft so much ash into the stratosphere that it blots out the sun for months and makes it impossible for plants to grow, killing most human and large animal life. There’s uncertainty around the specifics, but its existence is generally agreed upon in the scientific community.

All US strategy during the early Cold War hinged on this idea of “general war”, an all-out nuclear exchange with Russia and China. General war plans included dropping enormous nuclear weapons on literally every single city in both Russia and China. Obviously, this is atrocious enough – this level of calamity was expected to kill something like 20% of the world’s population at the time, mostly from fallout.

But every time general war was mentioned, a little voice in my head yelled “nuclear winter!” – because the death toll would actually be >90% of humanity, Americans, Russians, Chinese, and everyone else alike, unbeknownst to everybody at the time. My loose impression is that there’s no substantial reason to believe that nuclear war planning policy ever shifted to account for this fact.


Another quick takeaway: the US planned on making the first nuclear strike on Russia and China throughout the Cold War. Today we have a perception that the US only plans for a second strike, but almost the entirety of the planning material was based on the supposition of the US using nuclear weapons first. Again, there’s little reason to suspect this has changed.


Through this book, I was repeatedly reminded of the Litany of Jai: Almost nobody is evil, almost everything is broken. The problems described in the book aren’t the result of insanity or complete carelessness, but instead a horrifying spider web of incentives, laid unwittingly by people with limited goals and limited knowledge. It’s a sinister net of multipolar traps. If you follow this web down, as Ellsberg does, you find yourself looking into the yawning chasm of a nuclear apocalypse – not built on purpose, but built nonetheless.

Let’s look at how some of these tangled incentives lead us there.


  • Branches of the military want high budgets.
  • Budget decisions are made based on intelligence from those branches.
  • Branches compete with each other for funding from Congress and other officials.

  • Various branches hugely overestimate enemy capacities.
  • E.g. the army reports extremely high Soviet ground force numbers.
  • The Air Force reports extremely high Soviet ICBM capacity.
  • Inter-branch coordination gets trampled.
  • There is no incentive for estimates or behavior that aligns with strategy or reality.

  • All military branches want to get in all-out war if/when it happens.

  • The Pacific Navy basically insists on attacking China alongside Russia in all cases, because they want to be involved and don’t want to be stuck attacking minor Siberian targets “on the sidelines” of The Big War.
  • Nuclear plans have Moscow area getting blanketed with hundreds of nuclear bombs from all sides. “Hundreds of nuclear bombs” is a phrase that here and elsewhere means “calamitous overkill”.

  • Military branches don’t want to listen to civilian politicians.
  • Civilian politicians are powerful decision-makers.

  • Information is concealed, including from the president (for instance, the JSCP, which is the detailed plan for all-out war).
  • Military leaders just don’t listen to civilians who outrank them (e.g. in moving ships with nuclear warheads illegally stationed in Japanese ports).
  • Civilian President Kennedy is politically obliged not to override poor decisions made by President Eisenhower, the famous military general.

  • Nuclear bomber pilots need to receive an authorized signal to enact plans for bombing Russian and Asian targets.
  • Air force planners want as little delay as possible in executing war plans once they get the order.
  • Air Force planners want to save time and effort.

  • Authorization codes are stored in plaintext in envelopes in each plane, are the same between every plane, and are rarely changed.
  • Any pilot who realized this could easily lead their base in a nuclear strike, and almost certainly trigger all-out nuclear war.
  • There’s no way in the target database to easily distinguish Russian and Chinese targets, so everyone at Air Force bases assumes that if they get the war order, they’ll just drop nuclear weapons on everyone. All Chinese cities were going to be destroyed under every nuclear attack plan, throughout the entire early Cold War.

  • Communications systems with Washington DC might be destroyed if Russia attacks the US with nuclear weapons first.
  • Communication systems between bases might be destroyed during a Russian attack.
  • Communications in general are pretty unreliable.
  • Everyone in the military chain of command, including the President, wants the US to be able to respond as quickly as possible to a Russian first strike.

  • Ability to initiate a nuclear war is secretly delegated down the chain of command in cases where bases are not in touch with Washington DC.
  • Contact with Washington DC is often unreliable – for hours every day on some bases in the Pacific.
  • Basically anyone in the chain of command is not just capable of, but entirely authorized to, declare total nuclear war most of the time.

These are not even all of the examples. A story retold in many different forms throughout the book goes like this:

  1. Daniel Ellsberg learns about one of these outcomes.
  2. Ellsberg talks to some relevant officials and outlines a possible catastrophe.
  3. The officials go still, think about it, and say with concern, “That seems entirely possible.”
  4. Nothing changes, ever.

A possible solution for most of these spiraling incentives is a countervailing force – an actor, an incentive, something – balancing the dynamic back away from “total catastrophe”. Often, no such force exists: in the veil of secrecy surrounding nuclear war, any party with an incentive to care about the implied risk isn’t aware of the entire situation, and can’t unilaterally fix it anyway. Ellsberg tries to repair these flawed systems when he notices them, but has little power to do so.

He suspects that some leaders, including President Kennedy, never really intended to use nuclear weapons – but even if that’s true, the scenarios above suggest that presidential intent may have had little to do with the outcome.

Ellsberg’s knowledge of the situation drops off around the 1970s, when he moved on to other work. Are all of these nuclear command and control systems still like this?? Maybe??!! Certainly nobody was rushing to reform them throughout his long tenure with the government.

I don’t know what to do about any of this. This book illuminates the number of needles we somehow threaded in avoiding catastrophe since the start of the Cold War. Here’s where you can get it.

Why was smallpox so deadly in the Americas?

In Eurasia, smallpox was undoubtedly a killer. It came and went in waves for ages, changing the course of empires and countries. 30% of those infected with the disease died from it. This is astonishingly high mortality for a disease – worse than botulism, Lassa fever, tularemia, the Spanish flu, Legionnaires’ disease, and SARS.

In the Americas, smallpox was a rampaging monster.

When it first appeared in Hispaniola in 1518, it spread 150 miles in four months and killed 30-50% of people – not just of those infected, but of the entire population [1]. It’s said to have infected a quarter of the population of the Aztec Empire within two weeks, killing half of those infected [2], and setting the stage for another disease to kill many more [3].
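To see just how extreme these numbers are, here's a rough back-of-the-envelope sketch (my own arithmetic, not from the sources cited): the share of a whole population killed is roughly the attack rate (the fraction infected) times the case fatality rate (the fraction of the infected who die).

```python
def population_mortality(attack_rate: float, case_fatality_rate: float) -> float:
    """Fraction of the total population killed: (fraction infected) x (fraction of infected who die)."""
    return attack_rate * case_fatality_rate

# Eurasian smallpox killed ~30% of those infected. Even if literally everyone
# were infected, that caps population-wide mortality at 30%.
eurasian_cap = population_mortality(1.0, 0.30)
print(eurasian_cap)  # 0.3

# Hispaniola in 1518: 30-50% of the *entire population* died. Even assuming
# universal infection, the 50% figure implies a case fatality rate well above
# the Eurasian 30% – the disease was both infecting nearly everyone and
# killing a larger share of those it infected.
implied_minimum_cfr = 0.50 / 1.0  # with a 100% attack rate, CFR must still be >= 50%
print(implied_minimum_cfr)  # 0.5
```

This is only an illustration of why the American figures can't be explained by the Eurasian case fatality rate alone; the real epidemiology (repeated waves, deaths from dehydration and starvation among the untended sick) is messier than a single product of two rates.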

Then, alongside other diseases and warfare, it contributed to the death of 84% of the Incan Empire [4].

Among the people who sometimes traded at the Hudson Bay Company’s Cumberland House on the Saskatchewan River in 1781 and 1782, 95% seem to have died. Of them, the U’Basquiau (also called, I believe, the Basquia Cree people) were killed entirely [5].

Over time, smallpox killed 90% of the Mandan tribe, along with 80% of people in the Columbia River region, 67% of the Omahas, and half of the Piegan tribe and of the Huron and Iroquois Confederations [6].

Here are some estimates of death rates between ~1605 and 1650 in various Northeastern American groups, during a time of severe smallpox epidemics. Particularly astonishing figures are highlighted (highlighting mine).

[Table of estimated death rates by group, with the most astonishing figures highlighted]

Figure adapted from European contact and Indian depopulation in the Northeast: The timing of the first epidemics [9]

Most of our truly deadly diseases don’t move quickly or aren’t contagious. Rabies, prion diseases, and primary amoebic meningoencephalitis have more or less 100% fatality rates. So do trypanosomiasis (African sleeping sickness) and HIV, when untreated.

When we look at the impact of smallpox in the Americas, we see extremely fast death rates that are worse than the worst forms of Ebola.

What happened?

In short, probably a total lack of previous exposure to smallpox and the other pathogenic European diseases, combined with cultural responses that helped the pathogen spread. The fact that smallpox was intentionally spread by Europeans in some cases probably contributed, but I’m not sure how much.

Virgin soil

Smallpox and its relatives in the orthopox family – monkeypox, cowpox, horsepox, and alastrim (smallpox’s milder variant) – had been established in Eurasia and Africa for centuries. Exposure to one would give some immune protection to the others. Variolation, a cruder version of vaccination, was also sometimes practiced.

Between these and the frequent waves of outbreaks, a European adult would typically have survived some kind of direct exposure to smallpox-like antigens in the past, and would carry protective antibodies against future sickness. As children, they would also have had the indirect protection of maternal antibodies [1].

In the Americas, everyone was exposed to the most virulent form of the disease with no defenses. This is called a “virgin soil epidemic”.

In this case, epidemics stampeded through ferociously but infrequently – rarely enough for any given tribe that protective antibodies didn’t form, and maternal protection never developed. Many groups were devastated repeatedly over decades by smallpox outbreaks, as well as by other European diseases: the Cocoliztli epidemics [3], measles, influenza, typhoid fever, and others [7].

In virgin soil epidemics, including these, disease strikes all ages: children and babies, the elderly, and strong young adults [6]. This sort of indiscriminate attack on all age groups is a known sign in animal populations that a disease is extremely lethal [8]. In humans, it also grinds the gears of society to a halt.

When so much of a village’s population was too sick to move, not only was there nobody to tend crops or hunt – setting the stage for scarcity and starvation – but there was nobody to fetch water. Dehydration is suspected as a major cause of death, especially in children [1, 6]. Very sick mothers would also be unable to nurse infants [6].

Other factors that probably contributed:

Cultural factors

Native Americans had some concept of disease transmission – some people would run away when smallpox arrived in their village, possibly carrying and spreading the germ [7]. They also steered clear of other tribes that had it. That said, many people lived in communal or large family dwellings, and didn’t quarantine the sick in private areas; they continued to sleep alongside and spend time with contagious people [6].

In addition, pre-colonization Native American measures against disease were probably somewhat effective against pre-colonization diseases, but tended to be ineffective or harmful against European ones. Sweat baths, for instance, could have spread the disease and wouldn’t have helped [9]. Transmission could also have occurred during funerals [10].

Looking at combinations of the above factors, death rates of 70% and up are not entirely surprising.

Use as a bioweapon

Colonizers repeatedly used smallpox as an early form of biowarfare against Native Americans, knowing that they were more susceptible. This included, at times, intentionally withholding vaccines. Smallpox also spreads rapidly on its own, so I’m not sure how much intentional spread contributed to the overall extreme death toll, although it certainly resulted in tremendous loss of life.

Probably not responsible:

Genetics. A lack of immunological diversity, or some other genetic susceptibility, has been cited as a possible reason for the extreme mortality rate. This might be particularly expected in South America because of the serial founder effect – a small number of people move away from their home community and start their own, repeated over and over, all the way across Beringia, down North America, and into South America [9].

That said, this theory is considered unlikely today [1]. For one, the immune systems of native peoples of the Americas react to vaccines much as European immune systems do [10]. For another, groups in the Americas also had unusually high mortality from other European diseases (influenza, measles, etc.), but this mortality decreased relatively quickly after first exposure – too quickly for genetic change to explain the response [10].

Some have also proposed general malnutrition, which would weaken the immune system and make it harder to fight off smallpox. This doesn’t seem to have been a factor [1]. Scarce food was a fact of life in many Native American groups – but the same was true for European peasants, who still didn’t suffer as much from smallpox.

Africa

Smallpox has a long history in parts of Africa – the earliest known evidence of smallpox infection comes from Egyptian mummies [2], and frequent European contact over the centuries spread the disease to the regions Europeans interacted with. Various groups in North, East, and West Africa developed their own variolation techniques [11].

However, when the disease was introduced to areas where it hadn’t existed before, we see death rates as astounding as those in the Americas: one source describes mortality rates of 80% among the Griqua people of South Africa. Less quantitatively, it describes how several Hottentot tribes were “wiped out” by the disease, how some tribes in northern Kenya were “almost exterminated”, and how parts of the eastern Congo River basin became “completely depopulated” [2].

This suggests that smallpox acted similarly on unexposed people in Africa. It also lends another piece of evidence against the genetic-predisposition hypothesis: the disease acted the same on groups so geographically removed from each other.

Wikipedia also tells me that smallpox was comparably deadly when it was first introduced to various Australasian islands, but I haven’t looked into this further.

Extra

Required reading on humanism, smallpox, and smallpox eradication.


When smallpox arrived in India around 400 AD, it spurred the creation of Shitala, the Hindu goddess of (both causing and curing) smallpox. She is normally depicted on a donkey, carrying a broom for either spreading germs or sweeping out a house, and a bowl of either smallpox germs or of cool water.

The last set of images on this page also seems to be a depiction of the goddess, and captures something altogether different, something darker and more visceral.


Finally, this blog has a Patreon. If you like what you’ve read, consider giving it your support so I can make more of it.

References


  1. Riley, J. C. (2010). Smallpox and American Indians revisited. Journal of the History of Medicine and Allied Sciences, 65(4), 445-477. 
  2. Fenner, F., Henderson, D. A., Arita, I., Jezek, Z., Ladnyi, I. D., & World Health Organization. (1988). Smallpox and Its Eradication. 
  3. Acuna-Soto, R., Stahle, D. W., Cleaveland, M. K., & Therrell, M. D. (2002). Megadrought and megadeath in 16th century Mexico. Revista Biomédica, 13, 289-292. 
  4. Beer, M., & Eisenstat, R. A. (2000). The silent killers of strategy implementation and learning. Sloan Management Review, 41(4), 29. 
  5. Houston, C. S., & Houston, S. (2000). The first smallpox epidemic on the Canadian Plains: in the fur-traders’ words. Canadian Journal of Infectious Diseases and Medical Microbiology, 11(2), 112-115. 
  6. Crosby, A. W. (1976). Virgin soil epidemics as a factor in the aboriginal depopulation in America. The William and Mary Quarterly: A Magazine of Early American History, 289-299. 
  7. Sundstrom, L. (1997). Smallpox Used Them Up: References to Epidemic Disease in Northern Plains Winter Counts, 1714-1920. Ethnohistory, 305-343. 
  8. MacPhee, R. D., & Greenwood, A. D. (2013). Infectious disease, endangerment, and extinction. International Journal of Evolutionary Biology, 2013. 
  9. Snow, D. R., & Lanphear, K. M. (1988). European contact and Indian depopulation in the Northeast: the timing of the first epidemics. Ethnohistory, 15-33. 
  10. Walker, R. S., Sattenspiel, L., & Hill, K. R. (2015). Mortality from contact-related epidemics among indigenous populations in Greater Amazonia. Scientific Reports, 5, 14032. 
  11. Herbert, E. W. (1975). Smallpox inoculation in Africa. The Journal of African History, 16(4), 539-559. 

[OPEN QUESTION] Insect declines: Why aren’t we dead already?

One study on a German nature reserve found that insect biomass (i.e., kilograms of insects you’d catch in a net) has declined 75% over the last 27 years. Here’s a good summary that answered some questions I had about the study itself.

Another review study found that, globally, invertebrate (mostly insect) abundance has declined 35% over the last 40 years.

Insects are important, as I’ve been told repeatedly (and written about myself). So this news raises a very important and urgent question:

Why aren’t we all dead yet?

This is an honest question, and I want an answer. (Readers will know I take catastrophic possibilities very seriously.) Insects are among the most numerous animals on earth and central to our ecosystems, food chains, etcetera. 35%+ lower populations are the kind of thing where, if you’d asked me to guess the result in advance, I would have expected marked effects on ecosystems. At 75% declines – if the German study reflects the rest of the world to any degree – I would have predicted literal global catastrophe.

Yet these declines have apparently been going on consistently for decades, and the biosphere, while not exactly doing great, hasn’t literally exploded.

So what’s the deal? Any ideas?

Speculation/answers welcome in the comments. Try to convey how confident you are and what your sources are, if you refer to any.

(If your answer is “the biosphere has exploded already”, can you explain how, and why that hasn’t changed trends in things like global crop production or human population growth? I believe, and think most other readers will agree, that various parts of ecosystems worldwide are obviously being degraded, but not to the degree that I would expect by drastic global declines in insect numbers (especially compared to other well-understood factors like carbon dioxide emissions or deforestation.) If you have reason to think otherwise, let me know.)


Sidenote: I was going to append this with a similar question about the decline in ocean phytoplankton levels I’d heard about – the news that populations of phytoplankton, the little guys that feed the ocean food chain and make most of the oxygen on earth, have decreased 40% since 1950.

But a better dataset, collected over 80 years with consistent methods, suggests that phytoplankton have actually increased over time. There’s speculation that the appearance of decrease in the other study may have been because they switched measurement methods partway through. An apocalypse for another day! Or hopefully, no other day, ever.


Also, this blog has a Patreon. If you like my work, consider incentivizing me to make more!

An invincible winter

[Picture in public domain, taken by Jon Sullivan]

Early September brought Seattle what were to be some of the hottest days of the summer. For weeks, people had been turning on fans, escaping to cooler places to spend the day, buying out air conditioners, which most of the city didn’t own. I cowered in my room with an AC unit on loan from a friend lodged in the window, only going out walking when the sun had set.

That week, Eastern Washington was burning. It does that every summer. But this year, a lot of Eastern Washington was burning. Say it with me – 2017 was one of the worst fire years on record. That week, the ash from the fires drifted over Seattle. You smelled smoke everywhere in the city. The sky was gray. At sunrise and sunset, the sun was blood-red. One day, gray ash drifted down from the sky, the exact size of snowflakes. It dusted the cars and kept falling through the afternoon.

That day, people said the weather was downright apocalyptic. They weren’t entirely wrong.

Many people aren’t clear on what exactly a nuclear winter is. The mechanism is straightforward. When cities burn in the extreme heat of a nuclear blast – and we do mean cities, plural; most nuclear exchange scenarios involve multiple strikes for strategic reasons – they turn into soot, and the soot floats up. If enough soot from burned cities reaches the stratosphere, the upper layer of the atmosphere, it stays there for a long time. The soot clouds blot out the sun, cool the earth, and choke off the growth of crops. Within weeks, agriculture grinds to a halt.

There’s a lot of uncertainty around nuclear winter. But by one estimate, the detonation of less than 1% of the world’s nuclear arsenal – a fairly small war – could drop temperatures by five degrees Celsius, with the earth warming back up only slowly over twenty years. The ozone layer would thin. Less rain would fall. Two billion people would starve.

On Tuesday and Wednesday that week, the temperature was predicted to reach over 100 degrees. It didn’t. The particulates in the air blocked enough of the sun’s heat that it barely hit the 90s. Pedestrians didn’t quite breathe easier, but did sweat less. Our own tiny, toy-model taste of a nuclear apocalypse.


I’d been feeling strange for the last few weeks, unrelatedly, and sitting at my desk for hours, my mind did a lot of wandering. I hoped things would be looking up – I’d just gotten back from an exciting conference with good friends, and also from seeing the solar eclipse.

I’d made the pilgrimage with friends. We drove for hours, east across the mountains the week before they burned. We crossed the Columbia River into Oregon, and finally, drove up a winding dirt road to a wide clearing with a small pond. I studied for the GRE in the shadows of dry pines. We played tug-of-war with the crayfish and watched the mayflies dance above the pond. The morning of, the sun climbed in the sky, and I had never appreciated how invisible the new moon is, or how much light the sun puts out – even when it was half-gone, we still had to peer through black plastic glasses to confirm that something had changed. But soon, it became impossible not to notice.

I kept thinking about what state I would have been thrown into if I hadn’t known the mechanism of an eclipse – how deep the pits of my spiritual terror could go. Whether it would be limited by biology or belief. As it is, it was only sort of spiritually terrifying, in a good way. The part of my brain that knew what was happening had spread that knowledge to all the other parts as well, so I could run around in excitement and really appreciate the chill in the air, the eerie distortion of shadows into slivers, and finally, the moon sealing off the sun.

The solar corona.

The sunset-colored horizon all around the rim of the sky.

Stars at midday.

We left after the daylight returned, but while the moon was still easing away, eager to beat the crowds back to the city. I thought about the mayflies in the pond, and their brief lives – the only adults in hundreds of generations to see the sun, see the stars, and then see the sun again.

I thought something might shake loose in my brain. Things should have been looking up, but the adventures had scarcely touched the inertia. Oh, right, I had also been thinking a lot about the end of the world.

I wonder about the mental health of people who work in existential risk. I think it must vary. I know people who are terrified on a very emotional and immediate level, and I know people who clearly believe it’s bad but don’t get anxious over it, and aren’t inclined to start. I can’t blame them. I used to be more of the former, and now I’m not sure if it’s eased up or if I’m just not thinking about things that viscerally scare me anymore. I’m not sure the existential terror can tip you towards something you weren’t predisposed to. In my case, I don’t think the mental fog was from it. But the backdrop of existential horror certainly lent it an interesting flavor.


It’s late October now. I’ve pulled out the flannel and the space heater and the second blanket for the bed. When I went jogging, my hands got numb. I don’t mind – I like autumn, I like the descent into winter, heralded by rain and red leaves and darkness, and the trappings of horror and then of the harvest. Peaches in the summertime, apples in the fall. The seasons have a comforting rhyme to them.

That strange inertia hasn’t quite lifted, but I’m working on it. Meanwhile, the world continues to cant sideways. When we arranged the Stanislav Petrov day party in Seattle this year, to celebrate the day a single man decided not to start World War 3, I wondered if we should ease up on the “nuclear war is a terrifying prospect” theme we had laid on last year. I thought that had probably been on people’s minds already.

So geopolitical tensions are rising, and have been rising. The hemisphere gets colder. Not quite out of nostalgia, my mind keeps flickering back to last month, to not-quite-a-hundred-degrees Seattle, to the red sun.

There’s a beautiful quote from Albert Camus: “In the midst of winter, I found there was, within me, an invincible summer.” That Tuesday, like the momentary pass of the moon over the sun in mid-day, in the height of summer, I saw the shadow of a nuclear winter.



For a more detailed exploration of the mechanics of nuclear winter and why we need more research, look at this piece from Global Risk Research Network.

What do you do if a nuclear blast is going to go off near you? Read this piece. Maybe read it beforehand.

What do you do if you don’t want a nuclear blast to go off near you? The Ploughshares Fund is one of the canonical organizations funding work on reducing risks from nuclear weapons. You might also be interested in Physicians for Social Responsibility and the Nuclear Threat Initiative.

This blog has a Patreon. If you like what you’ve read, consider giving it a look.

Evolutionary Innovation as a Global Catastrophic Risk

(This is an extended version of the talk I gave at EA Global San Francisco 2017. Long-time readers will recognize it as an updated version of a post I wrote last year. It was wonderful meeting people there!)

graph.png

This is a graph of extinction events over the history of animal life.

There are five canonical major extinction events that have occurred since the evolution of multicellular life. Biotic replacement has been hypothesized as the major mechanism for two of them: the Late Devonian extinction and the Permian-Triassic extinction. There are three other major events worth mentioning – the Great Oxygenation Event, the End-Ediacaran extinction, and the Anthropocene/Quaternary extinction.

Let’s look at four of them. The first actually occurs right before this graph starts.

I decided not to discuss the Great Oxygenation Event in the talk itself, but it’s also an example – photosynthetic cyanobacteria evolved and started pumping oxygen into the atmosphere, which, after filling up oxygen sinks in rocks, flooded into the air and poisoned many of the anaerobes, leading to the “oxygen die-off” and the “rusting of the earth”. I excluded it because A) it wasn’t about multicellular life, which, let’s face it, is much more relevant and interesting, and B) I believe it happened over such a long span of time as to not be worth considering on the same scale as the others.

(I was going to jokingly call these “animal x-risks”, but figured that might confuse people about what the point of the talk was.)

The End-Ediacaran extinction

ediacaran

“Dickinsonia costata” by Verisimilus is licensed under CC BY-SA 3.0

We don’t know much about Precambrian life, but the Ediacaran seems to have been a peaceful time.

The Ediacaran sea floor was covered in a mat of algae and bacteria, and ‘critters’ – some were definitely animals, others we’re not sure about – ate or lived on the mats. There were tunneling worms, limpets, some polyps, and the sand-filled curiosities termed “vendozoans”, which may have been single enormous cells like today’s xenophyophores, with the sand giving them structural support. The fiercest animal is described as a “soft limpet” that eats microbes. These creatures don’t seem to have had predators, and the period is sometimes known as the “Garden of Ediacara”. (1)

Around 542 million years ago, something happens – the Cambrian explosion. In a geologically brief window of about 5 million years, a huge variety of animals evolves.

Molluscs, trilobites and other arthropods, a creative variety of worms eventually including the delightful Hallucigenia, and sponges exploded into the Cambrian. They’re faster and smarter than anything that’s ever existed. The peaceful Ediacaran critters are either outcompeted or gobbled up, and vanish from the fossil record. The first shelled animals indicate that predation had arrived, and that the gates of the Garden of Ediacara had closed forever.

The end-Devonian extinction

Jump forward to around 375 million years ago – 50% of genera go extinct. Marine species suffered the most in this event, probably due to anoxia.

There’s an unexpected possible culprit – around this time, plants made a few evolutionary leaps that began the first forests. Suddenly, a lot of trees drawing carbon dioxide out of the air led to global cooling, and large amounts of new soil led to nutrient-rich runoff, which caused the widespread marine anoxia that decimated the oceans.

devonian

Ginkgo trees, one of the oldest tree lineages alive. Image by Jean-Pol Grandmont, under a CC BY-SA 3.0 license.

We do know that there were a series of extinction events, so forests were probably only a partial cause. The longer climate trend around the extinction was global warming, so the yo-yoing temperature (from general warming and cooling from plants) likely contributed to extinction. (2) It’s strange to think that the land before 375 million years ago didn’t have much in the way of soil – major root structures contributed to rock wearing away. Plus, once you have some soil, and once the first trees die and contribute their nutrients, you get more soil and more plants – a positive feedback loop.

The specific trifecta of evolutions that let forests take over land: significant root structures, complex vascular systems, and seeds. Plants prior to this were small, lichen-like, and had to reproduce in water. (3)

The Permian-Triassic extinction

96% of marine species go extinct. Most of this happens in a 20,000 year window, which is nothing in geologic time. This is the largest and most sudden prehistoric extinction known.

The cause of this one was confusing for a long time. We know the earth got warmer, or maybe cooler, and that volcanoes were going off, but the timing didn’t quite match up.

Volcanoes were going off for much longer than the extinction, and it looks like die-offs were happening faster than we’d expect from increasing volcanism, or standard climate change cycles. (4) One theory points out that die-offs line up with exponential or super-exponential growth, as in, from a replicating microbe. Remember high school biology?

One theory implicates Methanosarcina, an archaeon that evolved a chemical process for turning organic carbon into methane at around the same time. Remember those volcanoes? They were spewing enormous amounts of nickel – an important cofactor for that process.

permiantriassic

Methanosarcina, image from Nature

(Methanosarcina appears to have gotten the gene from a cellulose-digesting bacterium – definitely a neat trick. (5) )

The theory goes that Methanosarcina picked up its new pathway, and flooded the atmosphere with methane, which raised the surface temperature of the oceans to 45 degrees Celsius and killed most life. (2)

This finding is fairly recent, and it’s certainly unusual, so I don’t want to claim that it’s definitely confirmed, or as solid as, say, the Chicxulub impact theory. That said, at the time of this writing, the cause of the Permian-Triassic extinction remains unclear, and the methanogen theory doesn’t seem to have been seriously criticized or debunked.

Quaternary and Anthropocene extinctions

Finally, I’m going to combine the Quaternary and Anthropocene events. They don’t show up on this chart because the data’s still coming in, but you know the story – maybe you’re an ice-age megafauna, or rainforest amphibian, and you are having a perfectly fine time, until these pretentious monkeys just walk out of the Rift Valley, and turn you into a steak or a corn farm.

anthropocene

Art by Heinrich Harder.

Because of humans, since 1900, extinctions have been happening at about a thousand times the background rate.

(Looking at the original chart, you might notice that the “background” number of extinctions appears to be declining over time – what’s with that? Probably nothing cosmic – more recent species are just more likely to survive to the present day.)

Impacts from evolutionary innovation

You can probably see a common thread by now. These extinctions were caused – at least in part – by natural selection stumbling upon an unusually successful strategy. Changing external conditions, like nickel from volcanoes or other climate change, might contribute by giving an edge to a new adaptation.

    1. In some cases, something evolved that directly outcompeted the others – biotic replacement.
    2. In others, something evolved that changed the atmosphere.
    3. I’m going to throw in one more – that any time a species goes extinct due to a new disease, that’s also an evolutionary innovation. Now, as far as we can tell, this is extremely rare in nature, but possible. (7)

Are humans at risk from this?

From natural risk? It seems unlikely. These events are rare and can take on the order of thousands of years or more to unfold, at which point we’d likely be able to do something about it.

That is, as far as we know – the fossil record is spotty. As far as I can tell, we were able to pin the worst of the Permian-Triassic extinction down to 20,000 years only because that’s the resolution of the fossil band formed at the time. It might actually have been quicker.

Even determining if an extinction has happened or not, or if the rock just happened to become less good at holding fossils, is a struggle. I liked this paper not really for the details of extinction events (I don’t think the “mass extinctions are periodic” idea is used these days), but for the nitty gritty details of how to pull detailed data out of rocks.

That said, for calibrating your understanding, it seems possible that extinctions from evolutionary innovation are more common than mass extinctions involving asteroids (only one mass extinction has been solidly attributed to an asteroid: the Chicxulub impact that ended the reign of dinosaurs.) That’s not to say large asteroid impacts (bolides) don’t cause smaller extinctions – but one source estimated the bolide:extinction ratio to be 175:1. (2)

Plus, having a brain matters, and I think I can say it’s really unlikely that a better predator (or a new kind of plant) is going to evolve without us noticing. There are some parallels here with, say, artificial intelligence risk, but I think the connection is tenuous enough that it might not be useful.

If we learn that such an event is happening, it’s not clear what we’d do – it depends on specifics.

Synthetic biology

But consider synthetic biology – the thing where we design new organisms and see what happens. As capabilities expand, should we worry about lab escapes on an existential scale? I mean, it has happened in nature.

Evolution has spent billions of years trying to design better and better replicators. And yet, evolutionary innovation catastrophes are still pretty rare.

That said, people have a couple of advantages:

        1. We can do things on purpose. (I mean, a human working on this might not be trying to make a catastrophic geoweapon – but they might still be trying to make a really good replicator.)
        2. We can come up with entirely new things. When natural selection innovates, every incremental step on the way to the final result has to be an improvement on what came before. It’s like trying to build a footbridge where, at every single step of construction, it has to support more weight than before. We don’t have those constraints – we can just design a bridge, build it, and then have people walk across it. We can design biological systems that nobody has seen before.
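
The constraint on natural selection described above is essentially that of a greedy local search: each step must itself be an improvement. A toy sketch (my illustration, not from the post – the landscape and numbers are made up) of how that constraint traps a search on a low local peak, where an unconstrained designer can simply pick the global best:

```python
# A toy fitness landscape: a low peak at x=2 and a taller peak at x=9,
# separated by a valley that a step-by-step climber would have to cross.
def fitness(x):
    return max(5 - abs(x - 2), 10 - 2 * abs(x - 9))

def greedy_evolve(x):
    """Mimic selection: take one-unit steps, but only strict improvements."""
    while True:
        step = max((x - 1, x + 1), key=fitness)
        if fitness(step) <= fitness(x):
            return x  # every neighbor is no better: stuck on the local peak
        x = step

print(greedy_evolve(0))             # stalls on the low peak at x=2
print(max(range(11), key=fitness))  # a designer just picks the best: x=9
```

The same landscape that stops the incremental climber poses no problem for a process that can evaluate the whole design space at once.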

The question of whether we can design organisms more effective than evolution’s is still open, and it’s crucial for telling us how concerned we should be about synthetic organisms in the environment.

People are concerned about synthetic biology and the risk of organisms “escaping” from a lab, industrial setting, or medical setting into the environment, and perhaps persisting or causing local damage. They just don’t seem to be worried on an existential level. I’m not sure if they should be, but it seems like the possibility is worth considering.

For instance, a company once almost released large quantities of an engineered bacterium that turned out to produce enough ethanol in soil to kill all plants in a lab microcosm. It appears that we don’t have reason to think it would have outcompeted other soil biota and actually caused an existential or even a local catastrophe, but it was caught at the last minute and the implications are clearly troubling. (9)


  1. Ediacaran biota: The dawn of animal life in the shadow of giant protists
  2. On the causes of mass extinctions
  3. Terrestrial-Marine Teleconnections in the Devonian: Links between the Evolution of Land Plants, Weathering Processes, and Marine Anoxic Events
  4. The Permo-Triassic extinction
  5. Methanogenic burst in the End-Permian carbon cycle
  6. Natural Die-offs of Large Mammals: Implications for Conservation. I’m pretty sure I’ve seen at least a couple other sources mention this, but can’t find them right now. I had Chytridiomycosis in mind as well. This seems like an important research project and obviously has some implications for, say, biology existential risk.
  7. Rather sensationalized description from Cracked.com