The Germy Paradox – The empty sky: How close did we get to BW usage?

This is the third post in a sequence of blog posts on state biological weapons programs. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: A taboo
3.3 Filters: The shadow of nuclear weapons
4.0 Conclusion: Open questions and the future of state BW 

 I’ve heard a lot about “nuclear close calls.” Stanislav Petrov was, at one point, one human and one uncomfortable decision away from initiating an all-out nuclear exchange between the US and the USSR. Then that happened several dozen more times. As described in Part 1, there were quite a few large state biological weapons programs after WWII. Was a similar situation unfolding there, behind the scenes like the nuclear near-misses? Were any biological apocalypses-that-weren’t quietly hidden in the pages of history? What were the actual usage plans for biological weapons, and how close did any of them come to deployment?  

First, note that there are a few examples of state use of BW in modernity – the most notable being the Japanese BW program’s use of lethal weapons against Chinese civilians in the 1940’s. More recently, the Rhodesian BW program used weapons against its own citizens on a small scale in the 1970’s, and South African program operatives attempted to sicken a refugee camp’s water supply with cholera in 1989 (but failed). There are also a few ambiguous incidents, and the possibility of unnoticed usage. These incidents and possibilities are concerning, but they are rare and small in scale. 

My central claim is that since the 1940’s, there have been no instances of large-scale BW usage in international warfare – the kind the imagination conjures upon hearing the phrase “biological weapons.” My research suggests, more tentatively, that not only were there no such uses, but there were no close calls – that even in major state programs that created and stockpiled weapons in quantity, these weapons were virtually never incorporated into war plans, prepared for usage, or, especially, used. The strategies they were intended for were generally vague and retaliatory, contingent on offensive uses of biological or nuclear weapons. This analysis has a side benefit: it clarifies which biological weapons were ever made in mass quantity, or were ever seen to have particular strategic value.

Ways of classifying BW

Note two failure modes common in existing literature that tries to extrapolate from past bioweapons plans: first, an overly narrow reference class limited to weapons that were actually used in combat, of which there are very few; second, an overly broad class composed of every pathogen or plan that programs ever experimented with. The latter includes many plans that were impractical or undesirable and eventually abandoned. No literature thus far has focused on the middle ground between these extremes, or analyzed the strategic goals of such usages in depth. 

Addressing BW under a framework of planned usage is rare, and categorizations of weapon usage are also uncommon. The most common approach is sorting BW strategies by theater of war – strategic, tactical, and operational uses (Koblentz 2009, Leitenberg et al 2012). This categorization system is broadly applicable and not agent-based, and thus may apply to future weapons with novel agents as well as to past programs. On the other hand, three abstract categories are not enough to be predictive, and many usages could plausibly be described as affecting multiple theaters. Any new categorization system should factor in historical strategies and motivations as a reference class for understanding possible future uses.

No concrete intentions

To illustrate why considering usage plans is valuable, note that many sophisticated BW programs did not in fact make war plans that incorporated their weapons. In many cases, this is because early conceptual research was time-consuming and unsuccessful, and the results were considered inferior to conventional or nuclear weapons. The UK, South African, Canadian, and French programs never resulted in mass production of a finalized weapon, much less strategies for such a weapon’s use (Balmer 1997, Wheelis et al 2006). The South African and UK programs both explored unusual intended uses (assassinations and fertility control (Purkitt and Burgess 2002), and area denial (Wheelis et al 2006), respectively), but no weapons were created that would fulfill them. Even the USSR program, which weaponized more agents than any other BW program in history (Leitenberg et al 2012), failed to make concrete war plans involving BW (Wheelis et al 2006). The broader intents of these programs are still interesting, but clearly the relative importance of various programs changes when considering concrete war plans. 

“No first use” of BW

Many state biological weapons programs post-WWII, and their strategies for weapons usage, were heavily intertwined with nuclear policy. From 1946, the UK BW program was guided by a principle of “no first use” that echoes that of its nuclear program – the UK would not use biological weapons unless it was first attacked with biological or nuclear weapons (Balmer 1997). This was also the original stated purpose of the US program, although the “no first use” doctrine was officially rescinded in 1956 (Wheelis et al 2006). Information from the Iraq Survey Group investigation in 2004 revealed Saddam Hussein’s intent in 1991 to use BW to retaliate against “unconventional” weapon attacks by the US (“Iraq Survey Group” 2004). The USSR was likely also interested in using weapons after a nuclear exchange with the US (Leitenberg et al 2012). In this way, BW join nuclear weapons as a facet of mutually-assured-destruction dynamics. 

BW have long been considered “the poor man’s nuclear weapon”. Numerous studies and documents have suggested that biological weapons are cheaper per death caused than nuclear weapons (Koblentz 2004), are relatively simple to construct, and could be developed by less-wealthy states to give them the MAD protection or bargaining capability of a full-fledged nuclear power. This idea has been criticized in several ways – for instance, by Gregory Koblentz, who points out that biological attacks are not reliable in the same way that nuclear attacks are and thus do not fulfill the major goals of deterrence (Koblentz 2009), and by Sonia Ben Ouagrham-Gormley, who observes that biological weapons are in fact significantly more difficult to produce than commonly believed (Ben Ouagrham-Gormley 2014).

Nonetheless, the perception that biological weapons are an alternative to nuclear weapons is widespread, including among powerful figures like Hillary Clinton (Reuters 2011) and Bill Gates (Farmer 2017). In light of this, it is interesting to observe that all three programs for which “no first use” was a primary strategic goal – the UK, US, and Iraq programs – belonged to states that were also nuclear powers or pursuing nuclear weapons. The UK and US BW programs were indeed later abandoned in favor of developing and relying on nuclear weapons programs for defense and deterrence (Balmer 1997).

Non-lethal weapons

The popular image of BW as lethal antipersonnel weapons does not describe the whole of BW development, but certainly describes large swathes of it. The USSR, US, and Iraq programs all mass-produced lethal weapons; for example, anthrax-filled munitions. Of these, it seems that the Iraq program was the only one that actually deployed such weapons. Saddam Hussein was particularly interested in anthrax, not just for its virulence, but for its ability to remain active in soil and potentially cause death over many years.

Despite the prominence of lethal weapons, both in actuality and in the public eye, a great deal of BW effort has focused on non-lethal weapons. These may be incapacitating antipersonnel weapons, or weapons with non-human targets such as plants or animals, meant to attack the enemy’s agricultural system. The trade-offs between lethal and non-lethal biological weapons have not been well explored in the literature. It has been suggested that caring for incapacitated soldiers, particularly those who are sick over a long period of time, can be more burdensome on an enemy than simply burying and replacing killed soldiers (Pappas et al 2006). While strongly dependent on the pathogen and expected death rates, this is a practical reason these weapons may be preferred. Inflicting nonlethal or economic damage is also seen as more humane than lethal damage, and perhaps more politically acceptable. Political acceptability was, for instance, part of the internal US motivation for using anti-plant chemical weapons rather than human-targeted weapons in the Vietnam War (Pearson 2006). However, humanitarian or political motivations for developing biological weapons have not been well documented in research.

Some of the first modern BW production was for weapons aimed at cattle: during WWII, the US and UK collaborated on producing anthrax-infected cattle cakes, meant to be scattered over German fields to damage agriculture. Later, the US program explored agricultural weapons until 1958 (Hay 1999), and two of the twelve weapons finalized by the US program were agricultural – rice blast and wheat stem rust (Wheelis et al 2006). The USSR did not stockpile anti-agriculture weapons, but a variety of them were developed by a sub-program of the BW apparatus code-named Ecology (Koblentz 2009), and production was designed to scale quickly if needed (Alibek 1999). These were intended not for conditions of total war with the US, but for smaller wars with smaller nations (Alibek 1999).

Both the US and the USSR also invested in incapacitating weapons for use against humans. The USSR weaponized the rarely-lethal Brucella sp. and Burkholderia mallei, as well as Venezuelan equine encephalitis virus (VEEV), which causes flu-like symptoms. Four out of ten standardized antipersonnel weapons produced by the US BW program were not intended to be lethal: Brucella suis, Coxiella burnetii, and VEEV, as well as staphylococcal enterotoxin B (Wheelis 2006). In what may have been the closest call to major use of a BW since WWII, airplanes were loaded with VEEV-containing weapons during the Cuban missile crisis (Guillemin 2004). This was unauthorized, but nonetheless seems to have come very close to real battlefield usage.

Here’s a summary of my findings by program:

| Country | Program active period | Usage | Strategic goal for weapons | Degree of progress in creation |
|---|---|---|---|---|
| USSR | 1925-1990 | Never used | Somewhat unclear; as second-strike capability (to ensure complete destruction); as a first-strike weapon against small countries | Multiple biological weapons created and stockpiled |
| UK | 1942-1956 | Never used | As second-strike capability | No weapons ever developed to completion |
| *USA* | 1942-1970 | Never used; nonlethal weapon loaded onto a plane in one instance | Unclear; complement to conventional weapons | Multiple biological weapons created and stockpiled |
| Canada | 1940’s | Never used | Unclear | No weapons ever developed to completion |
| France | 1940’s | Never used | Unclear | No weapons ever developed to completion |
| **Japan** | 1940’s | Used weapons in international warfare | Death, area denial, regular weapon of war | Multiple biological weapons created, stockpiled, and used |
| Israel | 1948-??? | Probably never used | Unclear; perhaps as deterrence | Unclear |
| **Rhodesia** | 1975-1979 | Used weapons against dissidents inside the country | As internal tool | Crude weapons created and used (e.g. placing cholera in food and water supplies) |
| Iraq | 1980s-1996 | Never used | As deterrence, second-strike | Multiple biological weapons created and stockpiled |
| *South Africa* | 1983-1988 | One instance of attempted use (failed) | Assassination, fertility control, possibly as regular weapon of war | No weapons ever developed to completion |

[Bold country name = BW used. Italics: BW use attempted or “almost used.”]


Alibek, Kenneth. “The Soviet Union’s Anti-Agricultural Biological Weapons.” Annals of the New York Academy of Sciences 894, no. 1 (December 1, 1999): 18–19.

Balmer, Brian. “The drift of biological weapons policy in the UK 1945–1965.” The Journal of Strategic Studies 20, no. 4 (1997): 115-145.

Farmer, Ben. “Bioterrorism Could Kill More People than Nuclear War, Bill Gates to Warn World Leaders.” The Telegraph, February 17, 2017.

Guillemin, Jeanne. Biological weapons: From the invention of state-sponsored programs to contemporary bioterrorism. Columbia University Press, 2004.

Hay, Alastair. “Simulants, Stimulants and Diseases: The Evolution of the United States Biological Warfare Programme, 1945–60.” Medicine, Conflict and Survival 15, no. 3 (July 1, 1999): 198–214.

“Iraq Survey Group Final Report.” Iraq Survey Group, September 30, 2004.

Koblentz, Gregory. Living Weapons. Ithaca: Cornell University Press, 2009.

Koblentz, Gregory. “Pathogens as weapons: The international security implications of biological warfare.” International security 28, no. 3 (2004): 84-122.

Leitenberg, Milton, Raymond Zilinskas, and Jens Kuhn. The Soviet Biological Weapons Program: A History. Cambridge: Harvard University Press, 2012.

Ouagrham-Gormley, Sonia Ben. Barriers to Bioweapons: The Challenges of Expertise and Organization for Weapons Development. Cornell University Press, 2014.

Pappas G, Panagopoulou P, Christou L, Akritidis N. Brucella as a biological weapon. Cell Mol Life Sci 2006; 63: 2229-36.

Pearson, Alan. “Incapacitating Biochemical Weapons.” The Nonproliferation Review 13, no. 2 (July 1, 2006): 151–88.

Purkitt, Helen E., and Stephen Burgess. “South Africa’s Chemical and Biological Warfare Programme: A Historical and International Perspective.” Journal of Southern African Studies 28, no. 2 (2002): 229–53.

Thomson Reuters. “Hillary Clinton Warns Terror Threat from Biological Weapons Is Growing.” The National Post, December 7, 2011.

Wheelis, Mark, Lajos Rozsa, and Malcolm Dando. Deadly Cultures: Biological Weapons since 1945. Cambridge, MA: Harvard University Press, 2006.

The Germy Paradox – The empty sky: A history of state biological weapons programs

This is the second post in a sequence of blog posts on state biological weapons programs. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: A taboo
3.3 Filters: The shadow of nuclear weapons
4.0 Conclusion: Open questions and the future of state BW 

A lot of ink has been spilled describing the history of modern BW programs. Martin et al’s “Chapter 1: History of Biological Weapons: From Poisoned Darts to Intentional Epidemics” in Medical Aspects of Biological Warfare (2018) [PDF] is a superb short summary with a focus on the agents and weapons explored by these programs. It’s far better than I could write, and relevant to the rest of this sequence. I recommend reading it.

I present a condensed timeline of state BW programs within the last century here as well:

| Country | Program active period | Notes |
|---|---|---|
| USSR | 1925-1990 | Major expansion and resurgence around 1970. By far the largest BW program to have ever existed. A great deal of our information on it is gleaned from Ken Alibek’s memoir Biohazard. |
| Japan | 1940’s | Weapons were extensively tested and used against the Manchurian Chinese population. |
| USA | 1942-1970 | A fairly large and well-run program, comparable in technical competence to the USSR program despite having 10-fold less funding. Ended with the US initiating the Biological Weapons Convention, which came into effect in 1975. |
| UK | 1942-1956 | Worked closely with the US BW program. Later helped introduce the Biological Weapons Convention. |
| Canada | 1940’s | Partially sponsored/encouraged by the US. |
| France | 1940’s | Details are scarce. |
| Israel | 1948-??? | Details are scarce. |
| Rhodesia | 1975-1979 | Crude weapons created and used (e.g. placing cholera in food and water supplies). |
| Iraq | 1980s-1996 | Created in response to the external threat represented by Israel. The Iraq Survey Group Final Report is a comprehensive summary. |
| South Africa | 1983-1988 | Mostly controlled by one person. Focus was on weapons against internal threats and assassination, not large-scale offensive weapons against other nations. |


The reference mentioned in the first paragraph is: Martin, James W, George W Christopher, and Edward M Eitzen. “Chapter 1: History of Biological Weapons: From Poisoned Darts to Intentional Epidemics.” In Medical Aspects of Biological Warfare, 20. Textbooks of Military Medicine. Fort Sam Houston: Office of the Surgeon General, Borden Institute, 2018.

Iraq: See the 2004 US Iraq Survey Group Final Report. (Available here.)

Israel: See the Nuclear Threat Initiative’s summary (available here.)

Other countries’ details are either in the Martin et al piece linked above, or cross-pollinated from other sources.

The Germy Paradox: An Introduction

I’m writing a sequence of blog posts on state biological weapons programs. This is the first post. Others will be linked here as they come out:

1.0 Introduction
2.1 The empty sky: A history of state biological weapons programs
2.2 The empty sky: How close did we get to BW usage?
3.1 Filters: Hard and soft skills
3.2 Filters: A taboo
3.3 Filters: The shadow of nuclear weapons
4.0 Conclusion: Open questions and the future of state BW 

This is a bit of a review of my first year of graduate school. Unintentionally, many of my projects that year revolved around a question I’ve been mulling over for a long time. I’m calling it by a terrible nickname, “the germy paradox”, as a counterpart to the question posed by Enrico Fermi on the apparent nonexistence of aliens in a vast and star-filled night sky: “If biological weapons are as cheap and deadly as everyone seems to fear, then where are they?”

There are few known state bioweapon (BW) programs, and very few at all since the 1970’s. This is despite greater interest and advances in life sciences, as well as a lot of active concern over biodefense. Why don’t we see many state BW programs? Is this for concrete and technological reasons, for strategic reasons, or for reasons of social unacceptability? If some of these reasons go away, for instance as biotechnologies become cheaper and easier to weaponize (as many, e.g. Mukunda et al 2009, Schmidt 2008, believe they will), how safe will we be then?

The history of the biological weapons taboo is multifaceted and will be addressed in greater detail. But some basic background will help kick off this series and explain the question. Biological and chemical weapons are often considered in the same breath (Smith 2014), referred to in early history as “poison” weapons (Cole 1998). Explicit prohibitions of the use of poison weapons in warfare began in the 1400s (Price 1995). Use of biological (or at the time, “bacteriological”) weapons was formally forbidden in combat by the 1925 Geneva Protocol, but several state BW programs went on to proliferate during WWII and the Cold War (Zanders 2003). These programs often stated that their weapons would be used only in retaliation against a nuclear or biological attack, in a version of the popular nuclear “no first use” policy, and they rarely developed clear strategies for BW wartime usage (see Wheelis and Rózsa 2006).

States today are constrained by the Biological Weapons Convention, an international treaty banning not only usage but development of BW. It was introduced in 1972, championed by the US alongside the dismantlement of the US’s own program, and has been signed by all but a handful of countries (UN Office).

 Today, no nation claims to have a BW program, nor to have used BW for large-scale attacks since WWII.

I use the language of the Fermi Paradox to structure this series. 

The first part of this series, “The Empty Sky,” describes the modern history of biological weapons. It then makes the case that we have never seen large-scale usage or near-usage of these weapons since WWII. 

The second part, “Filters”, borrows from Robin Hanson’s model of “great filters” that stand in the way of planets creating space-faring civilizations, thus creating the Fermi Paradox (Hanson, 1998). This section explores reasons we don’t see biological weapons as much as we might expect.

New posts in this sequence will come out twice a week, on Mondays and Thursdays, until it’s done. (This is to give readers time to keep up.)


  1. Mukunda, Gautam, Kenneth A. Oye, and Scott C. Mohr. “What rough beast? Synthetic biology, uncertainty, and the future of biosecurity.” Politics and the Life Sciences 28, no. 2 (2009): 2-26
  2. Schmidt, Markus. “Diffusion of synthetic biology: a challenge to biosafety.” Systems and synthetic biology 2, no. 1-2 (2008): 1-6.
  3. Smith, Frank. American biodefense: How dangerous ideas about biological weapons shape national security. Cornell University Press, 2014.
  4. Cole, Leonard A. “The Poison Weapons Taboo: Biology, Culture, and Policy.” Politics and the Life Sciences 17 (September 1, 1998): 119–32.
  5. Price, Richard. “A Genealogy of the Chemical Weapons Taboo.” International Organization 49, no. 1 (1995): 73–103.
  6. Zanders, Jean Pascal. “International Norms Against Chemical and Biological Warfare: An Ambiguous Legacy.” Journal of Conflict and Security Law 8, no. 2 (October 1, 2003): 391–410.
  7. Wheelis, Mark, Lajos Rózsa, and Malcolm Dando. Deadly Cultures: Biological Weapons since 1945. Cambridge, MA: Harvard University Press, 2006.
  8. “The Biological Weapons Convention.” The United Nations Office at Geneva. Accessed December 11, 2018.
  9. Hanson, Robin. “The Great Filter – Are We Almost Past it?”. 1998. (Web Archive)

Tiddlywiki for organizing notes and research

Happy new school year to my fellow students! With my first year of grad school under my belt, and my sword and shield out for Round 2, I wanted to share a tool that’s helped me on my journey.

Two years ago, my go-to system for organizing my research and writing was “citations in 3 different programs” + “pile everything into a haphazard series of google docs and hope for the best”. I figured this wasn’t great. After doing some reading and trying several alternatives, I discovered Tiddlywiki.

Tiddlywiki is an ancient open-source wiki application in the form of a single HTML file. It has all the tools you need to make a wiki out of “tiddlers” – self-contained chunks of info that you can tag and link to each other. When you save the wiki, the program and your text all get wrapped up together into the same .html file – it both stores your info and is the program for running the wiki. It works in any web browser, as well as in dedicated desktop applications.

I stuck with it and here’s why:

  • Wiki format: Wikis seem really compatible with the way my brain works. If I take notes on a book or article, that source gets its own tiddler on the wiki. They can then get interwoven, crosslinked, expanded upon, etc.
  • Elegant: Does most things I want it to. Easy to link to tiddlers and drag them or other files in from other wikis/folders. The structure is transparent and customizable.
  • Robust: Tiddlywikis from a decade ago are still perfectly functional today. The entire program and dataset lives in one small html file that runs on anything with a web browser.
  • Meta-aesthetics: Feeding all my data to Google is a little worrying. Tiddlywiki, meanwhile, is open-source and runs from your computer. The fact that the program is a quine is really neat.
  • Encryption: Tiddlywikis have an encryption function baked in. I don’t know if it’s very good. Consider using Veracrypt for better security. But if you don’t want to do that, here you go.  This also means you can upload your wikis and backups to cloud services while keeping them encrypted. (Go to “Tools” in the sidebar, then click on the “set password” button. After you set a password, you can look at the .html file text to be sure that, yes, everything is encrypted into nonsense characters.)
  • Customizable: Easily change the color scheme, any text or formatting, the layout, etc. It’s extremely adaptable. You can also install a variety of plugins, though I haven’t felt the need to myself as of yet.
  • Transportable: My wikis live on a flashdrive and can work on any computer. I took all my research with me to and from work every day this summer for an internship.

Things I like less

  • Saving is not obvious. The simplest version is “edit a copy of a blank tiddlywiki in a web browser, save it locally to your computer or a flash drive, repeat every time you edit it”, which is kind of a pain.
    • I work on various computers, so my Tiddlywikis are saved on a flash drive. I edit them in web browsers, and save them back to the flash drive when I’m done. I back them up every week.
    • On my Ubuntu laptop, I edit them with the program TiddlyDesktop, which makes saving easier.
  • You can use images, but they get saved as raw code into the html file itself (so every image makes the file that much larger), and there aren’t tools for manipulating them. (There is a cute, tiny, and almost useless drawing program baked in.) I tend to save a few images, like graphs or figures from papers, but wouldn’t personally use Tiddlywiki for image-heavy work.
  • Some features (e.g. spellcheck, in-text search with highlighting) depend on the browser or other program you’re using to edit the wikis.
  • Kind of old-looking, not maximally aesthetic.
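Since a Tiddlywiki is just one HTML file, the save-and-back-up loop described above is easy to automate. Here’s a minimal Python sketch of a timestamped-backup helper – the function name and the paths in the example are hypothetical, purely for illustration:

```python
import shutil
import time
from pathlib import Path

def backup_wiki(wiki_path, backup_dir):
    """Copy a TiddlyWiki .html file into backup_dir under a timestamped name."""
    wiki = Path(wiki_path)
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)  # create the backup folder if needed
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = dest / f"{wiki.stem}-{stamp}{wiki.suffix}"
    shutil.copy2(wiki, target)  # copy2 preserves the file's modification time
    return target

# Example usage (hypothetical flash-drive paths):
# backup_wiki("/media/flashdrive/research.html", "/media/flashdrive/backups")
```

Run it (or cron it) weekly and the flash drive accumulates dated snapshots you can roll back to.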

The number of wikis you have is up to you. I started with one wiki for a specific writing project and one wiki for work, notes and research. My active Tiddlywikis now include:

  • Grad school material
  • Internship research material
  • General writing, notes, and personal research
  • Writing and worldbuilding/characterization/plot details for a novel
  • Recipe storage
  • Quotes and poetry I like

Your mileage may vary.

How do I try it?

Here’s a nice one to explore as an example: a thesis website in Spanish. Here’s one on philosophy. (Note that you can’t actually edit the versions that appear on the website. You can locally save the whole wiki, though, along with any changes you make to it.)

If you like it, here are some resources to get you started. This is the official website, which has lots of helpful documentation. (Note that it’s also a tiddlywiki!)

Here are some youtube videos I also found helpful.

After making a few tiddlywikis, I found that I kept making the same tweaks to them to get them set up in a way useful for me. In that light, I made a new “blank” or “empty” tiddlywiki that had those changes baked in already.

Here it is: the Eukaryote Writes Blog empty tiddlywiki. You may find it better than the default empty wiki. It comes with a couple new color schemes, a table of contents, and some layout tweaks, among other small changes. 

Other research tools

All hail the exobrain!

I keep track of research citations formally with Zotero, or the tool my work prefers. For informal reading, I’ll also just note the authors and title and/or URL of the source (in my tiddlywiki!) so I can find it later.

For keeping track of time spent working, I’ve gotten some utility out of KanbanFlow. I like the Pomodoro Technique, and KanbanFlow has both pomodoro timers and a nice task-sorting and task-prioritization system built in. I currently don’t worry about tracking time, and use Google Calendar, a bullet journal, and a bastardized Kanban Board variant to keep my brain in order.

Previously, I used the website MarinaraTimer to time pomodoros. I love it for exactly two reasons: the ability to pause pomodoros, and the sound effect “Ominous Woosh”. 

A giant whiteboard for $14 (plus nails)

All my friends love whiteboards, because they’re giant nerds. It’s only a good party when somebody starts illustrating a point on the whiteboard. Also: to-do lists you can’t miss.

This isn’t my idea, but it’s good and I thought I’d share it. Home Depot and Lowe’s sell a material called “thrifty white panelling” or “smooth white hardboard”. It’s cheap – mine was about $14 for a 4-foot-by-8-foot sheet. I asked the store staff to cut the panel in half so it would fit in my car, which they did for free.

It’s easily chipped, so watch out if you want it to look flawless.

Then I stuck it to the wall of my studio apartment. I used drywall anchors with washers to screw it on and get a good grip on the material. (I think drywall anchors were overkill, and I can’t recommend them for renters until I see how cleanly they come out of the wall – so if you have a better low-damage attachment system, maybe try that first. I wasn’t able to get the panel to stick with Command velcro strips – the strips kept detaching from the panel material – but other people on the internet seem to have found success with this.)

It acts as a whiteboard as-is. Writing gets hard to erase if you leave it up there for a long time, but you can clean it with whiteboard-cleaning-solution, alcohol, or by coloring over the marks with another whiteboard marker and then erasing that. Internet people also report that you can buff the entire surface with turtle wax before hanging it, and this makes it more stain-resistant.

I didn’t do that, and it’s still pretty good, especially for $14.

Let me know if you try this!

Naked mole-rats: A case study in biological weirdness

Epistemic status: Speculative, just having fun. This piece isn’t well-cited, but I can pull up sources as needed – nothing about mole-rats is my original research. A lot of this piece is based on Wikipedia.

When I wrote about “weirdness” in the past, I called marine invertebrates, archaea viruses, and Florida Man stories “predictably weird”. This means I wasn’t really surprised to learn any new wild fact about them. But there’s a sense in which marine invertebrates both are and aren’t weird. I want to try operationalizing “weirdness” as “amount of unpredictability or diversity present in a class” (or “in an individual”) compared to other members of its group.

So in terms of “animals you hear about” – well, you know the tigers, the mice, the bees, the tuna fish, the songbirds, whatever else comes up in your life. But “deep sea invertebrates” includes a variety of improbable creatures – a betentacled neon sphere covered in spikes, a six-foot-long disconcertingly smooth and flesh-colored worm, bisexual squids, etc. Hey! Weird! That’s weird.

But looking at a phylogenetic tree, we see really quickly that “invertebrates” represent almost the entire animal tree of life.


Invertebrates represent most of the strategies that animals have attempted on earth, and certainly most of the animals on earth. Vertebrates are the odd ones out.

But you know which animals are profoundly weird, no matter which way you look at it? Naked mole rats. Naked mole-rats have like a dozen properties that are not just unusual, not just strange, but absolutely batshit. Let’s review.

1. They don’t age

What? Well, for most animals, their chance of dying goes up over time. You can look at a population and find something like this:


Mole-rats have the same chance of dying at any age. Their graph looks like this:


They’re joined, more or less, by a few species of jellyfish, flatworms, turtles, lobsters, and at least one fish.

They’re hugely long-lived compared to other rodents, seen in zoos at 30+ years old compared to the couple brief years that rats get.
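One way to make the contrast concrete: for a typical aging mammal, the hazard of death rises with age (often modeled with a Gompertz curve), while for mole-rats it stays roughly flat. A quick Python sketch, with made-up parameters chosen purely for illustration:

```python
import math

def survival_constant_hazard(t, lam):
    """Survival to age t under a constant death hazard (mole-rat-like)."""
    return math.exp(-lam * t)

def survival_gompertz(t, a, b):
    """Survival to age t under a Gompertz hazard a*exp(b*t) (typical aging mammal)."""
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

def chance_of_dying_next_year(survival, age, *params):
    """P(die between age and age+1, given alive at age)."""
    return 1 - survival(age + 1, *params) / survival(age, *params)

# Constant hazard: the chance of dying in the next year is identical at every age.
# Gompertz hazard: that same conditional chance climbs steadily with age.
for age in (1, 5, 10):
    flat = chance_of_dying_next_year(survival_constant_hazard, age, 0.1)
    aging = chance_of_dying_next_year(survival_gompertz, age, 0.01, 0.3)
```

The flat-hazard curve is what mole-rat survival data resembles; the Gompertz curve is what you’d see for mice, rats, or humans.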

2. They don’t get cancer

Cancer generally seems to be the curse of multicellular beings, but naked mole-rats are an exception. A couple mole-rats have developed cancer-like growths in captivity, but no wild mole-rat has ever been found with cancer.

3. They don’t feel some forms of pain

Mole-rats don’t respond to acid or capsaicin, which is, as far as I know, unique among mammals.

4. They’re eusocial

Definitely unique among mammals. Like bees, ants, and termites, naked mole-rats have a single breeding “queen” in each colony, and other “worker” individuals exist in castes that perform specific tasks. In an evolutionary sense, this means that the “unit of selection” for the species is the queen, not any individual – the queen’s genes are the ones that get passed down.

They’re also a fascinating case study of an animal whose existence was deduced before it was proven. Nobody knew about eusocial mammals for a long time. In 1974, entomologist Richard Alexander, who studied eusocial insects, wrote down a set of environmental characteristics he thought would be required for a eusocial mammal to evolve. In 1981 and over the following decade, naked mole-rats – a perfect match for his predictions – were found to be eusocial.

5. They don’t have fur

Obviously. But aside from genetic flukes or domesticated breeds, that puts them in a small unlikely group with only some marine mammals, rhinoceroses, hippos, elephants, one species of boar, and… us.


You and this entity have so much in common.

6. They’re able to survive ridiculously low oxygen levels

They use very little oxygen during normal metabolism, much less than comparably sized rodents, and they can survive for hours at 5% oxygen (about a quarter of normal levels).

7. Their front teeth move back and forth like chopsticks

I’m not actually sure how common this is in rodents. But it really weirded me out.

8. They have no regular sleep schedule

This is weird, because jellyfish have sleep schedules. But not mole-rats!

9. They’re cold-blooded

They have basically no ability to adjust their body temperature internally, perhaps because their caves tend to be rather constant temperatures. If they need to be a different temperature, they can huddle together, or move to a higher or lower level in their burrow.

All of this makes me think that mole-rats must have some underlying unusual properties which lead to all this – a “weirdness generator”, if you will.

A lot of these are connected to the fact that mole-rats spend almost their entire lives underground. There are lots of burrowing animals, but “almost their entire” is pretty unusual – they don’t surface to find food, water, or (usually) mates. (I think they might only surface when digging tunnels and when a colony splits.) So this might explain (8) – no need for a sleep schedule when you can’t see the sun. It also seems to explain (5) and (9), because thermoregulation is unnecessary when they’re living in an environment that’s a pretty constant temperature.

It probably explains (6) because lower burrow levels might have very little oxygen most of the time, although there’s some debate about this – their burrows might actually be pretty well ventilated.

And Richard Alexander’s 12 postulates that would lead to a eusocial vertebrate – plus some other knowledge of eusociality – suggest that this underground climate, when combined with the available lifestyle and food source of a mole-rat, should lead to eusociality.

It might also be the source of (2) and (3) – people have theorized that higher CO2 or lower oxygen levels in burrows might reduce DNA damage or affect neuron function or something. (This would also explain why only mole-rats in captivity have had tumors, since they’re kept at atmospheric oxygen levels.) These explanations still seem to be up in the air, though. Mole-rats clearly have a variety of fascinating biochemical tricks that are still being understood.

So there’s at least one “weirdness generator” that leads to all of these strange mole-rat properties. There might be more.

I’m pretty sure it’s not the chopstick teeth (7), at least – but as with many predictions one could make about mole-rats, I could easily be wrong.


To watch some naked mole-rats going about their lives, check out the Pacific Science Center’s mole-rat live camera. It’s really fun, if a writhing mass of playful otters that are also uncooked hotdogs sounds fun to you.


Small animals have enormous brains for their size

One thing that surprised me when working on How Many Neurons Are There was the number of neurons in the brains of very small animals.

Let’s look at a classic measurement, the brain-mass:body-mass ratio.* Smarter animals generally have larger brains for their body mass, compared to animals of similar size. Among large animals, humans have famously enormous brains for our size – the highest ratio of any large animal, it seems. But as we look at smaller animals, that ratio goes up again. A mouse has a brain:body-mass ratio comparable to a human’s. Getting even smaller, insects have higher brain:body-mass ratios than any vertebrate we know of: more like 1 in 6.

But brain mass isn’t quite what we want – brains are mostly water, and there are a lot of non-neuron cells in brains. Conveniently, I also have a ton of numbers put together on number of neurons. (Synapse counts might be better, but those are hard to come by for different species. Ethology would also be interesting.)

And the trend holds roughly true for neuron-count:body-mass as well. Humans do have unusually many neurons per kilogram compared to other animals, but far, far fewer than, for instance, a small fish or an ant.
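To make that comparison concrete, here’s a quick sketch with round numbers. The neuron counts are standard ballpark figures, and the body masses are my own rough assumptions, so treat the exact ratios as illustrative:

```python
# Rough neurons-per-kilogram comparison. Neuron counts are ballpark
# figures from the literature; body masses are assumed round averages.
animals = {
    # name: (neurons, body mass in kg)
    "human": (86e9, 70),      # ~86 billion neurons, 70 kg adult
    "mouse": (71e6, 0.02),    # ~71 million neurons, ~20 g
    "ant":   (250e3, 3e-6),   # ~250,000 neurons, ~3 mg worker
}

neurons_per_kg = {name: n / mass for name, (n, mass) in animals.items()}
# human ~1.2e9/kg, mouse ~3.6e9/kg, ant ~8e10/kg: the smaller the
# animal, the more neurons it packs per unit of body mass.
```

The exact figures vary by species and study, but the ordering is robust: small animals carry wildly more neurons per kilogram than large ones.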


If you believe some variation on one of the following:

  • Different species have moral worth in proportion to how many neurons they have
  • Different animal species have moral worth in proportion to how smart they are
  • Different species have moral worth in proportion to the amount of complex thought they can do
  • Different species have moral worth in proportion to how much they can learn**

…then this is an indication that insects and other small animals have much more moral worth than their small size suggests.

How much more?

Imagine, if you will, a standard 5-gallon plastic bucket.


Now imagine that bucket contains 300,000 ants – about two pounds.*** Or a kilogram, if you prefer.

Imagine the bucket. Imagine the equivalent of a couple large apples inside it.


A bucket. Two pounds of ants.

Those ants, collectively, have as many neurons as you do.


(Graphic design is my passion.)

You may notice that an adult human brain actually weighs more than two pounds. What’s going on? Simply, insect brains are marvels of miniaturization. Their brains have a panoply of space-saving tricks, and the physical cells are much smaller.
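As a sanity check on the bucket claim, here’s the back-of-envelope version, using assumed round numbers (~250,000 neurons and ~3 mg per worker ant; both vary widely by species):

```python
# Back-of-envelope check of the ant-bucket claim. Per-ant neuron count
# and mass are assumed round figures; both vary a lot by ant species.
HUMAN_NEURONS = 86e9      # ~86 billion neurons in an adult human brain
NEURONS_PER_ANT = 250e3   # ~250,000 neurons per ant
ANT_MASS_KG = 3e-6        # ~3 milligrams per ant

ants_needed = HUMAN_NEURONS / NEURONS_PER_ANT   # ~344,000 ants
total_mass_kg = ants_needed * ANT_MASS_KG       # ~1 kg, i.e. ~2 pounds
```

So roughly 300,000–350,000 ants, weighing on the order of a kilogram – consistent with the bucket picture above.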


*Aren’t the cool kids using encephalization quotients rather than brain-mass:body-mass ratios? Yes, when it comes to measurements of higher cognition in vertebrates, the encephalization quotient is (as far as I’m aware) thought of as better. But there’s debate about that too. Referring to abilities directly probably makes sense for assessing abilities. I don’t know much about this and it’s not the focus of this piece, anyway.

**Yes, I know that only the first of these is directly relevant to this piece, and that all of the others are different. I’m just saying it’s evidence. We don’t have a lot of behavioral data on small animals anyways, but I think we can agree there’s probably a correlation between brain size and cognitive capacity.

***Do two pounds of “normal-sized” ants actually fit in a five-gallon bucket? Yes. I couldn’t find a number for “ant-packing density” in the literature, but thanks to the valiant efforts of David Manheim and Rio Lumapas, it seems to be between 0.3 gallons (5 cups) and 5.5 gallons. It depends on size and whether ants pack more like spheres or more like blocks.


Suggested readings: Brian Tomasik on judging the moral importance of small minds (link is to the most relevant part but the whole essay is good) and on “clock speeds” in smaller animal brains, Suzana Herculano-Houzel on neuron count and intelligence in elephants versus humans, and How Many Neurons Are There. (The last piece also contains most of the citations for this week. Ask if you want specific ones.)

This piece is crossposted to the Effective Altruism Forum.

Spaghetti Towers

Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.

Someone or something builds the first Part A.


Later, someone wants to put a second Part B on top of Part A, either out of convenience (a common function, just somewhere to put it) or as a refinement to Part A.


Now, suppose you want to tweak Part A. If you do that, you might break Part B, since it interacts with bits of Part A. So you might instead build Part C on top of the previous ones.


And by the time your system looks like this, it’s much harder to tell what changes you can make to an earlier part without crashing some component, so you’re basically relegated to throwing another part on top of the pile.


I call these spaghetti towers for two reasons: One, because they tend to quickly take on circuitous knotty tangled structures, like what programmers call “spaghetti code”. (Part of the problem with spaghetti code is that it can lead to spaghetti towers.)

Especially since they’re usually interwoven in multiple dimensions, and thus look more like this:


“Can you just straighten out the yellow one without touching any of the others? Thanks.”

Second, because shortsightedness in the design process is a crucial part of spaghetti machines. In order to design a spaghetti system, you throw spaghetti against a wall and see if it sticks. Then, when you want to add another part, you throw more spaghetti until it sticks to that spaghetti. And later, you throw more spaghetti. So it goes. And if you decide that you want to tweak the bottom layer to make it a little more useful – which you might want to do because, say, it was built out of spaghetti – without damaging the next layers of gummy partially-dried spaghetti, well then, good luck.

Note that all systems have load-bearing, structural pieces. This does not make them spaghetti towers. The distinction about spaghetti towers is that they have a lot of shoddily-built structural components that are completely unintentional. A bridge has major load-bearing components – they’re pretty obvious, strong, elegant, and efficiently support the rest of the structure. A spaghetti tower is more like this.


The motto of the spaghetti tower is “Sure, it works fine, as long as you never run lukewarm water through it and turn off the washing machine during thunderstorms.” || Image from the always-delightful r/DiWHY.

Where do spaghetti towers appear?

  • Basically all of biology works like this. Absolutely all of evolution is made by throwing spaghetti against walls and seeing what sticks. (More accurately, throwing nucleic acid against harsh reality and seeing what successfully makes more nucleic acid.) We are 3.5 billion years of hacks in fragile trench coats.
    • Scott Alexander of Slate Star Codex describes the phenomenon in neurotransmitters, but it’s true for all of molecular biology:

You know those stories about clueless old people who get to their Gmail account by typing “Google” into Bing, clicking on Google in the Bing search results, typing “Gmail” into Google, and then clicking on Gmail in the Google search results?

I am reading about serotonin transmission now, and everything in the human brain works on this principle. If your brain needs to downregulate a neurotransmitter, it’ll start by upregulating a completely different neurotransmitter, which upregulates the first neurotransmitter, which hits autoreceptors that downregulate the first neurotransmitter, which then cancel the upregulation, and eventually the neurotransmitter gets downregulated.

Meanwhile, my patients are all like “How come this drug that was supposed to cure my depression is giving me vision problems?” and at least on some level the answer is “how come when Bing is down your grandfather can’t access Gmail?”

  • My programming friends tell me that spaghetti towers are near-universal in the codebases of large companies. It would theoretically be nice if every function were neatly ordered, but in practice the thing you’re working on has three different dependencies, two of which are unmaintained and were abandoned when the guy who built them went to work at Google, and you can never be 100% certain that your code tweak won’t crash the site.
  • I think this also explains some of why bureaucracies look and act the way they do, and are so hard to change.
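Here’s a minimal, made-up illustration of the programming version, where a later part quietly depends on an accident of an earlier part:

```python
# A tiny spaghetti tower: Part B quietly depends on an incidental
# detail of Part A, so "improving" Part A breaks Part B.

def part_a(items):
    # The original hack: returns a sorted copy, but also, incidentally,
    # drops duplicates, because it happened to be built with set().
    return sorted(set(items))

def part_b(items):
    # Built later, on top of Part A, silently relying on Part A's
    # de-duplication -- an accident of implementation, not the spec.
    return len(part_a(items))  # intended: "number of distinct items"

assert part_b([3, 1, 3, 2]) == 3  # works... for now

def part_a_cleaned_up(items):
    # Someone tidies Part A into a plain sort (the "obvious" fix)...
    return sorted(items)

# ...and anything that leaned on the accidental behavior is now wrong:
# len(part_a_cleaned_up([3, 1, 3, 2])) is 4, not 3.
```

Multiply that by a few thousand functions and a decade of deadlines, and you get the codebases my friends describe.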

I think there are probably a lot of examples of spaghetti towers, and they probably have big ramifications for things like, for instance, what systems evolution can and can’t build.

I want to do a much deeper and more thoughtful analysis about what exactly the implications here are, but this has been kicking around my brain for long enough and all I want to do is get the concept out there.

Does this feel like a meaningful concept? Where do you see spaghetti towers?

Crossposted to LessWrong.

Happy solstice from Eukaryote Writes Blog. Here’s a playlist for you (or listen to Raymond Arnold’s Secular Solstice music.)

Tip: use a digital packing list

Photo by Jean-Philippe Boulet, under a CC BY-SA 3.0 license.

I have a Google spreadsheet I’ve used for the past three and a half years when travelling. It has everything I regularly need on multi-day trips, space for extra items to add on specific trips, and checkboxes.

To pack, I lay out my suitcase and backpack, and throw things into them while referring to my list. Once everything on the spreadsheet is ticked off, I zip up my suitcases and am ready to go.

It’s very straightforward. You can add or remove items to the spreadsheet over time, and just clear off the extra items for each new trip.

Here’s my spreadsheet. You probably don’t need exactly the same things as me, so feel free to save your own copy and change the lists. Happy trails, friends.

The funnel of human experience

[EDIT: A previous version of this post had some errors. Thanks to jeff8765 for pinpointing the error and to esrogs in the comments for bringing it to my attention as well. This has been fixed. Also, I wrote FHI when I meant FLI.]

The graph of the human population over time is also a map of human experience. Think of each year as being “amount of human lived experience that happened this year.” On the left, we see the approximate dawn of the modern human species in 50,000 BC. On the right, the population exploding in the present day.


It turns out that if you add up all these years, 50% of human experience has happened after 1309 AD. 15% of all experience has been experienced by people who are alive right now.

I call this “the funnel of human experience” – the fact that because of a tiny initial population blossoming out into a huge modern population, more of human experience has happened recently than time would suggest.

50,000 years is a long time, but 8,000,000,000 people is a lot of people.
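If you want to see the shape of the calculation, here’s a toy version. The population figures below (in millions) are coarse, illustrative guesses, not the dataset this post actually uses, so the median year it produces is only in the right ballpark, not exactly 1309 AD:

```python
# Toy version of the funnel calculation: integrate rough population
# estimates over time, then find the year that splits total lived
# experience in half. Populations (in millions) are assumed round
# numbers, NOT the post's actual dataset.
import math

estimates = [  # (year, population in millions)
    (-50000, 0.1), (-10000, 4), (1, 300), (1000, 300), (1300, 400),
    (1500, 450), (1800, 1000), (1900, 1650), (1950, 2500), (2000, 6100),
]

def person_years(y0, p0, y1, p1):
    """Person-years lived in [y0, y1], assuming exponential growth."""
    if p0 == p1:
        return p0 * (y1 - y0)
    r = math.log(p1 / p0) / (y1 - y0)
    return (p1 - p0) / r

segments = [(y0, y1, person_years(y0, p0, y1, p1))
            for (y0, p0), (y1, p1) in zip(estimates, estimates[1:])]
total = sum(py for _, _, py in segments)  # ~2e6 million, i.e. ~2e12

# Walk forward until half of all experience has accumulated.
acc = 0.0
for y0, y1, py in segments:
    if acc + py >= total / 2:
        median_year = y0 + (y1 - y0) * (total / 2 - acc) / py
        break
    acc += py
```

Even with this crude input, the total lands near the post’s ~1.65 trillion human-experience-years, and the halfway point falls deep into the Common Era – the funnel shape doesn’t depend on the fine details.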


Early human experience: casts of the skulls of the earliest modern humans found in various continents. Display at the Smithsonian National Museum of Natural History.


If you want to expand on this, you can start doing some Fermi estimates. We as a species have spent…

  • 1,650,000,000,000 total “human experience years”
    • See my dataset linked at the bottom of this post.
  • 7,450,000,000 human years spent having sex
    • Humans spend 0.45% of our lives having sex. 0.45% * [total human experience years] = 7E9 years
  • 52,000,000,000 years spent drinking coffee
    • 500 billion cups of coffee drunk this year x 15 minutes to drink each cup x 100 years* = 5E10 years
      • *Coffee consumption has likely been much higher recently than historically, but it does have a long history. I’m estimating about a hundred years of current consumption for total global consumption ever.
  • 1,000,000,000 years spent in labor
    • 110,000,000,000 humans ever x ½ women x 12 pregnancies* x 15 hours apiece = 1.1E9 years
      • *Infant mortality, yo. H/t Ellie and Shaw for this estimate.
  • 417,000,000 years spent worshipping the Greek gods
    • 1000 years* x 10,000,000 people** x 365 days a year x 1 hour a day*** = 4E8 years

      • *Some googling suggested that people worshipped the Greek/Roman Gods in some capacity from roughly 500 BC to 500 AD.
      • **There were about 10 million people in Ancient Greece. This probably tapered a lot to the beginning and end of that period, but on the other hand worship must have been more widespread than just Greece, and there have been pagans and Hellenists worshiping since then.
      • ***Worshiping generally took about an hour a day on average, figuring in priests and festivals? Sure.
  • 30,000,000 years spent watching Netflix
    • 140,000,000 hours/day* x 365 days x 5 years** = 2.92E7 years
      • * Netflix users watched an average of 140 million hours of content a day in 2017.
      • **Netflix has been streaming for about 10 years, but has gotten much bigger recently.
  • 50,000 years spent drinking coffee in Waffle House
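Here’s the arithmetic above in one place, so you can poke at the inputs yourself. All the figures are the post’s own estimates; the only constant I’ve added is hours per year, and the Netflix input is the ~140 million hours/day Netflix reported for 2017:

```python
# The Fermi arithmetic above, in one place. Inputs are the post's own
# estimates; the only added constant is hours per year.
HOURS_PER_YEAR = 365 * 24  # 8760

total_experience_years = 1.65e12  # total human-experience-years

sex_years     = 0.0045 * total_experience_years         # ~7.4e9 years
labor_years   = 110e9 * 0.5 * 12 * 15 / HOURS_PER_YEAR  # ~1.1e9 years
greek_years   = 1000 * 10e6 * 365 * 1 / HOURS_PER_YEAR  # ~4.2e8 years
netflix_years = 140e6 * 365 * 5 / HOURS_PER_YEAR        # ~2.9e7 years
# (140e6 = Netflix's reported ~140 million hours watched per day, 2017.)
```

Changing any one input by a factor of two doesn’t change the punchline below: each rung of the ladder is roughly an order of magnitude apart.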

So humanity in aggregate has spent about ten times as long worshiping the Greek gods as we’ve spent watching Netflix.

We’ve spent another ten times as long having sex as we’ve spent worshipping the Greek gods.

And we’ve spent ten times as long drinking coffee as we’ve spent having sex.

I’m not sure what this implies. Here are a few things I gathered from this:

1) I used to be annoyed at my high school world history classes for spending so much time on medieval history and after, when there was, you know, all of history before that too. Obviously there are other reasons for this – Eurocentrism, the fact that more recent events have clearer ramifications today – but to some degree this is in fact accurately reflecting how much history there is.

On the other hand, I spent a bunch of time in school learning about the Greek Gods, a tiny chunk of time learning about labor, and virtually no time learning about coffee. This is another disappointing trend in the way history is approached and taught, focusing on a series of major events rather than the day-to-day life of people.

2) The Funnel gets more stark the closer you move to the present day. Look at science. FLI reports that 90% of PhDs that have ever lived are alive right now. That means most of all scientific thought is happening in parallel rather than sequentially.

3) You can’t use the Funnel to reason about everything. For instance, you can’t use it to reason about extended evolutionary processes. Evolution is necessarily cumulative. It works on the unit of generations, not individuals. (You can make some inferences about evolution – for instance, the likelihood of any particular mutation occurring increases when there are more individuals to mutate – but evolution still has the same number of generations to work with, no matter how large each generation is.)

4) This made me think about the phrase “living memory”. The world’s oldest living person is Kane Tanaka, who was born in 1903. 28% of the entirety of human experience has happened since her birth. As mentioned above, 15% has been directly experienced by living people. We have writing and communication and memory, so we have a flawed channel by which to inherit information, and experiences in a sense. But humans as a species can only directly remember as far back as 1903.

Here’s my dataset. The population data comes from the Population Reference Bureau and their report on how many humans ever lived, and from Our World In Data. Let me know if you get anything from this.

Fun fact: The average living human is 30.4 years old.

Wait But Why’s explanation of the real revolution of artificial intelligence is relevant and worth reading. See also Luke Muehlhauser’s conclusions on the Industrial Revolution: Part One and Part Two.

Crossposted to LessWrong.